Virtual Networking Deep Dive: Bridges, Bonds, VXLANs, and More
Once you start running VMs and containers, networking gets more complicated than a flat home network with a single subnet. Your VMs need to talk to each other, to the host, and to the outside world. Containers need isolated networks with controlled access. You might want VMs on different physical hosts to share a Layer 2 domain without a physical switch connection between them.
Linux has a rich set of virtual networking primitives built into the kernel. Understanding when to use a bridge versus macvlan versus VXLAN is the difference between a clean, performant network and a tangled mess that breaks every time you add a new service.

Linux Bridges
A Linux bridge works like a virtual switch. You connect physical interfaces, VM tap devices, and container veth pairs to it, and the bridge forwards Ethernet frames between them. This is the default networking mode for most hypervisors.
When to Use
- VMs that need to be on the same subnet as your physical network
- Multiple VMs that need to communicate with each other and the host
- The default choice for Proxmox, libvirt, and QEMU networking
Configuration
# Create a bridge
sudo ip link add br0 type bridge
sudo ip link set br0 up
# Add a physical interface to the bridge
sudo ip link set enp3s0 master br0
# Move the IP address from the physical interface to the bridge
sudo ip addr del 10.0.0.5/24 dev enp3s0
sudo ip addr add 10.0.0.5/24 dev br0
sudo ip route add default via 10.0.0.1 dev br0
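A quick sanity check after wiring things up: these standard iproute2 commands confirm that the physical NIC is enslaved to the bridge and that the address landed on br0.
# Verify bridge membership and addressing
bridge link show          # enp3s0 should be listed with "master br0"
ip -br addr show br0      # should show 10.0.0.5/24 on br0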
For persistent configuration with systemd-networkd:
# /etc/systemd/network/10-br0.netdev
[NetDev]
Name=br0
Kind=bridge
# /etc/systemd/network/20-br0-bind.network
[Match]
Name=enp3s0
[Network]
Bridge=br0
# /etc/systemd/network/30-br0.network
[Match]
Name=br0
[Network]
Address=10.0.0.5/24
Gateway=10.0.0.1
DNS=10.0.0.1
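After writing these files, restart systemd-networkd and check the bridge state (this assumes systemd-networkd is the active network manager on the host):
sudo systemctl restart systemd-networkd
networkctl status br0     # should eventually report the bridge as configured with 10.0.0.5/24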
Proxmox Bridge Configuration
Proxmox creates its default bridge, vmbr0, during installation and configures it in /etc/network/interfaces:
auto vmbr0
iface vmbr0 inet static
    address 10.0.0.5/24
    gateway 10.0.0.1
    bridge-ports enp3s0
    bridge-stp off
    bridge-fd 0
VMs attached to vmbr0 get their own IP on the 10.0.0.0/24 network, visible to everything on your LAN.
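If you edit /etc/network/interfaces by hand, the changes can be applied and a VM attached from the Proxmox CLI as root; the VM ID 100 below is just a placeholder for one of your own guests.
# Apply interface changes without a reboot (Proxmox ships ifupdown2)
ifreload -a
# Attach VM 100's first NIC to vmbr0 using the virtio model
qm set 100 --net0 virtio,bridge=vmbr0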
Network Bonds (Link Aggregation)
Bonding combines multiple physical NICs into a single logical interface for increased throughput, redundancy, or both.
Bonding Modes
| Mode | Name | Use Case | Switch Support Required |
|---|---|---|---|
| 0 | balance-rr | Round-robin, spreads packets across NICs | Yes (EtherChannel) |
| 1 | active-backup | Failover only, one NIC active at a time | No |
| 2 | balance-xor | Hash-based distribution | Yes |
| 3 | broadcast | Sends on all NICs | Yes |
| 4 | 802.3ad (LACP) | Dynamic link aggregation, best throughput | Yes (LACP) |
| 5 | balance-tlb | Adaptive transmit load balancing | No |
| 6 | balance-alb | Adaptive load balancing (TX + RX) | No |
For homelabs, mode 1 (active-backup) is the safest choice — it provides failover without requiring switch configuration. Mode 4 (LACP) gives the best throughput but requires a managed switch that supports 802.3ad.
Configuration
# Create a bond
sudo ip link add bond0 type bond mode 802.3ad miimon 100
# Add physical interfaces (they must be down before they can be enslaved)
sudo ip link set enp3s0 down
sudo ip link set enp4s0 down
sudo ip link set enp3s0 master bond0
sudo ip link set enp4s0 master bond0
sudo ip link set bond0 up
# Then bridge the bond (for VM use)
sudo ip link add br0 type bridge
sudo ip link set bond0 master br0
sudo ip addr add 10.0.0.5/24 dev br0
sudo ip link set br0 up
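The bonding driver exposes detailed status through procfs, which is the quickest way to confirm LACP negotiation and see which links are active:
# Per-slave status, LACP partner details, link failure counters
cat /proc/net/bonding/bond0
# Bond parameters (mode, miimon) as seen by iproute2
ip -d link show bond0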
Persistent with systemd-networkd:
# /etc/systemd/network/10-bond0.netdev
[NetDev]
Name=bond0
Kind=bond
[Bond]
Mode=802.3ad
MIIMonitorSec=100ms
LACPTransmitRate=fast
# /etc/systemd/network/20-bond0-port1.network
[Match]
Name=enp3s0
[Network]
Bond=bond0
# /etc/systemd/network/20-bond0-port2.network
[Match]
Name=enp4s0
[Network]
Bond=bond0
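For mode 1 (active-backup) only the .netdev changes; a minimal sketch of the variant is below. The LACPTransmitRate option applies only to 802.3ad, so it is dropped.
# /etc/systemd/network/10-bond0.netdev (active-backup variant)
[NetDev]
Name=bond0
Kind=bond
[Bond]
Mode=active-backup
MIIMonitorSec=100ms
To mirror the runtime example above, you would also add a .network file matching bond0 with Bridge=br0, plus the br0 .netdev and .network files from the bridge section.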
Macvlan
Macvlan creates virtual interfaces directly on a physical NIC, each with its own MAC address. From the network's perspective, each macvlan interface looks like a separate physical device plugged into the switch.
When to Use
- Containers that need their own IP on the physical network (without a bridge)
- Better performance than bridging for simple cases (less overhead)
- When you want containers to appear as independent hosts on your LAN
Modes
| Mode | Description |
|---|---|
| bridge | Macvlan interfaces can communicate with each other. Most common. |
| vepa | All traffic goes to the external switch, even between macvlan interfaces. |
| private | Macvlan interfaces are completely isolated from each other. |
| passthru | Direct NIC access for a single macvlan interface. |
Configuration
# Create a macvlan interface
sudo ip link add mymacvlan link enp3s0 type macvlan mode bridge
sudo ip addr add 10.0.0.100/24 dev mymacvlan
sudo ip link set mymacvlan up
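The -d (details) flag on ip link confirms the macvlan mode, and binding ping to the new interface verifies it can reach the LAN:
ip -d link show mymacvlan        # output should include "macvlan mode bridge"
ping -I mymacvlan -c 2 10.0.0.1  # reach the gateway via the macvlan interface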
Docker with Macvlan
# Create a Docker macvlan network
docker network create -d macvlan \
--subnet=10.0.0.0/24 \
--gateway=10.0.0.1 \
-o parent=enp3s0 \
my_macvlan
# Run a container with its own LAN IP
docker run --rm -it --network my_macvlan --ip 10.0.0.200 alpine sh
Caveat: The host cannot communicate with macvlan containers through the parent interface. The macvlan driver deliberately isolates child interfaces from their parent, and most switches will not hairpin the traffic back out the same port, so host-to-container packets are simply lost. To work around this, create a macvlan interface on the host too and route the container addresses through it:
sudo ip link add macvlan-shim link enp3s0 type macvlan mode bridge
sudo ip addr add 10.0.0.99/32 dev macvlan-shim
sudo ip link set macvlan-shim up
sudo ip route add 10.0.0.200/32 dev macvlan-shim
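These ip commands do not survive a reboot. If the host runs systemd-networkd (as in the bridge examples above), a persistent version of the shim might look like the following sketch; the file names and the 10.0.0.99/10.0.0.200 addresses simply mirror the example above.
# /etc/systemd/network/40-macvlan-shim.netdev
[NetDev]
Name=macvlan-shim
Kind=macvlan
[MACVLAN]
Mode=bridge
# In the existing .network file that matches enp3s0, add under [Network]:
# MACVLAN=macvlan-shim
# /etc/systemd/network/41-macvlan-shim.network
[Match]
Name=macvlan-shim
[Network]
Address=10.0.0.99/32
[Route]
Destination=10.0.0.200/32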
Ipvlan
Ipvlan is similar to macvlan but all virtual interfaces share the parent's MAC address. Each gets its own IP, but traffic is differentiated by IP rather than MAC.
When to Use
- Environments where MAC limits matter (some wireless networks, cloud instances)
- When you need many virtual interfaces (macvlan can exhaust switch MAC tables)
- Works where macvlan doesn't (notably on WiFi adapters)
Modes
| Mode | Layer | Description |
|---|---|---|
| L2 | Layer 2 | Similar to macvlan bridge mode, but shared MAC |
| L3 | Layer 3 | Routing-based. No broadcast/ARP between ipvlan interfaces. |
| L3S | Layer 3 | Like L3 but with connection tracking (iptables/nftables work) |
# Create an ipvlan L2 interface on the host
sudo ip link add myipvlan link enp3s0 type ipvlan mode l2
sudo ip addr add 10.0.0.101/24 dev myipvlan   # example address on the LAN subnet
sudo ip link set myipvlan up
# Docker ipvlan network
docker network create -d ipvlan \
--subnet=10.0.0.0/24 \
--gateway=10.0.0.1 \
-o parent=enp3s0 \
my_ipvlan
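Containers attach to an ipvlan network the same way as with macvlan; the .201 address below is just an example on the same subnet.
# Run a container on the ipvlan network with a specific LAN IP
docker run --rm -it --network my_ipvlan --ip 10.0.0.201 alpine sh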
VXLANs (Virtual Extensible LAN)
VXLAN creates Layer 2 overlays on top of a Layer 3 network. In practical terms: two hosts on different subnets (even in different locations) can have VMs that share the same Layer 2 network as if they were on the same switch.
When to Use
- Multi-host clusters where VMs/containers need Layer 2 connectivity
- Proxmox clusters spanning different subnets or sites
- K3s/Kubernetes clusters that need pod networking across hosts
- Extending VLANs beyond your physical switch infrastructure
How It Works
VXLAN encapsulates Ethernet frames in UDP packets (port 4789). Each VXLAN segment has a VNI (VXLAN Network Identifier) — think of it as a VLAN ID but with a 24-bit range (16 million segments instead of 4,094).
Point-to-Point Configuration
# Host A (10.0.0.5)
sudo ip link add vxlan100 type vxlan id 100 \
local 10.0.0.5 remote 10.0.0.6 dstport 4789 dev enp3s0
sudo ip link set vxlan100 up
sudo ip addr add 192.168.100.1/24 dev vxlan100
# Host B (10.0.0.6)
sudo ip link add vxlan100 type vxlan id 100 \
local 10.0.0.6 remote 10.0.0.5 dstport 4789 dev enp3s0
sudo ip link set vxlan100 up
sudo ip addr add 192.168.100.2/24 dev vxlan100
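Once both ends are up, traffic on the 192.168.100.0/24 overlay should flow between the hosts; a quick check from Host A:
# From Host A: ping Host B across the overlay
ping -c 3 192.168.100.2
# Inspect the VXLAN parameters (VNI, remote, dstport)
ip -d link show vxlan100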
Multicast VXLAN (Multiple Hosts)
# On all hosts — join a multicast group instead of specifying a single remote
sudo ip link add vxlan100 type vxlan id 100 \
group 239.1.1.1 dstport 4789 dev enp3s0
sudo ip link set vxlan100 up
# Bridge VXLAN to VMs
sudo ip link add br-vxlan100 type bridge
sudo ip link set vxlan100 master br-vxlan100
sudo ip link set br-vxlan100 up
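With multicast VXLAN, remote endpoints are learned dynamically; the forwarding database shows which VTEPs and MAC addresses the kernel has discovered so far:
# Learned remote VTEPs and MAC addresses for this VXLAN segment
bridge fdb show dev vxlan100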
MTU Considerations
VXLAN adds 50 bytes of overhead. If your physical network MTU is 1500, set VXLAN interfaces to 1450. Better yet, enable jumbo frames (MTU 9000) on your physical network and set VXLAN to 8950:
sudo ip link set enp3s0 mtu 9000
sudo ip link set vxlan100 mtu 8950
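You can confirm that large frames actually fit through the overlay by pinging with fragmentation prohibited; an 8922-byte payload plus 28 bytes of ICMP and IP headers fills the 8950-byte MTU exactly:
# Send a non-fragmentable 8950-byte packet across the overlay
ping -M do -s 8922 -c 3 192.168.100.2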
Choosing the Right Option
| Scenario | Recommended |
|---|---|
| VMs on same host, need LAN access | Linux bridge |
| Containers need own LAN IPs | Macvlan (bridge mode) |
| Many containers, MAC table limits | Ipvlan L2 |
| Multi-host VM/container networking | VXLAN + bridge |
| NIC redundancy, no managed switch | Bond mode 1 (active-backup) |
| Maximum throughput, managed switch | Bond mode 4 (LACP) |
| WiFi-connected containers | Ipvlan (macvlan doesn't work on WiFi) |
| Isolated container networks | Docker default bridge (no changes needed) |
Start with Linux bridges for VMs — it's what Proxmox and libvirt use by default, it's well-understood, and it works. Move to macvlan when you need containers on your LAN without a bridge. Reach for VXLAN only when you need Layer 2 connectivity across multiple hosts. Each layer of abstraction adds complexity, so use the simplest option that meets your requirements.