
MetalLB: Bare-Metal Load Balancer for Kubernetes

Networking · 2026-02-09 · 6 min read · metallb · kubernetes · load-balancer · k8s

When you deploy a service with type: LoadBalancer in a cloud Kubernetes cluster, the cloud provider automatically assigns an external IP from its pool. In a bare-metal or homelab cluster, that service just sits in Pending state forever. There's no cloud integration to provision an IP address, so the request goes nowhere.

MetalLB fills this gap. It's a load balancer implementation for bare-metal Kubernetes that assigns real IP addresses from a pool you define on your local network. Your services get routable IPs that work just like cloud load balancers — except the IPs come from your homelab subnet instead of AWS or GCP.


If you're running k3s or any other Kubernetes distribution on bare metal and you want LoadBalancer services to actually work, MetalLB is the standard solution.

How MetalLB Works

MetalLB runs as a set of pods in your cluster. When a service of type: LoadBalancer is created, MetalLB assigns it an IP from a configured pool and makes that IP reachable on your network.

It operates in two modes:

Layer 2 mode — MetalLB responds to ARP requests for the assigned IP, directing traffic to a single node. That node then distributes traffic to the service's pods via kube-proxy. Simple, no router configuration needed.

BGP mode — MetalLB peers with your network router via BGP and announces the assigned IPs. Traffic is distributed across nodes by the router. More sophisticated, requires a BGP-capable router.

For most homelabs, Layer 2 mode is the right choice. It works with any network setup and requires zero router configuration.

Layer 2 vs BGP: Which to Use

Layer 2 mode:

  - Works on any network with zero router configuration
  - All traffic for a given IP enters the cluster through a single node (ARP-based)
  - Automatic failover to another node if the announcing node goes down

BGP mode:

  - Requires a BGP-capable router (FRR/OpenBGPD on pfSense/OPNsense, VyOS, etc.)
  - The router spreads traffic across multiple nodes for true multi-path load distribution

Unless you're running a BGP-capable router and need true multi-path load distribution, start with Layer 2.

Prerequisites

You need a Kubernetes cluster (k3s, kubeadm, etc.) running on bare metal or VMs. If you're using k3s, disable its built-in ServiceLB first, since it conflicts with MetalLB:

# If k3s is already installed, edit the service
sudo systemctl edit k3s
# Add to [Service] section:
# ExecStart=
# ExecStart=/usr/local/bin/k3s server --disable=servicelb
# Then restart so the change takes effect: sudo systemctl restart k3s

# Or reinstall with it disabled
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--disable=servicelb" sh -
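
If you'd rather keep flags out of the systemd unit, k3s also reads a config file at /etc/rancher/k3s/config.yaml; a minimal sketch of the equivalent setting:

# /etc/rancher/k3s/config.yaml
disable:
  - servicelb

Restart k3s after editing the file.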

You also need a range of IP addresses on your network reserved for MetalLB. These should be in your LAN subnet but outside your DHCP range so there are no conflicts. For example, if your network is 192.168.1.0/24 and DHCP hands out .100-.200, you could reserve .240-.250 for MetalLB.
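
Before committing to a range, it's worth a quick sanity check that nothing is already using those addresses — for example, a ping sweep (assumes nmap is installed):

nmap -sn 192.168.1.240-250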

Installation

The recommended installation method is via Kubernetes manifests:

kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.14.9/config/manifests/metallb-native.yaml

Wait for the MetalLB pods to be ready:

kubectl get pods -n metallb-system -w

You should see a controller pod and a speaker pod on each node. The controller handles IP assignment. The speakers handle network-level announcements (ARP in L2 mode, BGP in BGP mode).
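
The manifest also installs MetalLB's CRDs, which the configuration below depends on. A quick way to confirm they registered:

kubectl get crds | grep metallb.io
# Expect entries like ipaddresspools.metallb.io and l2advertisements.metallb.io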

Installation via Helm

Alternatively, use Helm:

helm repo add metallb https://metallb.github.io/metallb
helm repo update
helm install metallb metallb/metallb --namespace metallb-system --create-namespace

Configuring Layer 2 Mode

MetalLB is configured through Kubernetes custom resources. Create an IP address pool and an L2 advertisement:

# metallb-config.yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: homelab-pool
  namespace: metallb-system
spec:
  addresses:
    - 192.168.1.240-192.168.1.250
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: homelab-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - homelab-pool

Apply it:

kubectl apply -f metallb-config.yaml

That's the entire configuration for Layer 2 mode. MetalLB is now ready to assign IPs from .240-.250 to any LoadBalancer service.
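
To confirm the resources were accepted by the API server:

kubectl get ipaddresspools,l2advertisements -n metallb-system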

Multiple Address Pools

You can define multiple pools for different purposes:

apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: web-services
  namespace: metallb-system
spec:
  addresses:
    - 192.168.1.240-192.168.1.245
---
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: internal-services
  namespace: metallb-system
spec:
  addresses:
    - 192.168.1.246-192.168.1.250

Services can request a specific pool via annotation:

metadata:
  annotations:
    metallb.universe.tf/address-pool: web-services
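
Put together, a complete service pinned to the web-services pool might look like this — the name and selector are illustrative:

apiVersion: v1
kind: Service
metadata:
  name: blog                  # hypothetical service name
  annotations:
    metallb.universe.tf/address-pool: web-services
spec:
  type: LoadBalancer
  selector:
    app: blog                 # hypothetical label
  ports:
    - port: 80
      targetPort: 8080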

Deploying a LoadBalancer Service

Let's test MetalLB with a simple nginx deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:latest
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: LoadBalancer
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 80
Save this as nginx-lb.yaml and apply it:

kubectl apply -f nginx-lb.yaml

Check the service:

kubectl get svc nginx
NAME    TYPE           CLUSTER-IP     EXTERNAL-IP     PORT(S)        AGE
nginx   LoadBalancer   10.43.12.145   192.168.1.240   80:31234/TCP   10s

The EXTERNAL-IP is assigned from your MetalLB pool. From any device on your network:

curl http://192.168.1.240

You should see the nginx welcome page. That IP is now routable on your LAN, just like any other device.
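
To confirm kube-proxy has both replicas behind that IP, list the service's endpoints — you should see two pod IPs:

kubectl get endpoints nginx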

Requesting a Specific IP

If you want a service to always get a specific IP:

apiVersion: v1
kind: Service
metadata:
  name: grafana
spec:
  type: LoadBalancer
  loadBalancerIP: 192.168.1.242
  selector:
    app: grafana
  ports:
    - port: 80
      targetPort: 3000

This is useful for services where you want a predictable IP to set up DNS records.
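
Note that spec.loadBalancerIP is deprecated in upstream Kubernetes. Recent MetalLB releases accept the same request via an annotation instead, so an equivalent sketch:

metadata:
  annotations:
    metallb.universe.tf/loadBalancerIPs: 192.168.1.242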

Configuring BGP Mode

If you have a BGP-capable router and want true multi-node load balancing:

apiVersion: metallb.io/v1beta2
kind: BGPPeer
metadata:
  name: router
  namespace: metallb-system
spec:
  myASN: 64500
  peerASN: 64501
  peerAddress: 192.168.1.1
---
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: bgp-pool
  namespace: metallb-system
spec:
  addresses:
    - 192.168.1.240-192.168.1.250
---
apiVersion: metallb.io/v1beta1
kind: BGPAdvertisement
metadata:
  name: homelab-bgp
  namespace: metallb-system
spec:
  ipAddressPools:
    - bgp-pool

On your router, configure a BGP peer for each cluster node: neighbor address = the node's IP, remote AS = 64500 (the myASN above), local AS = 64501.

In pfSense/OPNsense, this is configured through the FRR or OpenBGPD package. In VyOS:

set protocols bgp 64501 neighbor 192.168.1.101 remote-as 64500
set protocols bgp 64501 neighbor 192.168.1.102 remote-as 64500
set protocols bgp 64501 neighbor 192.168.1.103 remote-as 64500
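
Once both sides are configured, verify the sessions actually established — on VyOS with show ip bgp summary, and on the cluster side via the speaker logs:

# On the router (VyOS)
show ip bgp summary

# On the cluster
kubectl logs -n metallb-system -l app.kubernetes.io/component=speaker | grep -i bgp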

Combining MetalLB with Ingress

MetalLB and Ingress controllers complement each other. A typical pattern:

  1. MetalLB assigns a single IP to your Ingress controller (Traefik, Nginx, etc.)
  2. The Ingress controller routes traffic by hostname to different services
  3. Services behind the Ingress use ClusterIP type (no direct external IP)

# The ingress controller service gets a MetalLB IP
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
spec:
  type: LoadBalancer
  loadBalancerIP: 192.168.1.240
  selector:
    app.kubernetes.io/name: ingress-nginx
  ports:
    - name: http
      port: 80
    - name: https
      port: 443

Then create a wildcard DNS record pointing *.k8s.homelab.local to 192.168.1.240. All your services are accessible via hostname routing through the single MetalLB IP.
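
A minimal Ingress riding on that wildcard record might look like this (hostname, namespace, and backend service are illustrative):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: grafana
  namespace: monitoring       # hypothetical namespace
spec:
  ingressClassName: nginx
  rules:
    - host: grafana.k8s.homelab.local
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: grafana    # ClusterIP service behind the ingress
                port:
                  number: 80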

This is the most efficient use of IPs and the most common production pattern, even in cloud environments.

Troubleshooting

Service Stuck in Pending

# Check MetalLB controller logs
kubectl logs -n metallb-system -l app.kubernetes.io/component=controller

# Common causes:
# - IP pool exhausted (all IPs assigned)
# - No L2Advertisement or BGPAdvertisement configured
# - MetalLB pods not running

IP Not Reachable

# Check which node is announcing the IP (L2 mode)
kubectl get events -n metallb-system

# Verify the speaker pods are running on all nodes
kubectl get pods -n metallb-system -o wide

# Check ARP from another machine
arp -a | grep 192.168.1.240

In Layer 2 mode, traffic for a given IP always enters the cluster through a single node. If that node's network is misconfigured, the IP won't be reachable even though MetalLB assigned it.
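
To see which node currently owns an IP, MetalLB records it as an event on the service itself (a nodeAssigned event in recent versions):

kubectl describe svc nginx
# Look for a "nodeAssigned" event naming the announcing node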

Conflict with k3s ServiceLB

If you see duplicate IPs or erratic behavior and you're running k3s, make sure ServiceLB (Klipper) is disabled. They can't coexist:

kubectl get pods -n kube-system | grep svclb
# If you see svclb pods, ServiceLB is still active

Layer 2 Limitations

Layer 2 mode has one significant limitation: all traffic for a given service IP goes through a single node. MetalLB assigns one node as the "owner" of each IP, and that node handles all incoming traffic. The node then uses kube-proxy to distribute traffic to pods, which may be on other nodes.

This means:

  - Total throughput for a single service IP is capped by that one node's network bandwidth.
  - If the owning node fails, MetalLB moves the IP to another node, but clients may see a brief interruption while ARP caches update.

For a homelab, this is rarely a problem. Your services aren't hitting the bandwidth limits of a single node. If they are, BGP mode or multiple service IPs can distribute the load.

MetalLB is one of those homelab tools that solves a specific problem perfectly. Without it, LoadBalancer services don't work on bare metal. With it, they work exactly as expected, and your homelab Kubernetes cluster behaves like a proper production environment. The installation takes five minutes, the configuration is a handful of YAML lines, and it just works.