
Incus for Your Homelab: System Containers and VMs Without the Drama

Virtualization · 2026-02-14 · 8 min read · incus · containers · virtualization · lxd · lxc

If you've been following the Linux container ecosystem, you might have noticed some upheaval in 2023 when Canonical changed the LXD project's governance model. The Linux Containers community responded by forking LXD into Incus, a fully community-driven project under the same umbrella as LXC and LXCFS. The result is a capable system container and VM manager with a clean governance model and active development.

Incus occupies a unique niche in the homelab virtualization stack. Docker containers share the host kernel and are designed to run single processes. Virtual machines emulate complete hardware and run full operating systems. Incus system containers sit between these two: they run a full init system and look like complete machines, but share the host kernel for near-zero overhead.
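
To make the distinction concrete (the commands are covered in detail below), here's a quick check you can run once Incus is installed; in a Docker container, the same check would show your application process instead:

# PID 1 inside a system container is a real init, not your app
incus launch images:debian/12 demo
incus exec demo -- readlink /proc/1/exe   # typically /usr/lib/systemd/systemd
incus delete demo --force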

[Figure: Incus system containers and VMs]

For homelab use, Incus is particularly compelling. You get the density and speed of containers with the management experience of VMs. Need 10 Ubuntu servers for testing a Kubernetes cluster? Incus spins them up in seconds, each with its own IP address, init system, and package manager. Need a Windows VM for that one app? Incus handles that too.

Installation

Incus is available as a native package on several distributions and via Zabbly's package repository for others.

Debian/Ubuntu (Zabbly Repository)

The Zabbly repository provides up-to-date Incus packages maintained by the project's lead developer:

# Add the Zabbly repository
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.zabbly.com/key.asc | sudo gpg --dearmor -o /etc/apt/keyrings/zabbly.gpg
echo "deb [signed-by=/etc/apt/keyrings/zabbly.gpg] https://pkgs.zabbly.com/incus/stable $(lsb_release -cs) main" | \
  sudo tee /etc/apt/sources.list.d/zabbly-incus-stable.list

# Install
sudo apt update
sudo apt install incus

# Add your user to the incus-admin group
sudo usermod -aG incus-admin $USER
newgrp incus-admin

Fedora

# Available in Fedora repos
sudo dnf install incus incus-client

# Enable and start
sudo systemctl enable --now incus
sudo usermod -aG incus-admin $USER
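
Whichever route you took, a quick sanity check confirms the client can reach the daemon:

# Client and server versions, plus the first lines of server config
incus --version
incus info | head -n 5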

Initial Setup

Run the interactive initialization to configure storage, networking, and clustering:

incus admin init

For a typical homelab setup, here are the recommended answers:

Would you like to use clustering? no
Do you want to configure a new storage pool? yes
Name of the storage pool: default
Storage backend: zfs (or btrfs, or dir)
Create a new ZFS pool? yes
Would you like to use an existing block device? no
Size in GiB of the new loop device: 50
Would you like to connect to a MAAS server? no
Would you like to create a new local network bridge? yes
What should the new bridge be called? incusbr0
What IPv4 address should be used? auto
What IPv6 address should be used? auto
Would you like the server to be available over the network? yes
Address to bind to: [::]
Port to bind to: 8443
Would you like stale cached images to be updated automatically? yes

The storage backend matters. If you're already running on ZFS or Btrfs, use the matching backend for instant snapshots and copy-on-write clones. The dir backend works everywhere, but its snapshots and copies fall back to slow full-file copies.
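
If you'd rather skip the prompts entirely, the initializer also has a non-interactive mode; a minimal sketch:

# Accept sensible defaults (dir storage, local NAT bridge), no questions
incus admin init --minimal

# Or capture one host's answers and replay them on another
incus admin init --dump > preseed.yaml
incus admin init --preseed < preseed.yaml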

Launching Containers

Incus downloads images from the community image server. The first launch takes a moment to download; subsequent launches from cached images are nearly instant.

# List available images
incus image list images:

# Filter by distribution
incus image list images: ubuntu/24.04
incus image list images: debian/12
incus image list images: alpine/3.20

# Launch an Ubuntu 24.04 container
incus launch images:ubuntu/24.04 web-server

# Launch with specific resource limits
incus launch images:debian/12 database \
  --config limits.cpu=2 \
  --config limits.memory=4GiB

# Launch a VM instead of a container
incus launch images:ubuntu/24.04 test-vm --vm \
  --config limits.cpu=4 \
  --config limits.memory=8GiB

# List running instances
incus list

The output of incus list shows you every running instance with its IP address, state, and type:

+------------+---------+-----------------------+------+-----------+-----------+
|    NAME    |  STATE  |         IPV4          | TYPE | SNAPSHOTS | LOCATION  |
+------------+---------+-----------------------+------+-----------+-----------+
| database   | RUNNING | 10.0.0.45 (eth0)      | CT   | 0         | homelab   |
| test-vm    | RUNNING | 10.0.0.46 (enp5s0)    | VM   | 0         | homelab   |
| web-server | RUNNING | 10.0.0.44 (eth0)      | CT   | 0         | homelab   |
+------------+---------+-----------------------+------+-----------+-----------+
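
The output is also easy to script against: column selection and machine-readable formats feed straight into shell pipelines (the last example assumes jq is installed):

# Pick columns: n=name, s=state, 4=IPv4, t=type
incus list -c ns4t

# Machine-readable formats for scripting
incus list -f csv -c n4
incus list -f json | jq -r '.[].name'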

Working with Instances

Incus provides a consistent management interface whether you're working with containers or VMs:

# Open a shell inside a container
incus exec web-server -- bash

# Run a command without entering (quote shell operators so they run
# inside the container, not on the host)
incus exec web-server -- sh -c "apt update && apt upgrade -y"

# Push a file into the container
incus file push ./nginx.conf web-server/etc/nginx/nginx.conf

# Pull a file from the container
incus file pull web-server/var/log/nginx/access.log ./

# View resource usage
incus info web-server

# Stop, start, restart
incus stop web-server
incus start web-server
incus restart web-server

# Delete an instance
incus delete web-server --force
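
Two more day-to-day commands worth knowing, both from the standard Incus CLI:

# Attach to a VM's serial console (detach with ctrl+a q)
incus console test-vm

# Transfer whole directories recursively
incus file push -r ./site web-server/var/www/
incus file pull -r web-server/etc/nginx ./nginx-backup/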

Resource Limits and Profiles

Profiles let you define reusable resource configurations. Instead of setting limits on every container individually, create profiles for different workload types:

# Create a profile for lightweight services
incus profile create lightweight
incus profile set lightweight limits.cpu 1
incus profile set lightweight limits.memory 512MiB

# Create a profile for database servers
incus profile create database
incus profile set database limits.cpu 4
incus profile set database limits.memory 8GiB
# (a disk device needs a mount path inside the instance;
# /srv/data here is an arbitrary choice)
incus profile device add database data disk \
  source=/mnt/fast-storage/incus-data path=/srv/data

# Launch with a profile
incus launch images:ubuntu/24.04 my-cache --profile lightweight
incus launch images:ubuntu/24.04 my-postgres --profile database

# Apply multiple profiles
incus launch images:ubuntu/24.04 my-app --profile default --profile lightweight
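
Profiles are plain YAML underneath, so you can review or tweak them directly; the output below is roughly what incus profile show prints for the profile created above:

incus profile show lightweight
# config:
#   limits.cpu: "1"
#   limits.memory: 512MiB
# description: ""
# devices: {}
# name: lightweight

# Or edit the YAML in $EDITOR
incus profile edit lightweight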

Networking

Incus creates a bridge network by default. For homelab use, you often want containers to have IP addresses on your actual LAN so other devices can reach them directly.

Bridged Networking (LAN IP Addresses)

Create a macvlan or bridged network that gives containers addresses on your physical network:

# Option 1: Use macvlan (simplest, no host bridge needed; caveat:
# the host itself can't reach macvlan containers directly)
incus network create labnet \
  --type=macvlan \
  parent=eth0

# Attach a container to the macvlan network
incus network attach labnet web-server eth0

# Option 2: Bridge to the physical network
# First, create a bridge on the host
# /etc/network/interfaces (Debian) or nmcli (Fedora)
sudo nmcli con add type bridge con-name br0 ifname br0
sudo nmcli con add type bridge-slave con-name br0-port1 ifname eth0 master br0
sudo nmcli con modify br0 ipv4.method auto
sudo nmcli con up br0

# Then attach a container's NIC to the host bridge
incus config device add web-server eth0 nic \
  nictype=bridged parent=br0

# Or use a profile for all containers
incus profile device add default eth0 nic \
  nictype=bridged parent=br0

With bridged networking, containers get DHCP addresses from your router (or static IPs if you configure them inside the container). Other devices on your LAN can reach them directly.
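
If you stay on the default NAT bridge instead, you can still pin per-instance addresses; the subnet below is a made-up example, so substitute whatever incusbr0 was assigned:

# Pin a static DHCP lease on the managed incusbr0 network
# (10.158.0.0/24 is a hypothetical subnet)
incus config device override web-server eth0 ipv4.address=10.158.0.20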

Storage

Incus storage pools determine where instance data lives and what features are available:

# List storage pools
incus storage list

# Create a ZFS storage pool on a dedicated disk
incus storage create fast-pool zfs source=/dev/nvme0n1

# Create a Btrfs pool
incus storage create data-pool btrfs source=/dev/sdb

# Create a directory-based pool (works everywhere, fewer features)
incus storage create simple-pool dir source=/mnt/storage/incus

# Set a pool as default
incus profile device set default root pool=fast-pool

For homelab use, ZFS or Btrfs storage pools give you instant snapshots, efficient clones, and quotas. The dir backend is fine for testing but lacks these features.
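
Pools can also hold custom volumes that exist independently of any instance, handy for data that should survive a container rebuild. A short sketch, reusing the fast-pool and my-postgres names from earlier (the pgdata volume name and mount path are arbitrary):

# Create a custom volume and mount it into an instance
incus storage volume create fast-pool pgdata
incus storage volume attach fast-pool pgdata my-postgres /var/lib/postgresql

# Later: detach it without losing the data
incus storage volume detach fast-pool pgdata my-postgres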

Snapshots and Backups

Snapshots are instant and efficient with ZFS or Btrfs storage backends:

# Create a snapshot
incus snapshot create web-server pre-upgrade

# List snapshots
incus snapshot list web-server

# Restore a snapshot (stops the container first)
incus snapshot restore web-server pre-upgrade

# Delete a snapshot
incus snapshot delete web-server pre-upgrade

# Create an automatic snapshot schedule
incus config set web-server snapshots.schedule "0 2 * * *"
incus config set web-server snapshots.expiry 7d
incus config set web-server snapshots.pattern "auto-%d"

For full backups (exportable, transferable):

# Export an instance to a tarball
incus export web-server /backups/web-server-backup.tar.gz

# Import on another Incus host
incus import /backups/web-server-backup.tar.gz web-server-restored

# Copy an instance to a remote Incus server
incus copy web-server remote-host:web-server-copy
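
That last command assumes remote-host is already registered as a remote. A sketch of the trust setup, with hypothetical names:

# On the remote host: listen on the network and issue a one-time token
incus config set core.https_address :8443
incus config trust add homelab-main    # prints a trust token

# Back on the local host: register the remote using that token
incus remote add remote-host <paste-token-here>
incus remote list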

Migrating from LXD

If you're coming from LXD, migration is straightforward. The lxd-to-incus tool handles the conversion:

# Install the migration tool (shipped separately, usually as incus-tools)
sudo apt install incus incus-tools

# Run the migration (converts everything in place)
sudo lxd-to-incus

# Verify your instances
incus list

The migration tool moves everything over in place: instances, images, profiles, networks, and storage pools all end up under the Incus daemon.

CLI commands are nearly identical. Replace lxc with incus:

# LXD:  lxc launch ubuntu:24.04 mycontainer
# Incus: incus launch images:ubuntu/24.04 mycontainer

# LXD:  lxc exec mycontainer -- bash
# Incus: incus exec mycontainer -- bash

Incus vs Docker vs Proxmox

Each tool serves different homelab needs:

Feature            | Incus (System Containers) | Docker (App Containers) | Proxmox (VMs)
-------------------|---------------------------|-------------------------|--------------------
Overhead           | ~5 MB per container       | ~10 MB per container    | ~512 MB per VM
Boot time          | 1-3 seconds               | Sub-second              | 15-60 seconds
Full OS experience | Yes (init, systemd, ssh)  | No (single process)     | Yes
Different kernels  | No (shares host kernel)   | No (shares host kernel) | Yes
Windows support    | No                        | No (WSL only)           | Yes
GPU passthrough    | Limited                   | Via runtime flags       | Full VFIO
Snapshotting       | Instant (ZFS/Btrfs)       | Via image layers        | Instant (ZFS/Btrfs)
Clustering         | Built-in                  | Swarm/K8s               | Built-in

(The Incus column covers system containers only. An Incus VM behaves much like a Proxmox VM here: it runs its own kernel, and Windows guests are possible, as mentioned earlier.)

Use Incus when you need multiple Linux environments that feel like full machines but with minimal overhead. Testing infrastructure changes, running network services, development environments, and any workload where you'd normally spin up a VM but don't need a separate kernel.

Use Docker when you need to run specific applications with defined images. Web apps, databases, monitoring stacks — anything with a Dockerfile or Compose file.

Use Proxmox when you need full hardware virtualization, Windows VMs, GPU passthrough, or a web-based management interface.

Many homelabs run all three: Proxmox as the hypervisor, with Incus inside a Proxmox VM for lightweight system containers, and Docker on the same or another VM for application workloads. They complement each other well.

Getting Started: A Practical Lab

Here's a quick setup to get a feel for Incus — a three-container mini-lab:

# Create containers
incus launch images:ubuntu/24.04 web
incus launch images:ubuntu/24.04 app
incus launch images:ubuntu/24.04 db

# Install nginx on the web container
incus exec web -- apt update
incus exec web -- apt install -y nginx

# Install PostgreSQL on the db container
incus exec db -- apt update
incus exec db -- apt install -y postgresql
incus exec db -- systemctl enable --now postgresql

# Check IPs
incus list

# Test connectivity between containers
incus exec web -- ping -c 3 $(incus list db -f csv -c 4 | cut -d' ' -f1)
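
# When you're done experimenting, teardown is a single command:
incus delete web app db --force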

You now have three isolated Linux environments, each with its own IP address, running services, consuming minimal resources, and manageable from a single CLI. That's the Incus experience: lightweight virtualization without the weight.