
ZFS vs Btrfs: Choosing the Right Filesystem for Your Homelab Storage

Storage · 2026-02-09 · 15 min read · zfs · btrfs · filesystem · storage · raid

If you're building a homelab storage server, you've probably landed on one of two filesystems: ZFS or Btrfs. Both are copy-on-write (CoW) filesystems with built-in checksumming, snapshots, and RAID-like capabilities. Both are miles ahead of ext4 + mdraid for data integrity. But they have fundamentally different architectures, different strengths, and different failure modes.

This isn't a "which is better" article — it's a "which is right for your specific homelab" guide. I've run both in production homelabs, and the answer depends on what you're storing, what hardware you have, and how much you want to think about your storage layer.


The Basics: What They Share

Both ZFS and Btrfs are copy-on-write filesystems, which means they never overwrite data in place. When you modify a file, the new data is written to a new location, and the metadata pointer is updated atomically. This gives you:

  - Crash consistency without a separate journal: an interrupted write leaves the old data intact instead of a half-written block
  - Cheap, instant snapshots: a snapshot simply keeps the old block pointers around
  - End-to-end checksumming: every block is verified on read, so silent corruption is detected (and repaired when redundancy exists)

But the implementations are very different, and those differences matter a lot in practice.

Architectural Differences

ZFS: The Integrated Storage Stack

ZFS doesn't just manage files — it manages entire disks. ZFS replaces the traditional layers of partition table + RAID controller + volume manager + filesystem with a single integrated stack:

Traditional:           ZFS:
┌─────────────┐       ┌─────────────┐
│  Filesystem │       │     ZFS     │
├─────────────┤       │  (all-in-   │
│   LVM/LV    │       │    one)     │
├─────────────┤       │             │
│   mdraid    │       │             │
├─────────────┤       └──────┬──────┘
│    Disks    │              │
└─────────────┘       ┌──────┴──────┐
                      │    Disks    │
                      └─────────────┘

ZFS pools (zpools) contain one or more vdevs (virtual devices), and each vdev can be a single disk, a mirror, or a RAIDZ group. Datasets (filesystems or zvols) live within the pool and share all available space.
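
A minimal sketch of that layering, with placeholder disk names rather than real devices:

# One pool built from a single mirror vdev
zpool create tank mirror \
  /dev/disk/by-id/ata-diskA /dev/disk/by-id/ata-diskB

# Datasets have no fixed size; they all draw from the pool's free space
zfs create tank/media
zfs create tank/documents
zfs list -o name,used,avail,mountpoint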

Btrfs: The Flexible Filesystem

Btrfs operates as a filesystem layer. It can manage multiple devices directly (its own RAID), but it's still fundamentally a filesystem that the kernel mounts. It integrates volume management into the filesystem but doesn't take over the entire block device stack the way ZFS does.

Btrfs:
┌─────────────────┐
│      Btrfs      │
│  (filesystem +  │
│  volume mgmt)   │
└────────┬────────┘
         │
┌────────┴────────┐
│     Disks       │
│ (or partitions) │
└─────────────────┘

This matters because Btrfs can be placed on top of other block devices (LVM, mdraid, dm-crypt) if you want, while ZFS generally wants raw disks.
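
As a quick illustration of that stacking, here's a hedged sketch of Btrfs on top of a LUKS container (device and mapper names are placeholders):

# Encrypt the block device first, then create Btrfs on the mapping
cryptsetup luksFormat /dev/sdb
cryptsetup open /dev/sdb cryptstorage
mkfs.btrfs -L storage /dev/mapper/cryptstorage
mount /dev/mapper/cryptstorage /mnt/storage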

RAID Configurations

This is where the differences get very practical.

ZFS RAID Options

| Configuration | Min Disks | Parity | Usable Space | Read Speed | Write Speed |
|---|---|---|---|---|---|
| Mirror | 2 | n-1 disks | 50% (2 disks) | Excellent | Good |
| RAIDZ1 | 3 | 1 disk | (n-1)/n | Good | Fair |
| RAIDZ2 | 4 | 2 disks | (n-2)/n | Good | Fair |
| RAIDZ3 | 5 | 3 disks | (n-3)/n | Good | Fair |
| Striped Mirrors | 4 | 1 per mirror | 50% | Excellent | Excellent |

Creating a ZFS pool with RAIDZ2 (recommended for homelab):

# Create a RAIDZ2 pool with 4 disks
zpool create -o ashift=12 tank raidz2 \
  /dev/disk/by-id/ata-WDC_WD40EFAX-001 \
  /dev/disk/by-id/ata-WDC_WD40EFAX-002 \
  /dev/disk/by-id/ata-WDC_WD40EFAX-003 \
  /dev/disk/by-id/ata-WDC_WD40EFAX-004

# Check the pool status
zpool status tank

Important ZFS limitation: historically you could not add a single disk to an existing RAIDZ vdev, so a 4-disk RAIDZ2 stayed a 4-disk RAIDZ2 forever. RAIDZ expansion finally landed in OpenZFS 2.3, but it's still new and carries risk. The traditional (and safer) path is to add another vdev to the pool, and that new vdev should have the same redundancy level.

# Add another RAIDZ2 vdev to an existing pool (this works)
zpool add tank raidz2 \
  /dev/disk/by-id/ata-WDC_WD40EFAX-005 \
  /dev/disk/by-id/ata-WDC_WD40EFAX-006 \
  /dev/disk/by-id/ata-WDC_WD40EFAX-007 \
  /dev/disk/by-id/ata-WDC_WD40EFAX-008

# RAIDZ expansion (OpenZFS 2.3+ — still new, test first)
zpool attach tank raidz2-0 /dev/disk/by-id/ata-WDC_WD40EFAX-005

Btrfs RAID Options

| Configuration | Min Disks | Parity | Usable Space | Status |
|---|---|---|---|---|
| RAID1 | 2 | 1 copy | 50% | Stable |
| RAID1C3 | 3 | 2 copies | 33% | Stable |
| RAID1C4 | 4 | 3 copies | 25% | Stable |
| RAID10 | 4 | 1 per mirror | 50% | Stable |
| RAID0 | 2 | None | 100% | Stable |
| RAID5 | 3 | 1 disk | (n-1)/n | UNSTABLE — DO NOT USE |
| RAID6 | 4 | 2 disks | (n-2)/n | UNSTABLE — DO NOT USE |

Creating a Btrfs RAID1 array:

# Create a Btrfs RAID1 filesystem across 2 disks
mkfs.btrfs -d raid1 -m raid1 \
  /dev/sdb /dev/sdc

# Mount it
mount /dev/sdb /mnt/storage

# Check the filesystem
btrfs filesystem show /mnt/storage

Critical Btrfs warning: Btrfs RAID5 and RAID6 have a known write-hole bug that can cause data loss on power failure. This has been known since 2014 and remains unfixed. Do not use Btrfs RAID5/6 for any data you care about. This is the single biggest limitation of Btrfs for homelab use.

Adding a device to an existing Btrfs filesystem is straightforward:

# Add a new device
btrfs device add /dev/sdd /mnt/storage

# Rebalance data across all devices
btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/storage

# Check progress
btrfs balance status /mnt/storage

This flexibility is a genuine advantage of Btrfs — you can grow and reshape your storage without rebuilding the array.
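
A hedged sketch of what that reshaping can look like in practice (device paths are placeholders):

# Remove a device; Btrfs migrates its data to the remaining disks first
btrfs device remove /dev/sdb /mnt/storage

# Convert the whole filesystem to a different profile, e.g. RAID1 to RAID10
btrfs balance start -dconvert=raid10 -mconvert=raid10 /mnt/storage
btrfs balance status /mnt/storage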

Snapshots

Both filesystems support instant snapshots, but they work differently.

ZFS Snapshots

# Create a snapshot
zfs snapshot tank/data@2026-02-09-daily

# List snapshots
zfs list -t snapshot

# Roll back to a snapshot (destroys all changes since)
zfs rollback tank/data@2026-02-09-daily

# Clone a snapshot into a new writable dataset
zfs clone tank/data@2026-02-09-daily tank/data-recovered

# Destroy a snapshot
zfs destroy tank/data@2026-02-09-daily

ZFS snapshots are managed at the dataset level and are named with the @ separator. You can have hierarchical datasets with independent snapshot schedules:

# Create datasets for different workloads
zfs create tank/vms
zfs create tank/media
zfs create tank/backups

# Different snapshot policies per dataset
# Using zfs-auto-snapshot or sanoid for automation

Automated snapshots with sanoid (the standard tool):

# /etc/sanoid/sanoid.conf
[tank/data]
    use_template = production
    recursive = yes

[template_production]
    frequently = 0
    hourly = 24
    daily = 30
    monthly = 12
    yearly = 2
    autosnap = yes
    autoprune = yes

# Install and enable sanoid
sudo apt install sanoid
sudo systemctl enable --now sanoid.timer

# Sanoid runs on a timer and manages snapshot creation/pruning

Btrfs Snapshots

# Btrfs uses subvolumes, which are like lightweight directories
# that can be independently snapshotted
btrfs subvolume create /mnt/storage/data
btrfs subvolume create /mnt/storage/vms

# Create a read-only snapshot
btrfs subvolume snapshot -r /mnt/storage/data \
  /mnt/storage/.snapshots/data-2026-02-09

# Create a writable snapshot (for testing changes)
btrfs subvolume snapshot /mnt/storage/data \
  /mnt/storage/data-test

# Delete a snapshot
btrfs subvolume delete /mnt/storage/.snapshots/data-2026-02-09

Automated snapshots with snapper (the standard tool for Btrfs):

# Install snapper
sudo apt install snapper

# Create a snapper config for a subvolume
snapper -c data create-config /mnt/storage/data

# Edit the config
sudo vim /etc/snapper/configs/data

# /etc/snapper/configs/data
SUBVOLUME="/mnt/storage/data"
TIMELINE_CREATE="yes"
TIMELINE_CLEANUP="yes"
TIMELINE_LIMIT_HOURLY="24"
TIMELINE_LIMIT_DAILY="30"
TIMELINE_LIMIT_WEEKLY="8"
TIMELINE_LIMIT_MONTHLY="12"
TIMELINE_LIMIT_YEARLY="2"

# Enable the timers
sudo systemctl enable --now snapper-timeline.timer
sudo systemctl enable --now snapper-cleanup.timer

Snapshot Performance Comparison

| Aspect | ZFS | Btrfs |
|---|---|---|
| Creation speed | Instant | Instant |
| Space overhead | None initially | None initially |
| Impact on write perf | Minimal | Minimal |
| Max snapshots (practical) | Hundreds | Hundreds |
| Snapshot of snapshot | No (use bookmarks) | Yes (nested snapshots) |
| Send/receive | Excellent | Good |
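
The "use bookmarks" note refers to ZFS bookmarks: you can destroy a snapshot but keep a bookmark of it as the base for later incremental sends. A small sketch reusing the dataset names from above:

# Create a bookmark from a snapshot, then reclaim the snapshot's space
zfs bookmark tank/data@2026-02-09-daily tank/data#2026-02-09-daily
zfs destroy tank/data@2026-02-09-daily

# The bookmark can still anchor an incremental send to a newer snapshot
zfs send -i tank/data#2026-02-09-daily tank/data@2026-02-10-daily | \
  ssh backup-server zfs recv backup/data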

Send/Receive for Replication

Both filesystems support sending snapshots to another machine, which is incredibly useful for backups and disaster recovery.

ZFS Send/Receive

# Full send (initial replication)
zfs send tank/data@2026-02-09 | ssh backup-server zfs recv backup/data

# Incremental send (only changes since last snapshot)
zfs send -i tank/data@2026-02-08 tank/data@2026-02-09 | \
  ssh backup-server zfs recv backup/data

# Raw (-w) compressed (-c) send: blocks cross the wire as stored on disk, encryption included
zfs send -w -c tank/data@2026-02-09 | \
  ssh backup-server zfs recv backup/data

# Resume a failed transfer
zfs send -t <resume_token> | ssh backup-server zfs recv backup/data

ZFS send/receive is battle-tested and supports:

  - Incremental sends between any two snapshots
  - Resumable transfers via resume tokens when a connection drops
  - Raw sends of encrypted datasets, so the backup target never needs the keys
  - Sending compressed blocks as-is, skipping the decompress/recompress step
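
One detail worth spelling out: the resume token in the last example lives on the receiving side, and it only exists if the receive was started with -s. A minimal sketch:

# Start the receive with -s so an interrupted transfer can be resumed
zfs send tank/data@2026-02-09 | ssh backup-server zfs recv -s backup/data

# After a failure, read the token from the partially received dataset
ssh backup-server zfs get -H -o value receive_resume_token backup/data

# Pass that token to zfs send -t as shown above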

syncoid (from the sanoid project) automates this:

# Replicate a dataset to a remote server
syncoid tank/data backup-server:backup/data

# Replicate recursively
syncoid -r tank backup-server:backup

# With a bandwidth limit on the sending side
syncoid --source-bwlimit=50M tank/data backup-server:backup/data

Btrfs Send/Receive

# Full send (must be a read-only snapshot)
btrfs send /mnt/storage/.snapshots/data-2026-02-09 | \
  ssh backup-server btrfs receive /mnt/backup/

# Incremental send
btrfs send -p /mnt/storage/.snapshots/data-2026-02-08 \
  /mnt/storage/.snapshots/data-2026-02-09 | \
  ssh backup-server btrfs receive /mnt/backup/

# With compression
btrfs send /mnt/storage/.snapshots/data-2026-02-09 | \
  zstd | ssh backup-server "zstd -d | btrfs receive /mnt/backup/"

Btrfs send/receive works but lacks ZFS's resume support and raw-encrypted sends. For large datasets over unreliable links, ZFS has the edge here.
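
A common workaround for flaky links is to stage the stream in a file and move it with a resumable copy tool; this is a sketch of that pattern (paths are placeholders), not a feature of btrfs send itself:

# Dump the send stream to a compressed file
btrfs send /mnt/storage/.snapshots/data-2026-02-09 | zstd > /tmp/data-2026-02-09.btrfs.zst

# Copy it with rsync, which can resume partial transfers
rsync --partial --progress /tmp/data-2026-02-09.btrfs.zst backup-server:/tmp/

# Receive it on the other end
ssh backup-server "zstd -d < /tmp/data-2026-02-09.btrfs.zst | btrfs receive /mnt/backup/"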

Memory Requirements

This is often the deciding factor for homelab builders on a budget.

ZFS ARC (Adaptive Replacement Cache)

ZFS uses a significant amount of RAM for its Adaptive Replacement Cache (ARC). The ARC caches recently and frequently accessed data in RAM, which dramatically improves read performance.

Default behavior: ZFS will use up to 50% of system RAM for ARC on Linux (configurable).

# Check current ARC usage
arc_summary  # or cat /proc/spl/kstat/zfs/arcstats

# Limit ARC to 8 GB
echo "options zfs zfs_arc_max=8589934592" | \
  sudo tee /etc/modprobe.d/zfs.conf
sudo update-initramfs -u

# Or set it live (doesn't survive reboot)
echo 8589934592 | sudo tee /sys/module/zfs/parameters/zfs_arc_max

Practical RAM guidelines for ZFS:

| Use Case | Minimum RAM | Recommended RAM |
|---|---|---|
| Basic NAS (no dedup) | 4 GB | 8 GB |
| NAS with many snapshots | 8 GB | 16 GB |
| VM storage (zvols) | 8 GB | 16-32 GB |
| With L2ARC (SSD cache) | +1-2 GB per TB of L2ARC | |
| With deduplication | 5 GB per TB of storage | Don't — just don't |

Important: ZFS deduplication requires enormous amounts of RAM (the dedup table must fit in memory). For homelab use, just enable compression instead — it's free and often more effective.
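
If you're still curious whether dedup would pay off for your data, zdb can simulate it against an existing pool without enabling anything (it's read-heavy, so run it during quiet hours):

# Simulate dedup and print the dedup-table histogram and projected ratio
zdb -S tank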

Btrfs Memory Usage

Btrfs has much more modest memory requirements. It uses the standard Linux page cache for caching, which the kernel manages automatically.

| Use Case | Minimum RAM | Recommended RAM |
|---|---|---|
| Basic NAS | 2 GB | 4 GB |
| With many snapshots | 4 GB | 8 GB |
| With RAID1 | 2 GB | 4 GB |
| With heavy quotas | 4 GB | 8 GB |

Btrfs is the clear winner here for memory-constrained systems. If you're building a NAS from a Raspberry Pi 5 or an old laptop, Btrfs will work comfortably with 4 GB of RAM where ZFS would be constantly fighting for memory.

Compression

Both support transparent compression, and both do it well.

ZFS Compression

# Enable LZ4 compression (recommended default)
zfs set compression=lz4 tank/data

# Enable zstd compression (better ratio, more CPU)
zfs set compression=zstd tank/backups

# Check compression ratio
zfs get compressratio tank/data

# Different compression per dataset
zfs set compression=lz4 tank/vms          # Fast for VM images
zfs set compression=zstd-3 tank/logs      # Better ratio for text logs
zfs set compression=off tank/media        # Already-compressed media

Btrfs Compression

# Mount with compression
mount -o compress=zstd:3 /dev/sdb /mnt/storage

# Or set in fstab
echo 'UUID=xxx /mnt/storage btrfs compress=zstd:3,space_cache=v2 0 0' >> /etc/fstab

# Per-directory compression via properties
btrfs property set /mnt/storage/logs compression zstd
btrfs property set /mnt/storage/media compression ""  # Disable

# Check compression stats
compsize /mnt/storage/data  # Needs btrfs-compsize package

Compression Performance Comparison

| Algorithm | Compression Ratio | Speed (compress) | Speed (decompress) | Notes |
|---|---|---|---|---|
| LZ4 (ZFS/Btrfs) | 2-3x (text), ~1x (media) | ~700 MB/s | ~3000 MB/s | Almost free, always enable |
| ZSTD-1 | 3-4x (text) | ~500 MB/s | ~1200 MB/s | Good balance |
| ZSTD-3 | 3.5-4.5x (text) | ~300 MB/s | ~1200 MB/s | Better ratio, still fast |
| ZSTD-9 | 4-5x (text) | ~60 MB/s | ~1200 MB/s | Archival use |

Both implementations are solid. ZFS has finer-grained per-dataset control, while Btrfs allows per-file/directory properties. In practice, just set LZ4 or ZSTD-3 globally and override for specific datasets that contain already-compressed media.

Scrubbing and Data Healing

Both filesystems can detect and repair data corruption through scrubbing.

ZFS Scrub

# Start a scrub
zpool scrub tank

# Check scrub progress
zpool status tank

# Schedule regular scrubs (systemd timer or cron)
# Most distros include a zfs-scrub timer
sudo systemctl enable --now zfs-scrub-monthly@tank.timer   # or zfs-scrub-weekly@tank.timer

When ZFS finds a corrupt block during a scrub, it automatically repairs it from the redundant copy (mirror or parity). If there's no redundancy, it reports the error but can't fix it.

# Check for errors
zpool status -v tank

# Example output with errors:
#   pool: tank
#  state: DEGRADED
# status: One or more devices has experienced an unrecoverable error.
# scan: scrub repaired 4K in 00:02:15 with 0 errors

Btrfs Scrub

# Start a scrub
btrfs scrub start /mnt/storage

# Check progress
btrfs scrub status /mnt/storage

# Schedule regular scrubs
# Some distros ship a btrfs-scrub@.timer template (instance = escaped mount path)
sudo systemctl enable --now btrfs-scrub@mnt-storage.timer

Btrfs scrub behavior with RAID1: when a checksum mismatch is found, the bad copy is rewritten from the good copy on the other device. On a single device (or any profile without a redundant copy), errors are reported but cannot be repaired.

# View scrub results
btrfs scrub status /mnt/storage

# Example output:
# Scrub started:    Sun Feb  9 02:00:01 2026
# Status:           finished
# Duration:         0:15:23
# Total to scrub:   1.82TiB
# Rate:             2.03GiB/s
# Error summary:    csum=0 super=0 verify=0 read=0

Performance Benchmarks

Real-world performance varies enormously based on workload, but here are general patterns from homelab-scale hardware (4-8 SATA drives, 32 GB RAM):

Sequential Read/Write (fio, 1 MB blocks)

| Configuration | Seq Read | Seq Write | Notes |
|---|---|---|---|
| ZFS Mirror (2 disks) | ~350 MB/s | ~180 MB/s | Reads from both disks |
| ZFS RAIDZ2 (4 disks) | ~400 MB/s | ~150 MB/s | Parity calculation overhead |
| Btrfs RAID1 (2 disks) | ~200 MB/s | ~180 MB/s | Reads from one disk by default |
| Btrfs RAID10 (4 disks) | ~350 MB/s | ~300 MB/s | Best Btrfs write perf |

Random 4K IOPS (fio, queue depth 32)

| Configuration | Random Read | Random Write | Notes |
|---|---|---|---|
| ZFS Mirror (SSD) | ~80K | ~40K | ARC caching helps enormously |
| ZFS RAIDZ2 (HDD) | ~400 | ~200 | RAIDZ random write penalty |
| Btrfs RAID1 (SSD) | ~60K | ~45K | Less overhead, no ARC |
| Btrfs RAID1 (HDD) | ~300 | ~250 | Simpler write path |

Key takeaways:

  - ZFS mirrors plus ARC caching give the best read performance, especially for repeated reads
  - RAIDZ carries a real random-write penalty, so prefer mirrors (or SSDs) for VM storage
  - Btrfs RAID1 reads from a single disk per process, so sequential reads don't scale the way ZFS mirror reads do
  - Btrfs RAID10 is the best-performing Btrfs layout for writes

How to Benchmark Your Own Setup

# Install fio
sudo apt install fio

# Sequential write test
fio --name=seq-write --ioengine=libaio --direct=1 --bs=1M \
  --size=4G --numjobs=1 --runtime=60 --time_based \
  --rw=write --filename=/mnt/storage/fio-test

# Random read test
fio --name=rand-read --ioengine=libaio --direct=1 --bs=4k \
  --size=4G --numjobs=4 --runtime=60 --time_based \
  --rw=randread --iodepth=32 --filename=/mnt/storage/fio-test

# Mixed read/write (simulates VM workload)
fio --name=mixed --ioengine=libaio --direct=1 --bs=4k \
  --size=4G --numjobs=4 --runtime=60 --time_based \
  --rw=randrw --rwmixread=70 --iodepth=32 \
  --filename=/mnt/storage/fio-test

# Clean up
rm /mnt/storage/fio-test

Platform Support

| Platform | ZFS | Btrfs |
|---|---|---|
| Linux | Via OpenZFS (DKMS or built-in on Ubuntu) | In-kernel, mainline since 2009 |
| FreeBSD | Native, first-class | Not supported |
| Proxmox | Built-in, well-supported | Built-in, well-supported |
| TrueNAS CORE | Native (FreeBSD) | Not available |
| TrueNAS SCALE | Via OpenZFS (Linux) | Not the default, but possible |
| Ubuntu | Built-in since 20.04 | In-kernel |
| Fedora | DKMS (OpenZFS repo) | Default filesystem |
| Debian | contrib repo (licensing) | In-kernel |

The licensing issue: ZFS is licensed under CDDL, which is incompatible with the Linux kernel's GPL. This means ZFS can't be distributed as part of the kernel — it's always an out-of-tree module. Ubuntu ships it anyway (their legal interpretation differs), but Fedora and Debian require you to install it separately.

This matters for homelab because:

  - A kernel update can leave the ZFS module temporarily broken until DKMS rebuilds it (or your distro ships a matching build)
  - Installation steps differ per distro, as the commands below show
  - Btrfs needs nothing beyond btrfs-progs, since the kernel side is always present

# Installing ZFS on Ubuntu (easiest)
sudo apt install zfsutils-linux

# Installing ZFS on Fedora (requires the OpenZFS repository)
sudo dnf install https://zfsonlinux.org/fedora/zfs-release-2-5$(rpm --eval "%{dist}").noarch.rpm
sudo dnf install kernel-devel zfs

# Installing ZFS on Debian
sudo apt install linux-headers-amd64
sudo apt install -t bookworm-backports zfsutils-linux

Btrfs has no licensing issues — it's GPL and part of the mainline kernel. It just works on every Linux distribution.
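
A quick sanity check on any distro, assuming nothing beyond a stock kernel and the btrfs-progs package:

# The kernel side is built in or loadable on stock kernels
grep -w btrfs /proc/filesystems || sudo modprobe btrfs

# Userspace tools come from the distro's btrfs-progs package
sudo apt install btrfs-progs   # Debian/Ubuntu; use dnf/zypper/pacman elsewhere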

When to Choose ZFS

Choose ZFS when:

  1. You're building a NAS and data integrity is paramount. ZFS's track record for data integrity spans decades. If you're storing irreplaceable photos, documents, or backups, ZFS gives you the most confidence.

  2. You have plenty of RAM (16+ GB). ZFS's ARC makes a massive difference for read performance. If you have the RAM to spare, ZFS will outperform Btrfs on repeated reads.

  3. You need replication to a remote backup server. ZFS send/receive with syncoid is the gold standard for incremental backup replication.

  4. You're running Proxmox or TrueNAS. Both have excellent ZFS integration with GUI management.

  5. You need RAIDZ2 or RAIDZ3. If you have 4+ disks and want parity-based redundancy, ZFS is the only option (Btrfs RAID5/6 is broken).

# A solid ZFS NAS setup
zpool create -o ashift=12 \
  -O compression=lz4 \
  -O atime=off \
  -O xattr=sa \
  -O dnodesize=auto \
  tank raidz2 \
  /dev/disk/by-id/ata-disk1 \
  /dev/disk/by-id/ata-disk2 \
  /dev/disk/by-id/ata-disk3 \
  /dev/disk/by-id/ata-disk4

# Create datasets
zfs create tank/media
zfs create tank/documents
zfs create tank/backups
zfs create -o recordsize=64K tank/vms

# Set up automated snapshots
# (install sanoid first)
systemctl enable --now sanoid.timer

When to Choose Btrfs

Choose Btrfs when:

  1. You're on a memory-constrained system. Btrfs works great with 4 GB of RAM. A Raspberry Pi 5 with USB drives and Btrfs is a perfectly reasonable small NAS.

  2. You need flexible storage growth. Adding and removing disks, converting between RAID levels, and rebalancing data online is much easier with Btrfs.

  3. You're using Fedora or openSUSE as your base. Btrfs is the default filesystem, well-tested, and tightly integrated. Snapshots with snapper + grub-btrfs give you rollback-to-boot capability (see the sketch after this list).

  4. Your redundancy needs are covered by RAID1 or RAID10. If mirrors work for you, Btrfs RAID1 is solid and well-tested.

  5. You want simpler kernel integration. No DKMS, no out-of-tree modules, no licensing headaches. It just works.
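
On the grub-btrfs point from item 3, the wiring looks roughly like this; package availability and unit names vary by distro, so treat it as an outline rather than exact commands:

# grub-btrfsd watches for new snapshots and rebuilds the GRUB snapshot submenu
sudo systemctl enable --now grub-btrfsd

# Regenerate the GRUB config once so the snapshots submenu appears
sudo grub-mkconfig -o /boot/grub/grub.cfg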

# A solid Btrfs NAS setup (RAID1 with 2 disks)
mkfs.btrfs -d raid1 -m raid1 -L storage \
  /dev/sdb /dev/sdc

# Mount with recommended options
mount -o compress=zstd:3,space_cache=v2,autodefrag \
  /dev/sdb /mnt/storage

# Create subvolumes
btrfs subvolume create /mnt/storage/@data
btrfs subvolume create /mnt/storage/@media
btrfs subvolume create /mnt/storage/@backups

# fstab entry
echo 'LABEL=storage /mnt/storage btrfs compress=zstd:3,space_cache=v2,autodefrag,subvol=@data 0 0' \
  >> /etc/fstab

Workload-Specific Recommendations

NAS / File Storage

| Factor | Winner | Why |
|---|---|---|
| Data integrity confidence | ZFS | Longer track record, more battle-tested |
| RAID5/6 equivalent | ZFS | Btrfs RAID5/6 is broken |
| Flexible expansion | Btrfs | Online device add/remove |
| Low-memory NAS | Btrfs | Works well with 4 GB RAM |
| Offsite replication | ZFS | syncoid is unbeatable |

VM Storage (Proxmox, libvirt)

| Factor | Winner | Why |
|---|---|---|
| Random I/O performance | ZFS (mirrors) | ARC + mirror reads |
| Thin provisioning | Tie | Both support it |
| Snapshot for backup | ZFS | Better tooling (sanoid) |
| Disk space efficiency | Btrfs | Reflinks, better dedup story |
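
For the thin-provisioning row, the rough equivalents look like this (dataset and image names are made up for illustration, and the Btrfs side assumes qemu-img is installed):

# ZFS: a sparse zvol (-s) only allocates blocks as the guest writes them
zfs create -s -V 100G tank/vms/vm-disk1

# Btrfs: disk images are sparse files, and reflink copies are instant clones
qemu-img create -f qcow2 /mnt/storage/vms/vm1.qcow2 100G
cp --reflink=always /mnt/storage/vms/vm1.qcow2 /mnt/storage/vms/vm1-clone.qcow2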

Container Storage (Docker, Podman)

| Factor | Winner | Why |
|---|---|---|
| Docker storage driver | Btrfs | Native btrfs driver, reflinks |
| Overlay support | ZFS | zfs storage driver works well |
| Layer deduplication | Btrfs | Reflinks handle this naturally |
| Simplicity | Btrfs | Less configuration needed |
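
Using the native btrfs driver mostly comes down to putting Docker's data root on a Btrfs filesystem and selecting the driver in daemon.json; a sketch with a hypothetical data-root path:

# /etc/docker/daemon.json (assumes /mnt/storage/docker sits on a Btrfs filesystem)
{
  "storage-driver": "btrfs",
  "data-root": "/mnt/storage/docker"
}

# Restart Docker so the new driver takes effect
sudo systemctl restart docker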

Migration Considerations

If you're already on one filesystem and thinking about switching, here's what's involved.

From ext4/XFS to ZFS or Btrfs

You can't convert to ZFS in place, and while btrfs-convert can turn an ext4 filesystem into Btrfs in place, it's risky enough that a copy-based migration is the safer path either way. You need to:

  1. Back up all data
  2. Destroy the existing filesystem
  3. Create the new ZFS pool or Btrfs filesystem
  4. Restore data

# Backup approach using rsync
rsync -aHAXv --progress /mnt/old-storage/ /mnt/backup-drive/

# Create new filesystem, then restore
rsync -aHAXv --progress /mnt/backup-drive/ /mnt/new-storage/

From Btrfs to ZFS (or vice versa)

Same process — there's no in-place conversion between the two. Plan for this to take a while if you have terabytes of data.

Dual-Filesystem Setup

Many homelabs actually run both. This is a perfectly valid approach:

# Example: Proxmox host with both
# Boot drive: Btrfs (system snapshots)
# VM storage: ZFS mirror (performance)
# Backup storage: ZFS RAIDZ2 (capacity + redundancy)

Quick Decision Matrix

| Question | ZFS | Btrfs |
|---|---|---|
| Do I have 16+ GB RAM? | Yes -> ZFS shines | Either works |
| Do I have < 8 GB RAM? | Manageable with tuning | Btrfs is easier |
| Do I need RAID5/6? | RAIDZ2 is excellent | DO NOT USE Btrfs RAID5/6 |
| Do I need mirrors only? | Either works | Either works |
| Am I on FreeBSD/TrueNAS CORE? | Only option | Not available |
| Am I on Fedora/openSUSE? | Extra setup (DKMS) | Native, just works |
| Do I need offsite replication? | ZFS send is better | Btrfs send works |
| Will I expand disks over time? | Less flexible | Very flexible |

Final Thoughts

There's no universally "better" choice. I run ZFS on my main NAS because I have 64 GB of RAM, need RAIDZ2 across 6 drives, and replicate snapshots to an offsite server with syncoid. But I run Btrfs on my Docker host because it's a 16 GB machine where containers benefit from reflinks and I don't need parity RAID.

The worst choice is ext4 + mdraid without checksumming. Either ZFS or Btrfs is a massive upgrade over that. Pick the one that fits your hardware, your RAID requirements, and your comfort level, and you'll be well-served.

Whatever you choose, remember: RAID is not a backup. Snapshots are not a backup. Both ZFS and Btrfs make it easy to replicate data to another machine — actually do it. Your future self will thank you.