ext4 vs XFS vs Btrfs vs ZFS: Choosing a Linux Filesystem for Your Homelab
Filesystem choice is one of those decisions that's easy to make and painful to change. Unlike swapping a Docker container or switching a reverse proxy, changing your filesystem means backing up everything, reformatting, and restoring. Get it right the first time, and you'll never think about it again. Get it wrong, and you'll be planning a migration weekend.
The good news: there's no universally wrong choice among the major Linux filesystems. Each one has genuine strengths for different homelab use cases. This guide covers the practical differences that actually matter — not theoretical benchmarks on enterprise hardware you don't own, but real-world behavior on the kind of hardware homelabbers actually use.

Quick Comparison
| Feature | ext4 | XFS | Btrfs | ZFS |
|---|---|---|---|---|
| Max volume size | 1 EiB | 8 EiB | 16 EiB | 256 ZiB |
| Max file size | 16 TiB | 8 EiB | 16 EiB | 16 EiB |
| Copy-on-write | No | No (reflink yes) | Yes | Yes |
| Snapshots | No | No | Yes (subvolumes) | Yes (datasets) |
| Checksumming | Metadata only | Metadata only | Data + metadata | Data + metadata |
| Compression | No | No | zstd, lzo, zlib | lz4, zstd, gzip |
| RAID support | External (mdraid) | External (mdraid) | Built-in | Built-in |
| Deduplication | No | No | Yes (offline) | Yes (inline) |
| Self-healing | No | No | Yes (with RAID) | Yes (with mirrors/raidz) |
| RAM usage | Low | Low | Moderate | High (ARC cache) |
| Maturity | Very high | Very high | High | Very high |
| Default in | Most distros | RHEL, Fedora Server | openSUSE, Fedora Workstation | TrueNAS, Proxmox option |
ext4: The Reliable Default
ext4 landed in the mainline kernel in 2008 and has been the default filesystem on most Linux distributions ever since. It's the Toyota Camry of filesystems — not exciting, not flashy, but it starts every morning and gets you where you need to go without drama.
Strengths
- Battle-tested reliability: ext4 has been the default filesystem for billions of Linux installations. Its failure modes are well-understood, and recovery tools (e2fsck, debugfs) are mature and effective.
- Low resource usage: ext4 needs minimal RAM and CPU. It runs happily on a Raspberry Pi or an ancient NAS box.
- Universal support: Every Linux tool, backup utility, and recovery environment supports ext4. If something goes wrong, you'll never lack for tools.
- Predictable performance: ext4 doesn't have the performance variability that CoW filesystems can exhibit under certain write patterns.
Weaknesses
- No snapshots: There's no way to take instant point-in-time copies of your data. Backups mean copying data the traditional way.
- No checksumming of data: ext4 checksums its metadata but not your actual data. Silent data corruption (bit rot) goes undetected.
- No built-in compression: Every byte you write takes a byte of disk space. No free storage savings.
- No built-in RAID: You need mdraid or hardware RAID underneath ext4 for redundancy.
Creating an ext4 Filesystem
# Basic ext4 filesystem
mkfs.ext4 /dev/sdb1
# Tuned for a filesystem that will hold mostly large files (fewer inodes, modern features enabled explicitly)
mkfs.ext4 -T largefile4 -O extent,huge_file,flex_bg,metadata_csum,64bit /dev/sdb1
# Disable reserved blocks (default 5% reserved for root — wasteful on data drives)
tune2fs -m 0 /dev/sdb1
# Or set reserved blocks during creation
mkfs.ext4 -m 0 /dev/sdb1
Recommended fstab Options
/dev/sdb1 /mnt/data ext4 defaults,noatime,errors=remount-ro 0 2
noatime: Don't update access times on every read (significant performance improvement)
errors=remount-ro: Mount read-only on errors instead of continuing (protects data)
When to Choose ext4
- Boot drives: System drive where stability matters most
- Simple data storage: A single drive for downloads, media, or general files
- Low-resource systems: Raspberry Pi, old NAS hardware, embedded systems
- VM guest filesystems: ext4 inside virtual disk images, where the host's storage layer handles snapshots and other advanced features
- When in doubt: ext4 is never a bad choice for any single-drive use case
XFS: The Performance Workhorse
XFS was developed by Silicon Graphics in 1993 and has been the default filesystem in RHEL since 2014 (Fedora Server uses it as well). It excels at handling large files and parallel I/O — exactly the workload pattern of media servers, databases, and VM storage.
Strengths
- Excellent large-file performance: XFS was designed for media production. Streaming large video files, database tablespaces, and VM images are where it shines.
- Parallel I/O scaling: XFS scales better than ext4 on multi-threaded workloads across multiple cores.
- Reflink support: While not a full CoW filesystem, XFS supports reflinks — instant file copies that share disk blocks, a useful complement to the hardlink workflows that tools like Sonarr/Radarr rely on (see the example after this list).
- Online growth: You can grow an XFS filesystem while it's mounted and in use.
- Mature and reliable: Decades of production use in enterprise environments.
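Reflink copies are made with plain cp. A quick illustration (the paths are placeholders):
# Reflink copy: instant, and it shares blocks with the original until either file is modified
cp --reflink=always /mnt/data/movie.mkv /mnt/data/movie-copy.mkv
# Use auto to fall back to a normal copy where reflinks aren't supported
cp --reflink=auto /mnt/data/movie.mkv /mnt/data/archive/movie.mkv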
Weaknesses
- Cannot shrink: XFS filesystems can grow but never shrink. If you need to resize a partition smaller, you must back up, reformat, and restore.
- No snapshots: Like ext4, XFS relies on external tooling such as LVM snapshots for point-in-time copies (a sketch follows this list).
- No data checksumming: No protection against silent data corruption.
- No compression: Raw storage only.
- Small file performance: Slightly slower than ext4 for workloads involving many small files (though the difference is minimal in practice).
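If you do need point-in-time copies on XFS, the usual approach is to run it on top of LVM. A minimal sketch, assuming the filesystem lives on a logical volume named data in volume group vg0 (both names are placeholders):
# Create a snapshot volume with 10G of space to hold changes
lvcreate --size 10G --snapshot --name data-snap /dev/vg0/data
# Mount it read-only; nouuid is needed because XFS refuses to mount a duplicate UUID
mount -o ro,nouuid /dev/vg0/data-snap /mnt/snapshot
# Clean up when finished
umount /mnt/snapshot
lvremove /dev/vg0/data-snap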
Creating an XFS Filesystem
# Basic XFS filesystem
mkfs.xfs /dev/sdb1
# Optimized for RAID (match stripe unit and width to your array)
mkfs.xfs -d su=64k,sw=4 /dev/md0
# For NVMe/SSD (force creation, explicitly enable reflink support)
mkfs.xfs -f -m reflink=1 /dev/nvme0n1p1
Recommended fstab Options
/dev/sdb1 /mnt/data xfs defaults,noatime,logbufs=8,logbsize=256k 0 2
logbufs=8,logbsize=256k: Increase journal buffer size for better write performance
When to Choose XFS
- Media storage: Large video files, music libraries, photo archives
- Database hosting: PostgreSQL, MariaDB tablespaces benefit from XFS's large-file I/O
- VM/container storage: Backing store for virtual disk images
- RHEL and Fedora Server systems: Native default, heavily tested
- High-throughput workloads: When you need consistent performance under load
Btrfs: The Feature-Rich Contender
Btrfs (B-tree filesystem, pronounced "butter-FS" or "better-FS") brings modern features like snapshots, checksumming, and compression to Linux without the complexity of ZFS. It has been the default root filesystem in openSUSE since 2014 and in Fedora Workstation since 2020.
Strengths
- Snapshots: Create instant, space-efficient point-in-time copies of your data. Roll back to any snapshot instantly.
- Data checksumming: Every data block is checksummed. Silent data corruption is detected (and corrected if you have redundancy).
- Transparent compression: zstd compression can save 30-50% disk space on compressible data with negligible performance impact (often faster than uncompressed due to reduced I/O).
- Built-in RAID: RAID 0, 1, 10, 5, 6 (RAID 5/6 are not recommended — see below).
- Subvolumes: Organize your filesystem into independent subvolumes, each with its own snapshot and mount options.
- Send/receive: Stream snapshots to another Btrfs filesystem for incremental replication.
- Reflinks: Instant file copies that share disk blocks.
Weaknesses
- RAID 5/6 is broken: Btrfs's RAID 5 and RAID 6 implementations have known write-hole issues. Do not use them for data you care about.
- Higher RAM usage: Btrfs uses more RAM than ext4/XFS, especially with many snapshots.
- Performance variability: Copy-on-write can cause fragmentation and performance drops on some workloads, particularly databases and VM images (a common mitigation follows this list).
- Less mature than ext4/XFS/ZFS: While Btrfs is stable for most use cases, it has had data-loss bugs in the past (particularly in RAID configurations).
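A common workaround for the CoW fragmentation issue is to disable copy-on-write on the directories that hold VM images or database files. This is a trade-off: nodatacow files are not checksummed or compressed. A sketch with a hypothetical directory name:
# The +C attribute only affects files created after it is set, so set it on a fresh directory
mkdir /mnt/data/vm-images
chattr +C /mnt/data/vm-images
# Verify the attribute is in place
lsattr -d /mnt/data/vm-images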
Creating a Btrfs Filesystem
# Single drive
mkfs.btrfs /dev/sdb1
# RAID 1 mirror (two drives)
mkfs.btrfs -m raid1 -d raid1 /dev/sdb /dev/sdc
# RAID 10 (four drives)
mkfs.btrfs -m raid10 -d raid10 /dev/sdb /dev/sdc /dev/sdd /dev/sde
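With a redundant profile like the RAID 1 example above, a periodic scrub is what actually exercises the self-healing: Btrfs re-reads every block, verifies checksums, and repairs bad copies from the good mirror.
# Start a scrub in the background, then check on it
btrfs scrub start /mnt/data
btrfs scrub status /mnt/data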
Subvolumes and Snapshots
# Mount the top-level subvolume
mount /dev/sdb1 /mnt/btrfs-root
# Create subvolumes
btrfs subvolume create /mnt/btrfs-root/@data
btrfs subvolume create /mnt/btrfs-root/@docker
btrfs subvolume create /mnt/btrfs-root/@snapshots
# Mount subvolumes individually
mount -o subvol=@data /dev/sdb1 /mnt/data
mount -o subvol=@docker /dev/sdb1 /var/lib/docker
# Create a snapshot
btrfs subvolume snapshot /mnt/data /mnt/btrfs-root/@snapshots/data-2026-02-09
# Create a read-only snapshot (required for send/receive)
btrfs subvolume snapshot -r /mnt/data /mnt/btrfs-root/@snapshots/data-2026-02-09-ro
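The read-only snapshot above is what send/receive works from. A minimal sketch, assuming a second Btrfs filesystem mounted at /mnt/backup and an earlier read-only snapshot from the previous day (both are placeholders):
# Full send of the read-only snapshot to another Btrfs filesystem
btrfs send /mnt/btrfs-root/@snapshots/data-2026-02-09-ro | btrfs receive /mnt/backup
# Incremental send: only the changes since the parent snapshot named with -p
btrfs send -p /mnt/btrfs-root/@snapshots/data-2026-02-08-ro /mnt/btrfs-root/@snapshots/data-2026-02-09-ro | btrfs receive /mnt/backup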
Enabling Compression
# /etc/fstab
/dev/sdb1 /mnt/data btrfs defaults,noatime,compress=zstd:3,subvol=@data 0 0
Compression levels for zstd range from 1 (fast, less compression) to 15 (slow, more compression). Level 3 is the sweet spot for most data.
To recompress existing data and check how well compression is working:
# Recompress existing files with zstd (newly written data is compressed automatically)
btrfs filesystem defragment -rv -czstd /mnt/data
# Show original size, compressed size, and ratio (compsize is a separate package)
compsize /mnt/data
Automated Snapshots with Snapper
# Install snapper
sudo apt install snapper # Debian/Ubuntu
sudo dnf install snapper # Fedora
# Create a snapper config for a subvolume
snapper -c data create-config /mnt/data
# Configure retention
snapper -c data set-config "TIMELINE_CREATE=yes"
snapper -c data set-config "TIMELINE_LIMIT_HOURLY=24"
snapper -c data set-config "TIMELINE_LIMIT_DAILY=7"
snapper -c data set-config "TIMELINE_LIMIT_WEEKLY=4"
snapper -c data set-config "TIMELINE_LIMIT_MONTHLY=12"
# List snapshots
snapper -c data list
# Revert changes made between two snapshots (snapper's rollback command
# is designed for the root filesystem config, so use undochange for data subvolumes)
snapper -c data undochange <older-number>..<newer-number>
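On systemd-based distros, timeline snapshots and cleanup usually won't run until the corresponding timers are enabled; the unit names below are the common ones but may vary by distribution:
sudo systemctl enable --now snapper-timeline.timer snapper-cleanup.timer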
When to Choose Btrfs
- NAS/file server: Where snapshots and checksumming protect your data
- Docker host: Btrfs storage driver works well, snapshots protect volumes
- Desktop/laptop: Snapshot before system updates, roll back if something breaks
- RAID 1/10 arrays: Built-in RAID with self-healing (avoid RAID 5/6)
- Backup targets: Receive snapshots from other Btrfs systems for incremental replication
ZFS: The Enterprise Powerhouse
ZFS is the most feature-complete filesystem available on Linux. Originally developed by Sun Microsystems for Solaris, it combines a filesystem and volume manager into a single, integrated system. It's the foundation of TrueNAS and a popular choice in Proxmox.
Strengths
- Bulletproof data integrity: End-to-end checksumming of all data and metadata. ZFS detects and (with redundancy) corrects silent data corruption automatically.
- Flexible RAID: RAIDZ1 (like RAID 5), RAIDZ2 (like RAID 6), RAIDZ3 (triple parity), and mirrors. All are well-tested and reliable.
- ARC cache: ZFS uses RAM as a read cache (ARC — Adaptive Replacement Cache), dramatically accelerating repeated reads.
- Snapshots and clones: Instant, space-efficient snapshots. Clone datasets from snapshots for testing.
- Send/receive: Efficient incremental replication between ZFS pools, even over SSH.
- Compression: Built-in lz4 or zstd compression.
- Deduplication: Inline deduplication (but requires enormous RAM — rarely practical in homelabs).
- Decades of reliability: ZFS has been protecting enterprise data since 2005.
Weaknesses
- RAM hungry: The common rule of thumb is 1GB of RAM per 1TB of storage, most of which goes to the ARC cache. A 32TB pool wants 32GB+ of RAM.
- Not in the Linux kernel: ZFS is distributed as a kernel module (OpenZFS) due to licensing. Kernel updates can temporarily break ZFS.
- Limited vdev flexibility: Historically you could not add drives to an existing RAIDZ vdev; you had to add entire new vdevs. OpenZFS 2.3 added RAIDZ expansion, but it grows a vdev one disk at a time and is slow (see the sketch after this list).
- ECC RAM recommended: While not strictly required, ZFS's data integrity promises are weakened without ECC RAM. Corrupted data in RAM can be written to disk with a valid checksum.
- Complexity: ZFS has many concepts (pools, vdevs, datasets, zvols) and tuning parameters.
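To grow a pool you normally add a whole new vdev; on OpenZFS 2.3 or newer you can instead attach a single disk to an existing RAIDZ vdev. A sketch (device names and the vdev label raidz1-0 are placeholders; check zpool status for yours):
# Add an entire new mirror vdev to an existing pool
zpool add tank mirror /dev/sdf /dev/sdg
# OpenZFS 2.3+ only: expand an existing RAIDZ vdev by one disk (runs in the background, slowly)
zpool attach tank raidz1-0 /dev/sdf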
ZFS Concepts
Pool (tank)              ← Top-level storage container
├── VDev (mirror-0)      ← Redundancy group (mirror, raidz1/2/3)
│   ├── /dev/sdb         ← Physical disk
│   └── /dev/sdc         ← Physical disk
├── VDev (mirror-1)      ← Another redundancy group
│   ├── /dev/sdd
│   └── /dev/sde
└── Datasets             ← Filesystems within the pool
    ├── tank/data
    ├── tank/docker
    ├── tank/backups
    └── tank/media
Creating a ZFS Pool
# Install ZFS
sudo apt install zfsutils-linux # Debian/Ubuntu
sudo dnf install zfs # Fedora (from ZFS repo)
# Mirror (2 drives) — like RAID 1
zpool create tank mirror /dev/sdb /dev/sdc
# RAIDZ1 (3+ drives) — like RAID 5, one drive can fail
zpool create tank raidz1 /dev/sdb /dev/sdc /dev/sdd
# RAIDZ2 (4+ drives) — like RAID 6, two drives can fail
zpool create tank raidz2 /dev/sdb /dev/sdc /dev/sdd /dev/sde
# Striped mirrors (4 drives) — like RAID 10, best performance
zpool create tank mirror /dev/sdb /dev/sdc mirror /dev/sdd /dev/sde
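One habit worth adding to the commands above: build pools from the stable names in /dev/disk/by-id rather than /dev/sdX, which can change between boots. The IDs below are placeholders:
# List stable device identifiers
ls -l /dev/disk/by-id/
# Create the pool using by-id paths
zpool create tank mirror /dev/disk/by-id/ata-DISK_SERIAL_1 /dev/disk/by-id/ata-DISK_SERIAL_2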
Creating Datasets
# Create datasets with specific properties
zfs create tank/data
zfs create tank/docker
zfs create tank/media
zfs create tank/backups
# Set properties
zfs set compression=zstd tank/data
zfs set compression=lz4 tank/docker # lz4 for speed on container layers
zfs set atime=off tank # Disable access time updates
zfs set recordsize=1M tank/media # Large records for media files
zfs set recordsize=16K tank/docker # Small records for database-like workloads
# Check compression ratio
zfs get compressratio tank/data
Snapshots and Replication
# Create a snapshot
zfs snapshot tank/data@2026-02-09
# List snapshots
zfs list -t snapshot
# Rollback to a snapshot
zfs rollback tank/data@2026-02-09
# Send a snapshot to another pool
zfs send tank/data@2026-02-09 | zfs receive backup/data
# Incremental send (only changes since last snapshot)
zfs send -i tank/data@2026-02-08 tank/data@2026-02-09 | zfs receive backup/data
# Remote replication
zfs send -i tank/data@2026-02-08 tank/data@2026-02-09 | ssh nas zfs receive backup/data
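Replication protects against losing a drive or a pool; regular scrubs are what catch silent corruption early. A monthly scrub is a common homelab cadence:
# Read and verify every block in the pool, repairing from redundancy where possible
zpool scrub tank
# Check progress and any errors found
zpool status tank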
ZFS RAM Tuning
# Check current ARC usage
arc_summary
# Limit ARC size (useful if ZFS is using too much RAM)
# /etc/modprobe.d/zfs.conf
# 8 GB maximum ARC size
options zfs zfs_arc_max=8589934592
# Apply without reboot
echo 8589934592 > /sys/module/zfs/parameters/zfs_arc_max
When to Choose ZFS
- NAS/file server: The gold standard for data integrity. TrueNAS is built on ZFS for a reason.
- Proxmox storage: ZFS as the backing store for VMs and containers with snapshots.
- Large storage pools: 4+ drives where RAIDZ provides space-efficient redundancy.
- Critical data: When data integrity is non-negotiable (photos, financial records, backups).
- When you have enough RAM: Plan for 1GB per TB of storage plus whatever the ARC needs.
Practical Decision Guide
By Use Case
| Use Case | Recommended | Runner-up | Avoid |
|---|---|---|---|
| Boot/system drive | ext4 | XFS | ZFS (kernel update risks) |
| Single data drive | ext4 or Btrfs | XFS | ZFS (overkill) |
| NAS (2-4 drives) | ZFS mirror | Btrfs RAID 1 | ext4 on mdraid |
| NAS (4+ drives) | ZFS RAIDZ2 | mdraid + XFS | Btrfs RAID 5/6 |
| Media storage | XFS | ext4 | - |
| Docker host | Btrfs or ext4 | XFS | ZFS (overhead) |
| Database server | XFS | ext4 | Btrfs (CoW fragmentation) |
| VM storage | ZFS (zvols) | XFS | Btrfs |
| Raspberry Pi | ext4 | - | ZFS (RAM), Btrfs |
By Hardware
| Hardware | Recommended | Reason |
|---|---|---|
| < 4GB RAM | ext4 or XFS | ZFS and Btrfs want more RAM |
| 4-8GB RAM | ext4, XFS, or Btrfs | ZFS possible with limited ARC |
| 8-16GB RAM | Any filesystem | ZFS comfortable with small pools |
| 16GB+ RAM | ZFS for storage pools | ARC can cache effectively |
| HDD storage | ZFS or Btrfs | Checksumming catches bit rot on spinning drives |
| NVMe/SSD only | XFS or ext4 | Less need for data integrity features on reliable media |
| Mixed HDD + SSD | ZFS with SLOG/L2ARC | SSD as write log and read cache |
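For the mixed HDD + SSD row, adding an SSD as a separate intent log (SLOG) and read cache (L2ARC) looks roughly like this. It assumes the SSD has already been partitioned, and the device names are placeholders:
# Add an SSD partition as a dedicated ZFS intent log (helps synchronous writes)
zpool add tank log /dev/nvme0n1p1
# Add another partition as L2ARC read cache
zpool add tank cache /dev/nvme0n1p2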
Performance Comparison
Real-world performance on typical homelab hardware (consumer SSDs, 4-8 drives):
| Operation | ext4 | XFS | Btrfs | ZFS |
|---|---|---|---|---|
| Sequential write | Excellent | Excellent | Good | Good |
| Sequential read | Excellent | Excellent | Good | Excellent (ARC) |
| Random write (small) | Good | Good | Fair (CoW) | Fair (CoW) |
| Random read (small) | Good | Good | Good | Excellent (ARC) |
| Many small files | Good | Fair | Good | Fair |
| Large files | Good | Excellent | Good | Good |
| Compression benefit | N/A | N/A | 30-50% savings | 30-50% savings |
| Metadata operations | Fast | Fast | Fast | Moderate |
The performance differences between ext4, XFS, and Btrfs are generally small enough that workload characteristics, drive hardware, and configuration matter more than filesystem choice. ZFS stands out with its ARC cache for repeated reads and with compression for reducing I/O.
Migration Tips
If you need to change filesystems:
- Back up everything to an independent device (not just another partition on the same disk)
- Verify your backup — restore a few files to confirm integrity
- Create the new filesystem on the target drive(s)
- Restore data from backup
- Update /etc/fstab with new UUID and filesystem type
- Test thoroughly before deleting the backup
For Docker volumes, also export container configurations and database dumps separately — don't rely solely on filesystem-level backup of volume directories.
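For the copy itself, rsync with archive flags preserves permissions, ownership, hard links, ACLs, and extended attributes. A sketch, assuming the old data is mounted at /mnt/old and the backup target at /mnt/backup (paths are placeholders):
# Back up (trailing slashes matter: copy the contents of old into backup/data)
rsync -aHAX --info=progress2 /mnt/old/ /mnt/backup/data/
# After reformatting, restore onto the new filesystem mounted at /mnt/new
rsync -aHAX --info=progress2 /mnt/backup/data/ /mnt/new/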
Final Thoughts
The filesystem landscape on Linux has never been better. ext4 remains a rock-solid default for single drives. XFS delivers consistent performance for media and database workloads. Btrfs brings modern features without the complexity of ZFS. And ZFS provides unmatched data integrity for serious storage pools.
For most homelabbers, the recommendation is simple: use ext4 for your boot drive, and either ZFS (if you have 8GB+ RAM and multiple drives) or Btrfs (if you want snapshots without ZFS's complexity) for your data storage. XFS is the right choice if raw performance on large files matters more than data management features.
Don't overthink it. Pick the filesystem that matches your hardware and use case, format your drives, and start using them. You can always migrate later — it's just a backup and restore away.