
Homelab Backup Strategies: Borg, Restic, Duplicati, and the 3-2-1 Rule

Backup · 2026-02-09 · 10 min read · backup · borg · restic · duplicati · zfs · snapshots · disaster-recovery · 3-2-1

The most expensive lesson in homelabbing is the one where you lose data. Not the cost of the hardware — the cost of realizing that every photo, every configuration file, every database you spent months setting up is gone because "I'll set up backups this weekend" turned into six months of procrastination.

Backups are the least exciting part of a homelab and the most important. This guide covers practical backup strategies that actually work — from the philosophy behind good backup design to specific tools and automation that you can deploy today.

The 3-2-1 Backup Rule

The 3-2-1 rule is the gold standard for data protection:

  - Keep 3 copies of your data (the original plus two backups)
  - Store them on 2 different storage systems or media
  - Keep 1 copy offsite

For a homelab, this translates to:

Copy     | Location                   | Example
---------|----------------------------|----------------------------------------------------
Original | Production server          | Data on your Proxmox host's SSDs
Backup 1 | Local NAS or backup server | Borg/Restic repo on a TrueNAS box
Backup 2 | Offsite/cloud              | Encrypted backup to Backblaze B2 or a friend's NAS

Why Offsite Matters

Local backups protect against drive failure. Offsite backups protect against everything else — fire, flood, theft, ransomware that encrypts everything on your network, or an overenthusiastic rm -rf /. If all your backups are on the same LAN, a single catastrophic event takes everything.

The 3-2-1-1-0 Extension

The modern extension adds two more requirements:

  - 1 copy that is immutable or air-gapped, so it cannot be altered after writing
  - 0 errors, confirmed by automated backup verification

Immutable backups are critical for ransomware protection. If an attacker compromises your server, they can delete or encrypt your local backups. An immutable offsite copy survives.
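
Borg (covered below) can approximate immutability on a budget with its append-only mode: the backup server's SSH configuration forces every connection from the client key into borg serve --append-only, so a compromised client can add archives but cannot delete or overwrite existing data. A minimal sketch, assuming a dedicated borg user on the backup host (the key and paths are placeholders):

# ~borg/.ssh/authorized_keys on the backup server
command="borg serve --append-only --restrict-to-path /home/borg/repo",restrict ssh-ed25519 AAAA... client@homelab

Pruning then has to run from a trusted machine, since append-only blocks deletions by design.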

What to Back Up

Not everything in your homelab needs the same backup treatment.

Critical (Back up aggressively)

  - Irreplaceable personal data: photos, documents, password databases
  - Databases and application state
  - Configuration you've invested time in: /etc, compose files, reverse proxy configs

Important (Back up regularly)

  - Docker volumes and home directories
  - VM/LXC definitions and documentation

Replaceable (Optional backup)

  - Media libraries you could re-rip or re-download
  - ISOs and container images

Explicitly Skip

  - Caches, temp files, and package manager downloads
  - Anything trivially regenerated: thumbnails, transcodes, build artifacts

Backup Tools Compared

Feature         | Borg                      | Restic                    | Duplicati                   | Kopia
----------------|---------------------------|---------------------------|-----------------------------|------------------------
Deduplication   | Yes (block-level)         | Yes (content-defined)     | Yes (block-level)           | Yes (content-defined)
Encryption      | Yes (AES-256)             | Yes (AES-256)             | Yes (AES-256)               | Yes (AES-256-GCM)
Compression     | LZ4, ZSTD, ZLIB           | ZSTD (since 0.14)         | Various                     | ZSTD, others
Cloud backends  | SSH only (borg on remote) | S3, B2, SFTP, rclone      | S3, B2, Azure, Google, SFTP | S3, B2, SFTP, and more
GUI             | Vorta (desktop)           | None official (Restatic is third-party) | Web UI        | Web UI
Incremental     | Yes                       | Yes                       | Yes                         | Yes
Mount backups   | Yes (FUSE)                | Yes (FUSE)                | No                          | Yes (FUSE)
Prune/retention | Flexible                  | Flexible                  | Flexible                    | Flexible
Speed           | Very fast                 | Fast                      | Moderate                    | Fast
Community       | Large                     | Very large                | Large                       | Growing

BorgBackup

Borg is the workhorse of homelab backups. It's fast, reliable, and its deduplication is excellent: a full daily backup might add only a few hundred megabytes to the repository even if the source is 100GB. Borg compresses by default (lz4) and encrypts whenever the repository is initialized with an encryption mode.

Installation

# Debian/Ubuntu
sudo apt install borgbackup

# Fedora
sudo dnf install borgbackup

# Arch
sudo pacman -S borg

Initial Repository Setup

# Local repository
borg init --encryption=repokey /mnt/backup/borg-repo

# Remote repository (via SSH)
borg init --encryption=repokey ssh://backup-server/home/borg/repo

# IMPORTANT: Export and save your key!
borg key export /mnt/backup/borg-repo ~/borg-key-backup.txt
# Store this key somewhere safe — without it, your backups are unrecoverable

Creating a Backup

#!/bin/bash
# backup.sh — Daily Borg backup script

export BORG_REPO="/mnt/backup/borg-repo"
export BORG_PASSPHRASE="your-secure-passphrase"  # keep this script chmod 700, or use BORG_PASSCOMMAND

# Create backup with date-based name
borg create                         \
    --verbose                       \
    --filter AME                    \
    --list                          \
    --stats                         \
    --show-rc                       \
    --compression auto,zstd         \
    --exclude-caches                \
    --exclude '/home/*/.cache/*'    \
    --exclude '/var/tmp/*'          \
    --exclude '*.pyc'               \
    ::'{hostname}-{now:%Y-%m-%d}'   \
    /home                           \
    /etc                            \
    /opt/docker                     \
    /var/lib/docker/volumes

# Prune old backups
borg prune                          \
    --list                          \
    --glob-archives '{hostname}-*'  \
    --show-rc                       \
    --keep-daily    7               \
    --keep-weekly   4               \
    --keep-monthly  6               \
    --keep-yearly   2

# Compact the repository
borg compact
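
Borg signals problems through its exit status: 0 means success, 1 means warnings (such as files that changed while being read), and 2 or higher means errors. The script above never checks these, so failures can pass silently. A sketch of propagating the worst status to systemd or cron (assumes BORG_REPO and BORG_PASSPHRASE are exported as above):

#!/bin/bash
# Propagate borg's worst exit code (0 = success, 1 = warning, >= 2 = error)
borg create --stats ::'{hostname}-{now:%Y-%m-%d}' /etc
backup_exit=$?

borg prune --glob-archives '{hostname}-*' --keep-daily 7
prune_exit=$?

global_exit=$(( backup_exit > prune_exit ? backup_exit : prune_exit ))
[ "$global_exit" -ge 2 ] && echo "Backup or prune FAILED" >&2
exit "$global_exit"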

Backing Up Docker Volumes

Docker volumes need special handling. You can't just copy files while a database is writing to them.

Method 1: Stop containers briefly

# Stop containers, backup, restart
docker compose -f /opt/docker/compose.yml stop
borg create --compression auto,zstd ::'docker-{now:%Y-%m-%d}' /opt/docker
docker compose -f /opt/docker/compose.yml start

Method 2: Database dumps first

# Dump databases before Borg runs
docker exec postgres pg_dumpall -U postgres > /opt/docker/backups/postgres.sql
docker exec mariadb mysqldump -u root --all-databases > /opt/docker/backups/mariadb.sql

# Then backup everything (containers stay running)
borg create --compression auto,zstd ::'docker-{now:%Y-%m-%d}' /opt/docker
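
Many self-hosted apps use SQLite instead of a database server. SQLite (3.27+) can produce a consistent copy of a live database with VACUUM INTO, so the dump-first pattern works there too; the paths below are examples:

# Consistent copy of a live SQLite database, without stopping the app
rm -f /opt/docker/backups/app-data.db   # VACUUM INTO refuses to overwrite
sqlite3 /opt/docker/app/data.db "VACUUM INTO '/opt/docker/backups/app-data.db'"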

Restoring from Borg

# List available backups
borg list /mnt/backup/borg-repo

# List files in a specific backup
borg list /mnt/backup/borg-repo::hostname-2026-02-09

# Restore everything (borg extracts into the current working directory)
mkdir -p /tmp/restore && cd /tmp/restore
borg extract /mnt/backup/borg-repo::hostname-2026-02-09

# Restore specific paths
borg extract /mnt/backup/borg-repo::hostname-2026-02-09 home/user/documents

# Mount a backup as a filesystem (browse and selectively restore)
mkdir /tmp/borg-mount
borg mount /mnt/backup/borg-repo::hostname-2026-02-09 /tmp/borg-mount
# Browse /tmp/borg-mount, copy what you need
borg umount /tmp/borg-mount

Restic

Restic is Borg's main competitor and has one significant advantage: native support for cloud storage backends. While Borg effectively requires SSH (with borg installed on the remote) or a locally mounted filesystem, Restic can write directly to S3, Backblaze B2, Azure Blob, Google Cloud Storage, and more.

Installation

# Debian/Ubuntu
sudo apt install restic

# Fedora
sudo dnf install restic

# Or install a specific release manually (the URL pins the version)
curl -L https://github.com/restic/restic/releases/download/v0.17.3/restic_0.17.3_linux_amd64.bz2 \
    | bunzip2 | sudo tee /usr/local/bin/restic > /dev/null
sudo chmod +x /usr/local/bin/restic
# Upgrade later with: sudo restic self-update

Repository Setup

# Local repository
restic init --repo /mnt/backup/restic-repo

# Backblaze B2
export B2_ACCOUNT_ID="your-account-id"
export B2_ACCOUNT_KEY="your-account-key"
restic init --repo b2:bucket-name:restic

# S3-compatible (MinIO, Wasabi, etc.)
export AWS_ACCESS_KEY_ID="your-key"
export AWS_SECRET_ACCESS_KEY="your-secret"
restic init --repo s3:https://s3.amazonaws.com/bucket-name/restic

# SFTP
restic init --repo sftp:backup-server:/home/borg/restic-repo
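
Like Borg, Restic needs its password on every invocation. Rather than hardcoding it in scripts, you can keep it in a root-only file and reference it with --password-file or the RESTIC_PASSWORD_FILE environment variable; the path here is just a convention:

# Store the repo password in a root-only file (path is an example)
sudo install -m 600 /dev/null /root/.restic-password
echo 'your-secure-passphrase' | sudo tee /root/.restic-password > /dev/null

# Reference it instead of RESTIC_PASSWORD
restic -r /mnt/backup/restic-repo --password-file /root/.restic-password snapshots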

Backup Script

#!/bin/bash
# restic-backup.sh

export RESTIC_REPOSITORY="b2:my-homelab-backup:restic"
export RESTIC_PASSWORD="your-secure-passphrase"
export B2_ACCOUNT_ID="your-account-id"
export B2_ACCOUNT_KEY="your-account-key"

# Create backup
restic backup                       \
    --verbose                       \
    --exclude-caches                \
    --exclude='/home/*/.cache'      \
    --exclude='*.tmp'               \
    --tag homelab                   \
    /home                           \
    /etc                            \
    /opt/docker

# Apply retention policy
restic forget                       \
    --prune                         \
    --keep-daily 7                  \
    --keep-weekly 4                 \
    --keep-monthly 6                \
    --keep-yearly 2

# Verify backup integrity (run periodically, not every time)
# restic check --read-data-subset=5%

Restoring from Restic

# List snapshots
restic snapshots

# Restore latest snapshot
restic restore latest --target /tmp/restore

# Restore specific files
restic restore latest --target /tmp/restore --include /etc/nginx

# Mount and browse (like Borg)
mkdir /tmp/restic-mount
restic mount /tmp/restic-mount
# Browse from another shell; when done: umount /tmp/restic-mount

Duplicati

Duplicati targets users who want a web-based GUI for backup management. It's written in C# and runs on .NET, which makes some Linux users uneasy, but it's reliable and very approachable.

Deployment

services:
  duplicati:
    image: lscr.io/linuxserver/duplicati:latest
    container_name: duplicati
    restart: unless-stopped
    ports:
      - "8200:8200"
    volumes:
      - ./duplicati-config:/config
      - /opt/docker:/source/docker:ro
      - /home:/source/home:ro
      - /mnt/backup:/backup
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=America/New_York

Configure backups through the web UI at http://your-server:8200. Duplicati supports a wide range of backends including S3, B2, Google Drive, OneDrive, SFTP, and local storage.

When to Choose Duplicati

  - You want a point-and-click web UI instead of shell scripts
  - You need a backend Borg and Restic don't cover well, such as Google Drive or OneDrive
  - You're backing up a desktop or a single small server

When to Avoid Duplicati

  - Multi-terabyte datasets, where restores and local database rebuilds become painfully slow
  - CLI-first automation that you want to script and monitor
  - Setups you can't babysit: its local state database occasionally needs repair

Snapshot-Based Backups

If you run ZFS or Btrfs, filesystem snapshots give you a different kind of backup — instant, space-efficient point-in-time copies of your data.

ZFS Snapshots

# Create a snapshot
zfs snapshot tank/data@2026-02-09

# List snapshots
zfs list -t snapshot

# Rollback to a snapshot (destructive — replaces current data)
zfs rollback tank/data@2026-02-09

# Clone a snapshot to a new dataset (non-destructive)
zfs clone tank/data@2026-02-09 tank/data-restored

# Send a snapshot to another pool (local replication)
zfs send tank/data@2026-02-09 | zfs receive backup/data

# Incremental send (much faster for subsequent backups)
zfs send -i tank/data@2026-02-08 tank/data@2026-02-09 | zfs receive backup/data

# Send to a remote server
zfs send -i tank/data@2026-02-08 tank/data@2026-02-09 | ssh backup-server zfs receive backup/data
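
If the dataset uses native ZFS encryption, adding -w sends the blocks raw (still encrypted), so the receiving machine stores your data without ever holding the key. That is useful when the target is hardware you don't fully control:

# Raw incremental send of an encrypted dataset: ciphertext travels as-is
zfs send -w -i tank/data@2026-02-08 tank/data@2026-02-09 | ssh backup-server zfs receive backup/data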

Automated ZFS Snapshots with Sanoid

Sanoid automates ZFS snapshot creation and pruning:

# Install Sanoid
sudo apt install sanoid    # Debian/Ubuntu

Configure /etc/sanoid/sanoid.conf:

[tank/data]
    use_template = production
    recursive = yes

[tank/docker]
    use_template = production

[template_production]
    frequently = 0
    hourly = 24
    daily = 30
    monthly = 12
    yearly = 2
    autosnap = yes
    autoprune = yes
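
Sanoid only acts when it runs, so it needs to be invoked frequently. Distro packages usually ship a systemd timer; if yours doesn't, a cron entry works (the binary path may differ on your system):

# /etc/cron.d/sanoid: take and prune snapshots per sanoid.conf every 15 minutes
*/15 * * * * root /usr/sbin/sanoid --cron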

Syncoid (included with Sanoid) handles replication:

# Replicate to local backup pool
syncoid tank/data backup/data

# Replicate to remote server
syncoid tank/data backup-server:backup/data

# Add to cron for automated replication
# 0 */6 * * * syncoid --no-sync-snap tank/data backup-server:backup/data

Btrfs Snapshots

# Create a snapshot
btrfs subvolume snapshot /mnt/data /mnt/data/.snapshots/2026-02-09

# Or create it read-only (required for send/receive)
btrfs subvolume snapshot -r /mnt/data /mnt/data/.snapshots/2026-02-09

# Send to another drive
btrfs send /mnt/data/.snapshots/2026-02-09 | btrfs receive /mnt/backup/data/

# Incremental send
btrfs send -p /mnt/data/.snapshots/2026-02-08 /mnt/data/.snapshots/2026-02-09 | btrfs receive /mnt/backup/data/

# Delete old snapshots
btrfs subvolume delete /mnt/data/.snapshots/2026-01-01
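
Nothing prunes Btrfs snapshots for you here, so date-named snapshots accumulate until deleted. A minimal rotation sketch, assuming the naming scheme above:

#!/bin/bash
# Keep the 14 newest date-named snapshots, delete everything older
ls -1d /mnt/data/.snapshots/20* | sort | head -n -14 | while read -r snap; do
    btrfs subvolume delete "$snap"
done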

Snapshots Are Not Backups (Alone)

A critical point: snapshots on the same pool/filesystem as your data are not backups. If the drive fails, both the data and its snapshots are lost. Snapshots are a fast recovery mechanism for accidental deletion or corruption. They must be combined with replication to another device for real backup protection.
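
The two approaches combine well: snapshot first, then point Borg or Restic at the snapshot's frozen view, so the file-level backup is internally consistent even while services keep writing. A sketch for ZFS (the dataset name and mountpoint are assumptions; snapshots appear under the hidden .zfs directory):

# Back up from a snapshot so Borg sees a frozen, consistent tree
zfs snapshot tank/data@borg-daily
borg create --compression auto,zstd ::'data-{now:%Y-%m-%d}' /tank/data/.zfs/snapshot/borg-daily
zfs destroy tank/data@borg-daily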

Automating Backups

Systemd Timer (Recommended over Cron)

# /etc/systemd/system/backup.service
[Unit]
Description=Borg Backup
After=network-online.target
Wants=network-online.target

[Service]
Type=oneshot
ExecStart=/opt/scripts/backup.sh
User=root
Nice=19
IOSchedulingClass=idle
Environment="BORG_REPO=/mnt/backup/borg-repo"
Environment="BORG_PASSPHRASE=your-passphrase"

# Prevent resource starvation
CPUQuota=50%
MemoryMax=2G

# /etc/systemd/system/backup.timer
[Unit]
Description=Daily Borg Backup

[Timer]
OnCalendar=*-*-* 03:00:00
RandomizedDelaySec=1800
Persistent=true

[Install]
WantedBy=timers.target

Enable and start the timer:

sudo systemctl enable --now backup.timer
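
One caveat: unit files are typically world-readable, so the passphrase above is visible to any local user. A common alternative is a root-only environment file (the path is a convention, not a requirement):

# /etc/borg/backup.env (chown root:root, chmod 600)
BORG_REPO=/mnt/backup/borg-repo
BORG_PASSPHRASE=your-passphrase

Then replace the two Environment= lines in backup.service with EnvironmentFile=/etc/borg/backup.env.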

Monitoring Backup Health

Create a simple monitoring script that checks backup freshness:

#!/bin/bash
# check-backups.sh — Alert if backups are stale

BORG_REPO="/mnt/backup/borg-repo"
MAX_AGE_HOURS=26  # Alert if no backup in 26 hours (2-hour grace for a daily job)

# BORG_PASSPHRASE (or BORG_PASSCOMMAND) must be set for borg to open the repo
last_backup=$(borg list --last 1 --format '{time}' "$BORG_REPO" 2>/dev/null)

if [ -z "$last_backup" ]; then
    echo "CRITICAL: No backups found in $BORG_REPO"
    exit 2
fi

last_epoch=$(date -d "$last_backup" +%s)
now_epoch=$(date +%s)
age_hours=$(( (now_epoch - last_epoch) / 3600 ))

if [ "$age_hours" -gt "$MAX_AGE_HOURS" ]; then
    echo "WARNING: Last backup is ${age_hours} hours old (threshold: ${MAX_AGE_HOURS}h)"
    exit 1
fi

echo "OK: Last backup is ${age_hours} hours old"
exit 0

Integrate this with your monitoring system (Prometheus, Uptime Kuma, etc.).
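
If you use a push-based monitor, the script's exit code maps onto it directly. A sketch for an Uptime Kuma push monitor (the host and token are placeholders):

# Report the check result to an Uptime Kuma push monitor
if /opt/scripts/check-backups.sh; then
    curl -fsS -m 10 "http://kuma.example.lan:3001/api/push/TOKEN?status=up&msg=OK"
else
    curl -fsS -m 10 "http://kuma.example.lan:3001/api/push/TOKEN?status=down&msg=backup-stale"
fi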

Offsite Backup Destinations

Backblaze B2 (Recommended)

Backblaze B2 is one of the most cost-effective cloud options for homelab backups, at roughly $6/TB/month with generous free egress:

# Restic to B2
export B2_ACCOUNT_ID="your-id"
export B2_ACCOUNT_KEY="your-key"
restic -r b2:my-backup-bucket:homelab backup /opt/docker

# Rclone to B2 (works with any tool)
rclone sync /mnt/backup/borg-repo b2:my-backup-bucket/borg --fast-list

A Friend's NAS (Free)

The classic homelab offsite backup: swap backup capacity with a friend who also has a NAS. Each of you provides storage to the other.

# Borg over SSH to a friend's server
borg init --encryption=repokey ssh://friend@friends-ip:22/~/borg-backup

# Backups are encrypted — your friend can't read your data
borg create ssh://friend@friends-ip:22/~/borg-backup::'backup-{now:%Y-%m-%d}' /home

Hetzner Storage Box

Hetzner offers storage boxes starting at about 1TB for $4/month with SSH/SFTP/rsync access — purpose-built for backups.
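
Storage Boxes speak Borg natively over SSH on the non-standard port 23; the username below is a placeholder for your box's account:

# uXXXXXX is your Storage Box username (placeholder)
borg init --encryption=repokey ssh://uXXXXXX@uXXXXXX.your-storagebox.de:23/./borg-repo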

Disaster Recovery Testing

The most important backup practice is one most people skip: actually testing your restores.

Monthly Restore Test

Pick one backup each month and restore it to a temporary location:

# Create a test restoration (borg extracts into the current working directory)
mkdir -p /tmp/restore-test
cd /tmp/restore-test

# Restore from Borg
borg extract /mnt/backup/borg-repo::hostname-2026-02-09

# Verify critical files exist and are intact
ls -la /tmp/restore-test/opt/docker/
ls -la /tmp/restore-test/etc/

# For databases, test that the dump loads into a throwaway instance.
# pg_dumpall writes plain SQL, which restores with psql (pg_restore only
# handles custom/tar/directory format dumps)
docker run -d --name pg-test -e POSTGRES_PASSWORD=test \
  -v /tmp/restore-test/opt/docker/backups:/backups:ro postgres:16
sleep 10 && docker exec pg-test psql -U postgres -f /backups/postgres.sql
docker rm -f pg-test

# Clean up
rm -rf /tmp/restore-test

Document Your Restore Process

Keep a restore runbook that covers:

  1. How to access your backup repository (location, credentials, encryption key)
  2. How to restore each critical service
  3. The order of restoration (databases first, then applications)
  4. How to verify each service is working after restoration
  5. Where your encryption keys and passphrases are stored (offline!)

Store this document in at least two locations outside your backup system — printed, on a USB drive, or in a password manager that doesn't depend on your homelab.

Recommended Strategy by Homelab Size

Starter Homelab (1-2 servers, < 1TB data)

  - Restic straight to Backblaze B2 on a daily systemd timer
  - Database dumps before each run
  - A monthly restore test

Medium Homelab (3-5 servers, 1-5TB data)

  - Borg to a local NAS for fast restores
  - A second encrypted copy offsite (Restic to B2, or rclone the Borg repo)
  - ZFS/Btrfs snapshots for quick rollback of accidental deletions

Large Homelab (5+ servers, 5TB+ data)

  - ZFS snapshots everywhere, replicated with Sanoid/Syncoid
  - Borg or Restic for file-level backups of critical data
  - An immutable or append-only offsite copy, plus a documented, tested restore runbook

Final Thoughts

The best backup system is one that runs automatically, is tested regularly, and includes an offsite copy. Start with the simplest setup that covers the 3-2-1 rule — even just Restic to Backblaze B2 on a cron job is infinitely better than no backups.

Don't let perfect be the enemy of good. A basic daily backup to a single offsite location protects against the vast majority of data loss scenarios. You can add ZFS snapshots, local NAS replication, and immutable storage later. But start backing up today. Tomorrow might be the day your RAID controller decides to corrupt everything, and "I was going to set up backups this weekend" is a cold comfort when you're staring at data loss.

Your future self will thank you.