
Homelab Hardware Lifecycle: Upgrading, Retiring, and Recycling

Hardware · 2026-02-09 · 16 min read · hardware · upgrades · recycling · inventory · lifecycle

Homelab hardware doesn't last forever. Drives fail, components become obsolete, and power consumption adds up. Managing your hardware through its full lifecycle — from acquisition through testing, production use, retirement, and disposal — saves you money, reduces e-waste, and prevents the dreaded "I forgot what's in that server" problem.

[Figure: Hardware lifecycle]

This guide covers the entire lifecycle: tracking what you have, deciding when to upgrade, stress-testing new hardware, migrating data safely, wiping retired drives, and responsibly disposing of e-waste. We'll also build a practical inventory system so you actually know what's running in your rack.

Building a Hardware Inventory System

You can't manage what you don't track. Before we talk about lifecycles, let's set up an inventory system.

Option 1: Spreadsheet (Quick Start)

A spreadsheet works great for small homelabs (1-5 machines). Here's a template:

| Field | Description | Example |
| --- | --- | --- |
| Device Name | Hostname or label | nas-01 |
| Role | What it does | NAS / File Server |
| Location | Physical location | Rack shelf 2 |
| Make/Model | Hardware model | Dell PowerEdge R720 |
| Serial Number | Manufacturer S/N | ABC123XYZ |
| CPU | Processor model | 2x Xeon E5-2670 v2 |
| RAM | Total installed | 128 GB DDR3 ECC |
| Storage | Drives installed | 8x 4TB WD Red (ZFS RAIDZ2) |
| NIC | Network interfaces | 2x 1GbE + 1x 10GbE |
| OS | Operating system | Proxmox 8.2 |
| Purchase Date | When acquired | 2024-03-15 |
| Purchase Price | What you paid | $350 |
| Power Draw | Measured watts (idle) | 145W |
| Annual Power Cost | Calculated | $127/yr (@$0.10/kWh) |
| Warranty Expires | If applicable | N/A (used) |
| Notes | Anything relevant | Fan 3 replaced 2025-01 |
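
If you'd rather keep the inventory in version control than in a spreadsheet app, the same template works as a CSV file. A minimal starting point (the filename and column order are just suggestions):

# inventory.csv: header row, then one row per device
device_name,role,location,make_model,serial,cpu,ram,storage,nic,os,purchase_date,purchase_price,power_draw_w,annual_power_cost,warranty,notes
nas-01,NAS / File Server,Rack shelf 2,Dell PowerEdge R720,ABC123XYZ,2x Xeon E5-2670 v2,128 GB DDR3 ECC,8x 4TB WD Red (ZFS RAIDZ2),2x 1GbE + 1x 10GbE,Proxmox 8.2,2024-03-15,$350,145,$127/yr,N/A (used),Fan 3 replaced 2025-01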

Calculate annual power cost:

Annual Cost = Watts × 24 hours × 365 days ÷ 1000 × $/kWh
Example: 145W × 24 × 365 ÷ 1000 × $0.10 = $127.02/year

Option 2: Snipe-IT (Professional Inventory)

For larger homelabs or if you want a proper web-based inventory with asset tags, depreciation tracking, and check-in/check-out, Snipe-IT is excellent:

# docker-compose.yml for Snipe-IT
services:
  snipeit:
    image: snipe/snipe-it:latest
    container_name: snipeit
    restart: unless-stopped
    depends_on:
      - db
    environment:
      APP_URL: "https://inventory.homelab.local"
      APP_KEY: "${APP_KEY}"
      DB_CONNECTION: mysql
      DB_HOST: db
      DB_DATABASE: snipeit
      DB_USERNAME: snipeit
      DB_PASSWORD: "${DB_PASSWORD}"
      MAIL_DRIVER: smtp
      MAIL_HOST: smtp.fastmail.com
      MAIL_PORT: 587
      MAIL_USERNAME: "${SMTP_USER}"
      MAIL_PASSWORD: "${SMTP_PASS}"
    volumes:
      - snipeit-data:/var/lib/snipeit
    ports:
      - "8080:80"

  db:
    image: mariadb:11
    container_name: snipeit-db
    restart: unless-stopped
    environment:
      MYSQL_ROOT_PASSWORD: "${DB_ROOT_PASSWORD}"
      MYSQL_DATABASE: snipeit
      MYSQL_USER: snipeit
      MYSQL_PASSWORD: "${DB_PASSWORD}"
    volumes:
      - db-data:/var/lib/mysql

volumes:
  snipeit-data:
  db-data:

# Generate an app key, then put the output in your .env file as APP_KEY
docker compose run --rm snipeit php artisan key:generate --show

# Bring up the stack
docker compose up -d

Snipe-IT features that matter for homelabs: asset tags you can physically label machines with, depreciation tracking so you know what your gear is still worth, check-in/check-out for parts you lend out, and maintenance records for repairs like that replaced fan in the notes column above.
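
Snipe-IT also exposes a REST API, which makes scripted inventory updates possible. A hedged sketch of creating an asset with curl; the token comes from your own instance (Account Settings > API), and the model_id and status_id values are placeholders you'd look up there too:

# Create an asset via the Snipe-IT API (IDs below are placeholders)
curl -s -X POST "https://inventory.homelab.local/api/v1/hardware" \
  -H "Authorization: Bearer $SNIPEIT_TOKEN" \
  -H "Content-Type: application/json" \
  -H "Accept: application/json" \
  -d '{"name": "nas-01", "asset_tag": "HL-0001", "model_id": 1, "status_id": 1}'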

Option 3: Automated Discovery with Netbox

For very large or constantly changing setups, NetBox models your whole environment: racks, devices, cabling, and IP space. Note that NetBox is a source-of-truth database rather than a scanner, so populating it automatically requires separate discovery scripts or plugins:

# Quick Netbox deploy (for the inventory-obsessed)
git clone https://github.com/netbox-community/netbox-docker.git
cd netbox-docker
# Edit docker-compose.override.yml with your settings
docker compose up -d

For most homelabs, the spreadsheet is honestly fine. Don't over-engineer your inventory system.

When to Upgrade vs. Replace

The upgrade-or-replace decision comes down to three factors: cost efficiency, power consumption, and capability gaps.

Cost-Per-Year Thinking

Instead of looking at the sticker price, think about cost per year of useful life:

Cost Per Year = (Purchase Price + Upgrade Costs) ÷ Years of Service

| Scenario | Purchase | Upgrades | Years | Cost/Year |
| --- | --- | --- | --- | --- |
| Used Dell R720, run as-is | $300 | $0 | 3 | $100/yr |
| Used Dell R720, max RAM | $300 | $120 | 4 | $105/yr |
| New Mini PC (N100) | $200 | $0 | 5 | $40/yr |
| Used Dell R730 | $500 | $0 | 4 | $125/yr |

But that's not the full picture. Add power costs:

| Server | Power (idle) | Annual Power (@$0.12/kWh) | Total Annual Cost |
| --- | --- | --- | --- |
| Dell R720 (2x E5-2670) | 180W | $189 | $289/yr |
| Dell R720 (maxed RAM) | 195W | $205 | $310/yr |
| Mini PC (N100) | 15W | $16 | $56/yr |
| Dell R730 (2x E5-2680 v3) | 150W | $158 | $283/yr |

That N100 mini PC is looking very attractive now. The power savings alone pay for the hardware in under 2 years.
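
The two formulas above combine into a quick comparison script, so you can evaluate a candidate machine without opening a spreadsheet. A small sketch:

#!/bin/bash
# total-cost.sh: total annual cost = amortized hardware + electricity
# Usage: ./total-cost.sh <purchase+upgrades $> <years of service> <idle watts> <$/kWh>
PRICE=$1; YEARS=$2; WATTS=$3; RATE=$4
HW=$(echo "scale=2; $PRICE / $YEARS" | bc)
POWER=$(echo "scale=2; $WATTS * 24 * 365 / 1000 * $RATE" | bc)
TOTAL=$(echo "scale=2; $HW + $POWER" | bc)
echo "Hardware: \$$HW/yr  Power: \$$POWER/yr  Total: \$$TOTAL/yr"

Running ./total-cost.sh 300 3 180 0.12 reproduces the R720's $289/yr figure from the table.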

The Upgrade Decision Framework

Should I upgrade or replace?

1. Is the bottleneck upgradeable?
   ├── RAM: Usually yes, and cheap → UPGRADE
   ├── Storage (add drives): Usually yes → UPGRADE
   ├── CPU: Sometimes (socket compatibility) → CHECK
   ├── Network (add NIC): Usually yes → UPGRADE
   └── CPU + Platform (DDR3→DDR5): No → REPLACE

2. Will the upgrade extend useful life by 2+ years?
   ├── Yes → UPGRADE (if cost is < 50% of replacement)
   └── No → REPLACE

3. Is power consumption a concern?
   ├── Current draw > 150W idle → Consider REPLACE
   └── Current draw < 80W idle → UPGRADE is fine

4. Is the platform end-of-life?
   ├── No security updates → REPLACE
   ├── DDR3 platform → Start planning REPLACE
   └── DDR4/DDR5 platform → UPGRADE

Common Upgrade Paths

RAM Upgrade (Almost Always Worth It)

# Check current RAM configuration
sudo dmidecode -t memory | grep -E "Size:|Type:|Speed:|Locator:"

# Example output:
#   Size: 8192 MB
#   Type: DDR4
#   Speed: 2666 MT/s
#   Locator: DIMM_A1

# Check maximum supported RAM
sudo dmidecode -t memory | grep "Maximum Capacity"
# Maximum Capacity: 128 GB

# Check available slots
sudo dmidecode -t memory | grep -c "Size: No Module Installed"
# 2 (two empty slots)

| RAM Type | Typical Homelab Price | Notes |
| --- | --- | --- |
| DDR3 ECC 16 GB | $8-15 | Cheap, but platform is aging |
| DDR4 ECC 32 GB | $40-60 | Sweet spot for homelab |
| DDR4 non-ECC 32 GB | $30-45 | Fine for non-critical workloads |
| DDR5 non-ECC 32 GB | $50-70 | Current gen, future-proof |
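
After installing modules, confirm the full capacity is visible and, for ECC RAM, that error reporting is wired up. A quick sketch (edac-utils availability and output vary by distro and platform):

# Confirm the OS sees the new total
free -h

# Count populated DIMM slots (compare against what you installed)
sudo dmidecode -t memory | grep "Size:" | grep -vc "No Module Installed"

# For ECC RAM: check the kernel's EDAC subsystem is reporting (assumes edac-utils)
sudo apt install edac-utils
edac-util -v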

Storage Upgrade (Drives or Controller)

# Check current storage
lsblk -o NAME,SIZE,MODEL,SERIAL,ROTA

# Check drive health
sudo smartctl -a /dev/sda | grep -E "Reallocated|Power_On|Temperature"

# Check available SATA/SAS ports
ls /sys/class/scsi_host/ | wc -l

# If you need more ports, add an HBA card:
# Dell H310 (flashed to IT mode): ~$20 on eBay
# LSI 9207-8i: ~$30 on eBay
# Both give you 8 additional SAS/SATA ports
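
After installing an HBA, it's worth confirming the card and driver came up before connecting drives. A sketch (mpt2sas/mpt3sas is the driver family these LSI-based cards typically use; the exact name depends on your kernel):

# Card visible on the PCIe bus?
lspci | grep -i "sas\|lsi"

# Kernel driver bound?
sudo dmesg | grep -i "mpt[23]sas"

# New ports appear as extra scsi_host entries
ls /sys/class/scsi_host/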

Network Upgrade (10 GbE)

# Check current NICs
ip link show
lspci | grep -i ethernet

# Affordable 10 GbE options:
# Mellanox ConnectX-3 (SFP+): ~$15 on eBay
# Intel X520-DA2 (SFP+): ~$20 on eBay
# Intel X550-T2 (RJ45): ~$50 on eBay

# After installing a new NIC:
sudo dmesg | grep -i "ethernet\|mlx\|ixgbe"
ip link show
# Configure with netplan or NetworkManager
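
On Ubuntu-style systems that use netplan, a minimal static config for the new interface might look like the following. The interface name (enp3s0f0) and address are placeholders; check ip link for yours:

# Write a netplan config for the new 10 GbE interface (names/IPs are placeholders)
cat <<'EOF' | sudo tee /etc/netplan/60-10gbe.yaml
network:
  version: 2
  ethernets:
    enp3s0f0:
      addresses: [10.0.0.5/24]
      mtu: 9000
EOF
sudo netplan apply

Jumbo frames (mtu: 9000) help on storage networks, but only if every device on the path supports them.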

Stress Testing New Hardware

Never put new (or new-to-you) hardware directly into production. Stress test it first to catch defects during the return window.

Memory Testing (memtest86+)

# Install memtest86+
sudo apt install memtest86+
sudo update-grub

# Reboot and select memtest86+ from the GRUB menu
# Let it run for at least 4 full passes (8-12 hours)
# ANY errors = defective RAM, return/replace immediately

For testing without rebooting (less thorough but useful):

# Install stressapptest (Google's memory stress tester)
sudo apt install stressapptest

# Test with 90% of available RAM for 1 hour
FREE_MB=$(free -m | awk '/^Mem:/{print int($7 * 0.9)}')
stressapptest -M $FREE_MB -s 3600 -W

# Exit status 0 = PASS, non-zero = FAIL

CPU Stress Testing

# Install stress-ng
sudo apt install stress-ng

# CPU stress test (all cores, 30 minutes)
stress-ng --cpu $(nproc) --cpu-method all --timeout 30m --metrics-brief

# Monitor temperatures during test
watch -n 1 sensors

# Or with mprime (Prime95 for Linux) — the gold standard for CPU testing
# Download from https://www.mersenne.org/download/
tar xzf p95v3019b13.linux64.tar.gz
cd p95v3019b13.linux64
./mprime -t
# Let it run for 12-24 hours
# ANY errors = potentially defective CPU, RAM, or motherboard

Storage Testing

# Install fio (Flexible I/O Tester)
sudo apt install fio

# Sequential write test (checks for bad sectors and measures speed)
fio --name=seq-write --ioengine=libaio --direct=1 \
  --rw=write --bs=1M --size=100G --numjobs=1 \
  --runtime=300 --time_based --filename=/dev/sdX
# WARNING: This DESTROYS data on /dev/sdX — only use on new/empty drives

# Random write test (stress test)
fio --name=rand-write --ioengine=libaio --direct=1 \
  --rw=randwrite --bs=4k --size=10G --numjobs=4 \
  --runtime=300 --time_based --iodepth=32 \
  --filename=/dev/sdX

# SMART extended self-test
sudo smartctl -t long /dev/sdX
# Check results after it completes (can take hours for large drives)
sudo smartctl -a /dev/sdX | grep -A 5 "Self-test"

Full System Burn-In Script

#!/bin/bash
# burn-in.sh — 24-hour burn-in test for new hardware
# Run this on new servers before putting them into production

set -e

echo "=== Homelab Hardware Burn-In Test ==="
echo "Start time: $(date)"
echo "Duration: 24 hours"
echo ""

# Check temperatures are readable
if ! command -v sensors &>/dev/null; then
    echo "Installing lm-sensors..."
    sudo apt install -y lm-sensors
    sudo sensors-detect --auto
fi

echo "Initial temperatures:"
sensors

# Start background temperature logging
(while true; do
    echo "$(date +%H:%M:%S) $(sensors | grep -oP 'Core \d+:\s+\+\K[0-9.]+' | tr '\n' ' ')"
    sleep 60
done) > /tmp/temp-log.txt &
TEMP_PID=$!

# Phase 1: Memory test (6 hours)
echo ""
echo "=== Phase 1: Memory Test (6 hours) ==="
FREE_MB=$(free -m | awk '/^Mem:/{print int($7 * 0.9)}')
stressapptest -M $FREE_MB -s 21600 -W
echo "Memory test: PASSED"

# Phase 2: CPU test (6 hours)
echo ""
echo "=== Phase 2: CPU Test (6 hours) ==="
stress-ng --cpu $(nproc) --cpu-method all \
  --timeout 21600 --metrics-brief
echo "CPU test: PASSED"

# Phase 3: Storage test (6 hours)
echo ""
echo "=== Phase 3: Storage I/O Test (6 hours) ==="
# Create a test file (don't test raw devices in a mixed workload)
fio --name=burn-in --ioengine=libaio --direct=1 \
  --rw=randrw --rwmixread=70 --bs=4k \
  --size=50G --numjobs=$(nproc) \
  --runtime=21600 --time_based --iodepth=32 \
  --filename=/tmp/fio-burnin-test \
  --group_reporting
rm -f /tmp/fio-burnin-test
echo "Storage test: PASSED"

# Phase 4: Combined stress (6 hours)
echo ""
echo "=== Phase 4: Combined Stress (6 hours) ==="
stress-ng --cpu $(nproc) --vm 2 --vm-bytes 2G \
  --io 4 --timeout 21600 --metrics-brief
echo "Combined test: PASSED"

# Cleanup
kill $TEMP_PID 2>/dev/null || true

echo ""
echo "=== Burn-In Complete ==="
echo "End time: $(date)"
echo "All tests PASSED"
echo ""
echo "Temperature log saved to /tmp/temp-log.txt"
echo "Review for thermal throttling or temperature spikes."

Data Migration Strategies

When moving data from old hardware to new, the strategy depends on your storage setup.

ZFS Pool Migration

If both old and new hardware support the same disk connections:

# Option 1: Physical move (same disks, new server)
# 1. Export the pool on the old server
zpool export tank

# 2. Move the disks to the new server
# 3. Import on the new server
zpool import tank

# That's it — ZFS pools are portable across machines

# Option 2: Send/receive to new pool (different disks)
# On the new server, create the new pool
zpool create newtank raidz2 /dev/disk/by-id/...

# On the old server, take a recursive snapshot to send
ssh old-server "zfs snapshot -r tank@migrate"

# Send from old to new over the network
ssh old-server "zfs send -Rv tank@migrate" | zfs recv -Fv newtank

# Verify
zfs list -t all newtank
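
For pools too large to send in one maintenance window, the incremental approach from the rsync section below works with ZFS too: one full send while services are still running, then a short incremental send after stopping writes. A sketch (snapshot names are arbitrary):

# Initial full send (services still running)
ssh old-server "zfs snapshot -r tank@migrate1 && zfs send -R tank@migrate1" | zfs recv -Fv newtank

# Stop writes on the old server, then send only what changed
ssh old-server "zfs snapshot -r tank@migrate2 && zfs send -R -i tank@migrate1 tank@migrate2" | zfs recv -Fv newtank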

Rsync Migration (Universal)

# Full system migration
rsync -aHAXv --progress --numeric-ids \
  root@old-server:/mnt/data/ /mnt/new-data/

# With bandwidth limiting (to not saturate the network)
rsync -aHAXv --progress --bwlimit=100M \
  root@old-server:/mnt/data/ /mnt/new-data/

# Incremental sync (run multiple times, final run with minimal downtime)
# First pass (may take hours):
rsync -aHAXv --progress root@old-server:/mnt/data/ /mnt/new-data/

# Final pass (stop services first, sync only changes):
ssh old-server "systemctl stop samba smbd"
rsync -aHAXv --progress --delete root@old-server:/mnt/data/ /mnt/new-data/

Block-Level Migration with dd (Disk Cloning)

# Clone an entire disk over the network (same-size or larger target)
# NOTE: boot the old server from live media first; cloning a mounted,
# running disk produces an inconsistent copy
ssh old-server "dd if=/dev/sda bs=64K status=progress" | \
  dd of=/dev/sda bs=64K

# Or use Clonezilla for a more robust disk cloning experience
# Boot both machines from Clonezilla USB
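
A clone is only as good as its verification. Since dd gives no end-to-end check, comparing checksums of both sides catches a corrupted transfer; if the target disk is larger, hash only as many bytes as the source holds. A sketch (assumes root SSH access like the examples above):

# Byte count of the source disk
SIZE=$(ssh root@old-server "blockdev --getsize64 /dev/sda")

# Hash the source, and the same number of bytes on the target
SRC=$(ssh root@old-server "sha256sum /dev/sda | awk '{print \$1}'")
DST=$(sudo head -c "$SIZE" /dev/sda | sha256sum | awk '{print $1}')

[ "$SRC" = "$DST" ] && echo "Clone verified" || echo "MISMATCH: re-clone"

Hashing multi-terabyte disks takes hours; for a quicker sanity check, hash just the first few GB of each side.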

Secure Data Wiping

When retiring drives, you must securely erase them before disposal or sale. Deleted files are trivially recoverable without proper wiping.

For HDDs (Spinning Drives)

# Option 1: nwipe (interactive, recommended)
sudo apt install nwipe
sudo nwipe /dev/sdX
# Select the DoD Short method (3 passes) for most cases
# Or PRNG Stream (1 pass) for drives that will be destroyed

# Option 2: shred (command line)
sudo shred -vfz -n 3 /dev/sdX
# -v: verbose, -f: force, -z: final zero pass, -n 3: three random passes
# This takes HOURS for large drives

# Option 3: ATA Secure Erase (fastest, firmware-level)
# Check if the drive supports it
sudo hdparm -I /dev/sdX | grep -i "security"
# Look for "supported" and "not frozen"

# If the drive is "frozen", suspend and wake the machine:
sudo rtcwake -m mem -s 5
# Then check again — it should be "not frozen"

# Set a temporary password
sudo hdparm --user-master u --security-set-pass Erase /dev/sdX

# Execute the secure erase
sudo hdparm --user-master u --security-erase Erase /dev/sdX
# This uses the drive's built-in erase function — usually faster than software wiping

For SSDs (Solid State Drives)

Software wiping (shred, dd) is NOT reliable for SSDs because of wear leveling — the drive firmware may keep copies of data in areas that aren't addressable by the OS.

# Option 1: NVMe Sanitize (best for NVMe drives)
# Install nvme-cli
sudo apt install nvme-cli

# Check sanitize support
sudo nvme id-ctrl /dev/nvme0 -H | grep -i "sanitize"

# Block erase (fastest)
sudo nvme sanitize /dev/nvme0 -a 2
# Check progress
sudo nvme sanitize-log /dev/nvme0

# Crypto erase (if drive supports encryption)
sudo nvme sanitize /dev/nvme0 -a 4

# Option 2: ATA Secure Erase (for SATA SSDs)
sudo hdparm --user-master u --security-set-pass Erase /dev/sdX
sudo hdparm --user-master u --security-erase-enhanced Erase /dev/sdX
# "Enhanced" erase also wipes the hidden areas

# Option 3: Manufacturer tools
# Samsung: Samsung Magician (Linux CLI available)
# Intel: Intel MAS (Memory and Storage Tool)
# WD: WD Dashboard
# These often support the most thorough erase for their specific drives

Verification After Wiping

# Verify the drive is actually zeroed (spot check)
sudo hexdump -C /dev/sdX | head -20
# A fully zeroed drive shows one line of zeros, then "*" (hexdump collapses repeated lines)

# More thorough verification (sample random sectors across the whole drive)
TOTAL_SECTORS=$(sudo blockdev --getsz /dev/sdX)
for i in $(seq 1 100); do
    SECTOR=$(( ( (RANDOM<<30) + (RANDOM<<15) + RANDOM ) % TOTAL_SECTORS ))
    DATA=$(sudo dd if=/dev/sdX bs=512 count=1 skip=$SECTOR 2>/dev/null | xxd | head -1)
    if ! echo "$DATA" | grep -q "0000 0000 0000 0000 0000 0000 0000 0000"; then
        echo "WARNING: Non-zero data found at sector $SECTOR"
    fi
done
echo "Spot check complete"

Wiping Decision Table

| Drive Type | Best Method | Time (4 TB) | Confidence |
| --- | --- | --- | --- |
| HDD (selling) | ATA Secure Erase | ~4 hours | High |
| HDD (destroying) | 1-pass PRNG | ~8 hours | High |
| HDD (government) | nwipe DoD 3-pass | ~24 hours | Very High |
| SATA SSD (selling) | ATA Enhanced Erase | Minutes | High |
| NVMe SSD (selling) | NVMe Sanitize | Minutes | Very High |
| SSD (destroying) | Physical destruction | Minutes | Absolute |

Parts That Are Reusable

When retiring a server, many parts have useful second lives:

| Component | Reusable? | Typical Second Life |
| --- | --- | --- |
| Case | Yes | New build, parts storage |
| PSU | Yes (if ATX standard) | New build (test first) |
| Fans | Yes | Replacement fans, projects |
| Cables (SATA, power) | Yes | Spares drawer |
| HDD caddies/trays | Yes | Same model servers |
| Rail kits | Yes | Same model servers |
| HDDs (healthy) | Yes | Cold backup drives, test drives |
| RAM (DDR4+) | Yes | Upgrade other machines |
| RAM (DDR3) | Maybe | Low demand, but still usable |
| Network cards | Yes | Other machines or sell |
| HBA cards | Yes | Other machines or sell |
| CPUs | Rarely | Platform-specific, limited reuse |

Testing Reused Components

# Test a reused PSU
# Use a PSU tester ($10-15) or the paperclip test:
# Short the green wire (PS_ON, pin 16) to any black wire (ground)
# PSU should spin up. Check voltages with a multimeter:
# 12V rail: 11.4V - 12.6V (acceptable)
# 5V rail: 4.75V - 5.25V (acceptable)
# 3.3V rail: 3.14V - 3.47V (acceptable)

# Test reused RAM
# Boot with the RAM installed and run memtest86+
# At minimum, run 2 full passes

# Test reused HDDs
sudo smartctl -t long /dev/sdX
# Wait for completion, then check:
sudo smartctl -a /dev/sdX
# Key metrics:
# - Reallocated_Sector_Ct: should be 0
# - Current_Pending_Sector: should be 0
# - Offline_Uncorrectable: should be 0
# - Power_On_Hours: check lifetime usage
# - SMART overall-health: should be PASSED

E-Waste Disposal

Don't throw old electronics in the trash. They contain hazardous materials (lead, mercury, cadmium) and recyclable materials (copper, gold, rare earth elements).

Disposal Options

| Option | Best For | Cost | Data Security |
| --- | --- | --- | --- |
| Electronics recycler | Most hardware | Free-$20 | You wipe first |
| Best Buy recycling | Consumer electronics | Free | You wipe first |
| Manufacturer take-back | Dell, HP, Lenovo | Free (sometimes) | They handle it |
| eBay/r/homelabsales | Working hardware | Profit | You wipe first |
| Local electronics recycler | Everything | Free-$20 | You wipe first |
| Metal scrap yard | Cases, heatsinks | Small profit | N/A |

Before Disposal Checklist

[ ] All drives securely wiped (see wiping section above)
[ ] Serial numbers recorded (remove from inventory)
[ ] Any reusable parts removed
[ ] Batteries removed (ship/dispose separately)
[ ] No labels with personal/network information visible
[ ] Photos taken (for insurance/records if valuable)

Physical Destruction for Sensitive Drives

If you're truly paranoid (or handling drives with financial/medical data):

# For HDDs: drill through the platters
# 3 holes through the drive, going through all platters
# Use a drill press, not a hand drill (safety!)

# For SSDs: shred the PCB
# Remove the case, snap the PCB and chips
# Some people use a hammer on the NAND chips

# Professional destruction:
# Iron Mountain, Shred-it, etc. provide certificates of destruction
# Overkill for a homelab, but available if needed

Power Efficiency as Upgrade Motivation

One of the best reasons to upgrade homelab hardware is power efficiency. Modern hardware does more with less:

| Generation | Typical Server | Idle Power | Performance (relative) | Power/Performance |
| --- | --- | --- | --- | --- |
| 2012 (DDR3, Sandy Bridge) | Dell R620 | 100-150W | 1.0x | 1.0x |
| 2015 (DDR4, Haswell) | Dell R630 | 60-100W | 1.8x | 3.0x |
| 2018 (DDR4, Skylake) | Dell R640 | 50-80W | 2.5x | 4.5x |
| 2022 (DDR5, Alder Lake) | Mini PC (i5-12400) | 15-25W | 3.0x | 12.0x |
| 2024 (DDR5, Efficient) | Mini PC (N100) | 8-15W | 1.2x | 10.0x |

The N100 mini PC at 10W idle has roughly the same single-threaded performance as a Xeon E5-2670 v2 at 150W idle. For many homelab workloads (NAS, reverse proxy, DNS, containers), that's all you need.

Measuring Your Current Power Draw

# Software-based estimation (rough)
sudo apt install powertop
sudo powertop --auto-tune  # Optimize power settings
sudo powertop --html=power-report.html

# For accurate measurements, use a Kill-A-Watt meter ($20-30)
# Plug your server into it and note:
# - Idle power (no workload)
# - Typical power (normal use)
# - Peak power (stress test)

# Calculate annual cost
echo "Annual cost at \$0.12/kWh:"
echo "Idle: $(echo "scale=2; 145 * 24 * 365 / 1000 * 0.12" | bc) dollars"
echo "Typical: $(echo "scale=2; 180 * 24 * 365 / 1000 * 0.12" | bc) dollars"

Power Budget for Your Homelab

| Category | Target Power | Examples |
| --- | --- | --- |
| Minimal | < 50W total | 1 mini PC + 1 NAS |
| Moderate | 50-200W total | 2-3 servers + switch |
| Heavy | 200-500W total | Rack with multiple servers |
| Excessive | 500W+ | Time to consolidate |

A good rule of thumb: every watt of 24/7 power costs about $1/year at $0.12/kWh (1 W × 8,760 hours = 8.76 kWh, × $0.12 ≈ $1.05). A 200W homelab costs roughly $200/year just in electricity.

Used Enterprise Hardware Lifecycle Tips

If you're buying used enterprise hardware (and you should — it's the best value in homelab), here are lifecycle tips:

What to Buy

| Hardware | Sweet Spot Age | Why |
| --- | --- | --- |
| Dell PowerEdge | 2-3 gen old | Well-documented, cheap parts |
| HP ProLiant | 2-3 gen old | Similar to Dell, good iLO |
| Supermicro | Any age | Standards-based, flexible |
| Enterprise SSDs | 2-3 years old | Still have write endurance left |
| Enterprise HDDs | < 30K power-on hours | Check SMART data before buying |

Where to Buy

The usual sources are eBay (where all the HBA and NIC prices above come from) and r/homelabsales; both also appear in the disposal table, since homelab hardware flows in both directions.

What to Check When Buying Used

# After receiving used hardware, immediately:

# 1. Check all drive SMART data
for drive in /dev/sd?; do
    echo "=== $drive ==="
    sudo smartctl -a $drive | grep -E "Model|Serial|Power_On|Reallocated|Pending|Temp"
done

# 2. Check RAM for errors
# Boot into memtest86+ and run 4 passes

# 3. Check CPU for throttling under load
stress-ng --cpu $(nproc) --timeout 10m &
watch -n 1 "cat /proc/cpuinfo | grep 'cpu MHz' | head -4; sensors | grep Core"
# Frequency should stay near max boost; temps should stay below the CPU's thermal limit

# 4. Check all fans are working
sudo ipmitool sensor list | grep -i fan

# 5. Check IPMI/iDRAC/iLO for hardware alerts
sudo ipmitool sel list
# Look for any error entries

The Retirement Checklist

When a machine reaches end-of-life, work through this:

Pre-Retirement:
[ ] Data migrated to replacement hardware
[ ] Services migrated and verified on new hardware
[ ] DNS/IP addresses updated to point to new hardware
[ ] Old machine removed from monitoring
[ ] Old machine removed from backup schedules

Data Wiping:
[ ] All drives securely wiped
[ ] IPMI/iDRAC/iLO passwords reset to defaults
[ ] BIOS settings reset to defaults
[ ] Any RAID controller config cleared

Inventory Update (a logging sketch follows this checklist):
[ ] Machine marked as retired in inventory
[ ] Retirement date recorded
[ ] Reason for retirement noted (failed, obsolete, power, replaced by X)
[ ] Disposal method recorded

Physical:
[ ] Reusable parts removed and cataloged
[ ] Labels removed or updated
[ ] Machine physically removed from rack
[ ] Disposed of properly (recycler, sold, donated)
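
To keep the Inventory Update items honest, here is a small sketch that appends each retirement to a CSV log (the filename and columns are arbitrary; quote arguments that contain commas):

#!/bin/bash
# retire.sh: append a retirement record to the inventory log
# Usage: ./retire.sh <device_name> <reason> <disposal_method>
LOG=retired.csv
[ -f "$LOG" ] || echo "device_name,retired_on,reason,disposal_method" > "$LOG"
echo "$1,$(date +%F),$2,$3" >> "$LOG"
echo "Recorded: $1 retired on $(date +%F) ($2, $3)"

Example: ./retire.sh nas-01 "replaced by mini PC" "sold on r/homelabsales"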

Final Thoughts

Hardware lifecycle management isn't glamorous, but it prevents the slow accumulation of mystery boxes in your homelab. Know what you have, know when it should be replaced, and have a plan for both the new hardware coming in and the old hardware going out.

The most common mistake I see in homelabs is keeping old, power-hungry hardware running "because it still works." A $200 mini PC that uses 10W is almost always a better deal than a free enterprise server that uses 200W. Do the math, track your inventory, and don't be afraid to retire hardware that's costing more in electricity than it's worth.