Time Series Data with InfluxDB in Your Homelab
If you run a homelab long enough, you'll want to answer questions like: "What was my server's CPU temperature at 3 AM last Tuesday?" or "How much bandwidth has my NAS used over the past six months?" or "Is my basement humidity trending up?" These are time series questions — questions about how a measurement changes over time — and they need a database designed specifically for that workload.
InfluxDB is one of the most popular time series databases, and for good reason. It's purpose-built for high-write, time-stamped data, it has a rich query language, it integrates with hundreds of data sources through Telegraf, and it ships with a built-in dashboard UI. For homelabs in particular, it hits a sweet spot: powerful enough for serious monitoring, lightweight enough to run on modest hardware.
This guide walks through deploying InfluxDB 2.x in your homelab, collecting metrics with Telegraf, writing Flux queries, setting up retention policies, and building dashboards — plus some practical advice on when InfluxDB makes sense versus other options like Prometheus.
InfluxDB vs Prometheus: Picking the Right Tool
Before diving into setup, it's worth understanding where InfluxDB fits relative to Prometheus, since both are common choices for homelab monitoring.
| Feature | InfluxDB 2.x | Prometheus |
|---|---|---|
| Data model | Measurements, tags, fields | Metrics with labels |
| Data collection | Push (Telegraf agents push data in) | Pull (Prometheus scrapes endpoints) |
| Query language | Flux (functional, pipeline-based) | PromQL (mathematical, selector-based) |
| Built-in UI | Yes (dashboards, data explorer, alerts) | Basic expression browser (needs Grafana) |
| Storage engine | TSM (Time-Structured Merge Tree) | Custom TSDB with WAL |
| Best for | IoT, sensor data, events, logs | Infrastructure metrics, Kubernetes |
| Retention policies | Native, per-bucket | Via flags or Thanos/Cortex |
| Cardinality handling | Handles high cardinality better | Can struggle with high cardinality |
| Alerting | Built-in checks and notifications | Requires Alertmanager |
The short version: Prometheus excels at infrastructure monitoring in a pull-based model and pairs beautifully with Grafana and Alertmanager. InfluxDB excels when you need to push data in from diverse sources — IoT sensors, custom applications, network devices, environmental monitors — and you want a single tool that handles ingestion, storage, querying, visualization, and alerting.
Many homelabs run both. Prometheus for infrastructure metrics (CPU, memory, disk, container stats) and InfluxDB for everything else (weather stations, energy monitors, custom app metrics, network flow data). They complement each other well.
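"Cardinality" in the table above means the number of distinct series the database must track: roughly the product of the distinct values of each tag (or label). A quick back-of-envelope sketch — the host and CPU counts here are illustrative, not from any real deployment:

```python
def estimate_cardinality(tag_value_counts):
    # Each unique combination of tag values is a separate series,
    # so cardinality is the product of distinct values per tag.
    total = 1
    for count in tag_value_counts.values():
        total *= count
    return total

# Hypothetical homelab: 5 hosts, each reporting 8 cores plus cpu-total.
cpu_series = estimate_cardinality({"host": 5, "cpu": 9})
print(cpu_series)  # 45 series for the cpu measurement alone
```

Per-container or per-request tags multiply this quickly, which is why high-cardinality workloads favor InfluxDB in the comparison above.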
Installing InfluxDB 2.x
Docker Compose (Recommended)
Docker is the cleanest way to run InfluxDB in a homelab. It isolates the database, makes upgrades straightforward, and keeps your host system clean:
# docker-compose.yml
services:
  influxdb:
    image: influxdb:2.7
    container_name: influxdb
    restart: unless-stopped
    ports:
      - "8086:8086"
    volumes:
      - influxdb-data:/var/lib/influxdb2
      - influxdb-config:/etc/influxdb2
    environment:
      - DOCKER_INFLUXDB_INIT_MODE=setup
      - DOCKER_INFLUXDB_INIT_USERNAME=admin
      - DOCKER_INFLUXDB_INIT_PASSWORD=your-secure-password-here
      - DOCKER_INFLUXDB_INIT_ORG=homelab
      - DOCKER_INFLUXDB_INIT_BUCKET=default
      - DOCKER_INFLUXDB_INIT_RETENTION=30d
      - DOCKER_INFLUXDB_INIT_ADMIN_TOKEN=your-admin-token-here
    healthcheck:
      test: ["CMD", "influx", "ping"]
      interval: 30s
      timeout: 10s
      retries: 3

volumes:
  influxdb-data:
  influxdb-config:
Start it up:
docker compose up -d
The DOCKER_INFLUXDB_INIT_* variables handle first-run setup automatically. After the container starts, the InfluxDB UI is available at http://your-server:8086.
Bare Metal Installation
If you prefer running InfluxDB directly on the host (useful for dedicated monitoring servers):
# Ubuntu/Debian
curl -fsSL https://repos.influxdata.com/influxdata-archive.key | sudo gpg --dearmor -o /etc/apt/trusted.gpg.d/influxdata.gpg
echo "deb https://repos.influxdata.com/debian stable main" | sudo tee /etc/apt/sources.list.d/influxdata.list
sudo apt update && sudo apt install influxdb2
# Start and enable
sudo systemctl enable --now influxdb
# Run initial setup
influx setup \
--username admin \
--password your-secure-password \
--org homelab \
--bucket default \
--retention 30d \
--force
Resource Requirements
InfluxDB 2.x is reasonably lightweight for homelab workloads:
| Metric Rate | RAM | CPU | Disk |
|---|---|---|---|
| < 5,000 points/sec | 1-2 GB | 1-2 cores | SSD recommended |
| 5,000-50,000 points/sec | 4-8 GB | 2-4 cores | SSD required |
| > 50,000 points/sec | 8+ GB | 4+ cores | NVMe recommended |
Most homelabs fall well under 5,000 points/sec. A Raspberry Pi 4 with 4 GB RAM can handle typical homelab monitoring loads comfortably.
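To sanity-check where you land in that table, a quick back-of-envelope: every field Telegraf collects per interval becomes one point, so the write rate is roughly hosts × fields per host ÷ collection interval. The numbers below (5 hosts, ~300 fields each, Telegraf's 10-second default) are assumptions for illustration:

```python
def points_per_sec(hosts, fields_per_host, interval_sec):
    # One point per field per collection interval.
    return hosts * fields_per_host / interval_sec

rate = points_per_sec(hosts=5, fields_per_host=300, interval_sec=10)
print(rate)  # 150.0 -- comfortably inside the smallest tier
```

Even a generous homelab estimate lands two orders of magnitude under the 5,000 points/sec line.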
Collecting Data with Telegraf
Telegraf is InfluxDB's companion agent — a plugin-driven collector that can gather metrics from systems, databases, network gear, IoT sensors, APIs, and more. It runs on each machine you want to monitor and pushes data to InfluxDB.
Installing Telegraf
# Ubuntu/Debian
sudo apt install telegraf
# Or via Docker alongside InfluxDB
# Add to your docker-compose.yml:
  telegraf:
    image: telegraf:1.30
    container_name: telegraf
    restart: unless-stopped
    volumes:
      - ./telegraf.conf:/etc/telegraf/telegraf.conf:ro
      - /var/run/docker.sock:/var/run/docker.sock:ro  # For Docker metrics
      - /:/hostfs:ro  # For disk metrics
    environment:
      - HOST_ETC=/hostfs/etc
      - HOST_PROC=/hostfs/proc
      - HOST_SYS=/hostfs/sys
      - HOST_VAR=/hostfs/var
      - HOST_RUN=/hostfs/run
      - HOST_MOUNT_PREFIX=/hostfs
    depends_on:
      - influxdb
Telegraf Configuration
Telegraf's config file defines which inputs to collect and where to send the data. Here's a practical homelab configuration:
# telegraf.conf
# Global agent configuration
[agent]
interval = "10s"
round_interval = true
metric_batch_size = 1000
metric_buffer_limit = 10000
flush_interval = "10s"
hostname = "homeserver"
# Output to InfluxDB 2.x
[[outputs.influxdb_v2]]
urls = ["http://influxdb:8086"]
token = "your-admin-token-here"
organization = "homelab"
bucket = "default"
# System metrics
[[inputs.cpu]]
percpu = true
totalcpu = true
collect_cpu_time = false
[[inputs.mem]]
[[inputs.disk]]
ignore_fs = ["tmpfs", "devtmpfs", "devfs", "iso9660", "overlay", "aufs", "squashfs"]
[[inputs.diskio]]
[[inputs.net]]
[[inputs.system]]
[[inputs.processes]]
# Docker container metrics
[[inputs.docker]]
endpoint = "unix:///var/run/docker.sock"
gather_services = false
container_names = []
perdevice = true
total = true
# Temperature sensors (great for homelab hardware monitoring;
# requires the lm-sensors package on the host)
[[inputs.sensors]]
# Ping latency to external services
[[inputs.ping]]
urls = ["1.1.1.1", "8.8.8.8", "google.com"]
count = 3
ping_interval = 1.0
timeout = 2.0
# SMART disk health (requires smartmontools and a sudoers rule for smartctl)
[[inputs.smart]]
use_sudo = true
Start Telegraf and verify data is flowing:
# Test the config
telegraf --config telegraf.conf --test
# Start the service
sudo systemctl enable --now telegraf
# Check logs for errors
journalctl -u telegraf -f
Collecting IoT and Sensor Data
One of InfluxDB's strengths is handling IoT data. If you have temperature sensors, energy monitors, or weather stations, you can push data directly via the InfluxDB API:
# Write a temperature reading using the line protocol
curl -s -X POST "http://influxdb:8086/api/v2/write?org=homelab&bucket=sensors" \
-H "Authorization: Token your-token" \
-H "Content-Type: text/plain" \
--data-raw "temperature,location=basement,sensor=dht22 value=18.5 $(date +%s)000000000"
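The line protocol shown above (`measurement,tags fields timestamp`) is simple enough to generate from any language, which is what makes InfluxDB friendly to custom sensor scripts. A minimal Python sketch — a hand-rolled helper, not the official client library, which handles this for you — that builds one line and escapes the commas, spaces, and equals signs the protocol reserves:

```python
def to_line_protocol(measurement, tags, fields, ts_ns):
    """Build one InfluxDB line-protocol point with a nanosecond timestamp.

    Escapes commas, spaces, and equals signs in keys and tag values;
    this covers the common cases for simple sensor data.
    """
    def esc(s):
        return str(s).replace(",", r"\,").replace(" ", r"\ ").replace("=", r"\=")

    tag_str = ",".join(f"{esc(k)}={esc(v)}" for k, v in sorted(tags.items()))
    field_str = ",".join(f"{esc(k)}={float(v)}" for k, v in fields.items())
    return f"{esc(measurement)},{tag_str} {field_str} {ts_ns}"

line = to_line_protocol("temperature",
                        {"location": "basement", "sensor": "dht22"},
                        {"value": 18.5},
                        1700000000000000000)
print(line)
# temperature,location=basement,sensor=dht22 value=18.5 1700000000000000000
```

POST the resulting line to `/api/v2/write` exactly as in the curl example above.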
For MQTT-based IoT devices (common with Home Assistant, Zigbee2MQTT, etc.), Telegraf has an MQTT consumer plugin:
# Collect from MQTT broker (e.g., Mosquitto)
[[inputs.mqtt_consumer]]
servers = ["tcp://mosquitto:1883"]
topics = [
"zigbee2mqtt/+/temperature",
"zigbee2mqtt/+/humidity",
"homeassistant/sensor/+/state",
]
data_format = "value"
data_type = "float"
topic_tag = "topic"
Querying with Flux
Flux is InfluxDB 2.x's query and scripting language. It uses a pipeline-based syntax where data flows through transformations:
Basic Queries
// Get CPU usage for the last hour
from(bucket: "default")
|> range(start: -1h)
|> filter(fn: (r) => r._measurement == "cpu")
|> filter(fn: (r) => r._field == "usage_idle")
|> filter(fn: (r) => r.cpu == "cpu-total")
|> map(fn: (r) => ({r with _value: 100.0 - r._value})) // Convert idle to usage
|> aggregateWindow(every: 5m, fn: mean)
Comparison Queries
// Compare disk usage across all hosts
from(bucket: "default")
|> range(start: -24h)
|> filter(fn: (r) => r._measurement == "disk")
|> filter(fn: (r) => r._field == "used_percent")
|> filter(fn: (r) => r.path == "/")
|> aggregateWindow(every: 1h, fn: last)
|> group(columns: ["host"])
Alerting Queries
// Alert when any disk exceeds 85% capacity
import "influxdata/influxdb/monitor"
from(bucket: "default")
|> range(start: -5m)
|> filter(fn: (r) => r._measurement == "disk")
|> filter(fn: (r) => r._field == "used_percent")
|> last()
|> monitor.check(
crit: (r) => r._value > 90.0,
warn: (r) => r._value > 85.0,
messageFn: (r) => "Disk ${r.path} on ${r.host} at ${string(v: r._value)}%",
)
Retention Policies and Downsampling
Raw 10-second metrics are useful for recent troubleshooting but wasteful to keep forever. InfluxDB uses buckets with retention periods to age out old data. Combine this with downsampling to keep long-term trends without the storage cost.
Bucket Strategy for Homelabs
# Create buckets with different retention periods
influx bucket create --name raw --retention 7d --org homelab
influx bucket create --name hourly --retention 90d --org homelab
influx bucket create --name daily --retention 365d --org homelab
influx bucket create --name longterm --retention 0 --org homelab # forever
Downsampling Task
Create InfluxDB tasks that periodically aggregate raw data into the coarser buckets. Each script below is saved as its own task:
// Task: Downsample raw metrics to hourly averages
option task = {name: "downsample_hourly", every: 1h, offset: 5m}
from(bucket: "raw")
|> range(start: -task.every)
|> filter(fn: (r) => r._measurement == "cpu" or
r._measurement == "mem" or
r._measurement == "disk")
|> aggregateWindow(every: 1h, fn: mean, createEmpty: false)
|> to(bucket: "hourly", org: "homelab")
// Task: Downsample hourly to daily
option task = {name: "downsample_daily", every: 1d, offset: 10m}
from(bucket: "hourly")
|> range(start: -task.every)
|> aggregateWindow(every: 1d, fn: mean, createEmpty: false)
|> to(bucket: "daily", org: "homelab")
This gives you 10-second resolution for the past week, hourly resolution for the past 3 months, and daily resolution for a year. Adjust to your storage constraints and monitoring needs.
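To see what the tiered scheme buys you, compare the points retained per series against keeping raw data for a full year. This is plain arithmetic, assuming one point per interval per series:

```python
# Points kept per series under the tiered scheme vs. raw-forever.
raw_per_day = 24 * 60 * 60 // 10           # 10-second interval -> 8640 points/day

tiered = 7 * raw_per_day + 90 * 24 + 365   # 7d raw + 90d hourly + 365d daily
raw_year = 365 * raw_per_day               # one year of raw 10s data

print(tiered, raw_year)  # 63005 vs 3153600 -- roughly a 98% reduction
```

The long-term trend data costs almost nothing; nearly all the storage goes to the one week of raw resolution.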
Building Dashboards
InfluxDB 2.x includes a built-in dashboard system that's surprisingly capable. You can build monitoring dashboards without installing Grafana (though Grafana remains an excellent option if you want more customization).
Dashboard Layout for a Homelab
A practical homelab dashboard might include these panels:
- System Overview — CPU, memory, and swap usage per host (line charts)
- Disk Health — Used percentage per mount point (gauge charts with thresholds)
- Network Traffic — Bytes in/out per interface (area charts)
- Docker Containers — CPU and memory per container (stacked bar chart)
- Temperature — CPU and ambient temps (line chart with alert thresholds)
- Ping Latency — Round-trip time to external hosts (line chart)
Create dashboards through the InfluxDB UI at http://your-server:8086, and use the CLI to export them as templates for version control:
# Export a dashboard as a template (find IDs with `influx dashboards`)
influx export --dashboards <dashboard-id> -f dashboard-backup.yml
# Apply the template to a fresh instance
influx apply -f dashboard-backup.yml
Grafana Integration
If you prefer Grafana (and many people do — its visualization options are unmatched), InfluxDB 2.x works as a native data source:
# Grafana data source configuration (provisioning)
apiVersion: 1
datasources:
  - name: InfluxDB
    type: influxdb
    access: proxy
    url: http://influxdb:8086
    jsonData:
      version: Flux
      organization: homelab
      defaultBucket: default
    secureJsonData:
      token: your-admin-token-here
Performance Tuning
A few tweaks make InfluxDB run better on homelab hardware:
# /etc/influxdb2/config.toml (or environment variables)
# Reduce WAL fsync frequency for non-critical data
# (trades durability for write performance)
storage-wal-fsync-delay = "100ms"
# Limit concurrent queries to prevent OOM on low-RAM systems
query-concurrency = 4
query-queue-size = 10
# Set memory limit for query execution
query-max-memory-bytes = 1073741824 # 1 GB
For Docker deployments, set memory limits to prevent InfluxDB from consuming all available RAM:
    deploy:
      resources:
        limits:
          memory: 2G
Backup and Recovery
Don't forget to back up InfluxDB itself. The built-in backup command creates portable snapshots:
# Full backup
influx backup /mnt/nas/backups/influxdb/$(date +%Y%m%d) --org homelab
# Restore to a fresh instance
influx restore /mnt/nas/backups/influxdb/20260214 --org homelab
Automate this with a cron job or systemd timer, and integrate it with your broader homelab backup strategy.
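A systemd timer version might look like the units below. The unit names and backup path are assumptions; adjust them to your setup.

```ini
# /etc/systemd/system/influxdb-backup.service
[Unit]
Description=Nightly InfluxDB backup

[Service]
Type=oneshot
# %% escapes the percent signs from systemd so the shell sees date +%Y%m%d
ExecStart=/bin/sh -c 'influx backup /mnt/nas/backups/influxdb/$(date +%%Y%%m%%d) --org homelab'

# /etc/systemd/system/influxdb-backup.timer
[Unit]
Description=Run the InfluxDB backup nightly

[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target
```

Enable it with `sudo systemctl enable --now influxdb-backup.timer`; `Persistent=true` catches up on a missed run if the server was off overnight.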
Getting Started
If you're starting from scratch, here's the practical order:
- Deploy InfluxDB via Docker Compose
- Install Telegraf on your primary server with the basic system inputs
- Verify data in the InfluxDB Data Explorer UI
- Build your first dashboard with CPU, memory, and disk panels
- Add Telegraf to additional hosts
- Add specialized inputs (Docker, SMART, ping, MQTT) as needed
- Set up retention buckets and downsampling tasks
- Configure alerting for critical metrics (disk full, high temperature, host unreachable)
InfluxDB gives you deep visibility into your homelab's behavior over time. Once you start collecting metrics, you'll wonder how you ever ran a homelab without them.