If you’re running a home server, you need visibility into what’s happening. Grafana + Prometheus is the gold standard monitoring stack — and it’s completely free to self-host.
In this guide, you’ll set up Prometheus to collect metrics and Grafana to visualize them with beautiful dashboards. By the end, you’ll have real-time monitoring of your server’s CPU, memory, disk, network, and any Docker containers you’re running.
## What You’ll Build
- Prometheus — time-series database that scrapes metrics from your services
- Grafana — dashboard and visualization platform
- Node Exporter — exposes hardware/OS metrics to Prometheus
- cAdvisor — exposes Docker container metrics
## Prerequisites
- A Linux server (Ubuntu, Debian, or similar)
- Docker and Docker Compose installed
- Basic terminal knowledge
- ~512MB RAM for the full stack
## Step 1: Create the Project Directory

```shell
mkdir -p ~/monitoring && cd ~/monitoring
```
## Step 2: Docker Compose Configuration

Create `docker-compose.yml`:

```yaml
services:
  prometheus:
    image: prom/prometheus:latest
    container_name: prometheus
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml
      - prometheus_data:/prometheus
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
      - '--storage.tsdb.retention.time=30d'
    ports:
      - "9090:9090"
    restart: unless-stopped

  grafana:
    image: grafana/grafana:latest
    container_name: grafana
    volumes:
      - grafana_data:/var/lib/grafana
    environment:
      - GF_SECURITY_ADMIN_PASSWORD=changeme
      - GF_USERS_ALLOW_SIGN_UP=false
    ports:
      - "3000:3000"
    restart: unless-stopped
    depends_on:
      - prometheus

  node-exporter:
    image: prom/node-exporter:latest
    container_name: node-exporter
    volumes:
      - /proc:/host/proc:ro
      - /sys:/host/sys:ro
      - /:/rootfs:ro
    command:
      - '--path.procfs=/host/proc'
      - '--path.sysfs=/host/sys'
      - '--path.rootfs=/rootfs'
      - '--collector.filesystem.mount-points-exclude=^/(sys|proc|dev|host|etc)($$|/)'
    ports:
      - "9100:9100"
    restart: unless-stopped

  cadvisor:
    image: gcr.io/cadvisor/cadvisor:latest
    container_name: cadvisor
    volumes:
      - /:/rootfs:ro
      - /var/run:/var/run:ro
      - /sys:/sys:ro
      - /var/lib/docker/:/var/lib/docker:ro
    ports:
      - "8080:8080"
    restart: unless-stopped

volumes:
  prometheus_data:
  grafana_data:
```
## Step 3: Configure Prometheus

Create `prometheus.yml`:

```yaml
global:
  scrape_interval: 15s
  evaluation_interval: 15s

scrape_configs:
  - job_name: 'prometheus'
    static_configs:
      - targets: ['localhost:9090']

  - job_name: 'node-exporter'
    static_configs:
      - targets: ['node-exporter:9100']

  - job_name: 'cadvisor'
    static_configs:
      - targets: ['cadvisor:8080']
```
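Before starting the stack, you can optionally validate the config with `promtool`, which ships inside the official Prometheus image; a quick sketch, assuming `prometheus.yml` is in your current directory:

```shell
# Validate prometheus.yml with promtool from the Prometheus image
docker run --rm \
  -v "$(pwd)/prometheus.yml:/etc/prometheus/prometheus.yml:ro" \
  --entrypoint promtool \
  prom/prometheus:latest \
  check config /etc/prometheus/prometheus.yml
```

A syntax error here is much cheaper to catch than a crash-looping container later.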
## Step 4: Start the Stack

```shell
docker compose up -d
```

Wait about 30 seconds for everything to initialize, then verify:

```shell
docker compose ps
```

All four containers should show as “running.”
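If a container is not running, `docker compose logs <name>` will usually show why. You can also query Prometheus directly to confirm it is up and scraping; a quick check, assuming the default ports:

```shell
# Liveness check — should print "Prometheus Server is Healthy."
curl -s http://localhost:9090/-/healthy

# Pull each scrape target's health out of the targets API response
curl -s http://localhost:9090/api/v1/targets | grep -o '"health":"[^"]*"'
```

Every target should report `"health":"up"` once the first scrape interval has passed.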
## Step 5: Access Grafana

Open your browser to `http://your-server-ip:3000` and log in:

- Username: `admin`
- Password: `changeme` (set in `docker-compose.yml`)

Important: Change the default password immediately after first login.
## Step 6: Add Prometheus as a Data Source

- In Grafana, go to Connections → Data Sources
- Click Add data source
- Select Prometheus
- Set the URL to `http://prometheus:9090`
- Click Save & Test — you should see “Data source is working”
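If you prefer configuration as code, Grafana can also provision the data source automatically at startup from a file; a sketch, assuming you mount it into the container at `/etc/grafana/provisioning/datasources/` (the filename is up to you):

```yaml
# datasources.yml — mount into /etc/grafana/provisioning/datasources/
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://prometheus:9090
    isDefault: true
```

This keeps the whole stack reproducible from your `~/monitoring` directory alone.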
## Step 7: Import Pre-Built Dashboards
Grafana has thousands of community dashboards. Here are the essential ones:
### Node Exporter Dashboard (Server Metrics)
- Go to Dashboards → Import
- Enter ID: 1860
- Select your Prometheus data source
- Click Import
This gives you CPU, memory, disk, and network graphs instantly.
### Docker Container Dashboard
- Go to Dashboards → Import
- Enter ID: 893
- Select your Prometheus data source
- Click Import
Now you can see per-container CPU, memory, and network usage.
## Step 8: Set Up Alerts

Grafana can alert you when things go wrong. Here’s how to set up a basic disk space alert:

- Go to Alerting → Alert Rules → New Alert Rule
- Set the query (the Disk Full expression from the table below works well)
- Set the evaluation interval to 5m
- Add a contact point (email, Discord webhook, Slack, etc.)
- Save the rule
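For the query step, an expression along these lines fires when the root filesystem drops below 10% free space:

```promql
# Free space on / as a percentage; the alert condition is < 10
(node_filesystem_avail_bytes{mountpoint="/"}
  / node_filesystem_size_bytes{mountpoint="/"}) * 100 < 10
```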
## Common Alert Rules

| Alert | PromQL Query |
|---|---|
| High CPU | `100 - (avg(rate(node_cpu_seconds_total{mode="idle"}[5m])) * 100) > 90` |
| Low Memory | `(node_memory_MemAvailable_bytes / node_memory_MemTotal_bytes) * 100 < 10` |
| Disk Full | `(node_filesystem_avail_bytes{mountpoint="/"} / node_filesystem_size_bytes{mountpoint="/"}) * 100 < 10` |
| Container Down | `absent(container_last_seen{name="your-container"})` |
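These rules can also live on the Prometheus side in a rules file loaded via `rule_files`; a sketch, assuming you mount the file into the container (the `alerts.yml` name and labels are illustrative):

```yaml
# alerts.yml — referenced from prometheus.yml with:
#   rule_files:
#     - /etc/prometheus/alerts.yml
groups:
  - name: node-alerts
    rules:
      - alert: DiskAlmostFull
        expr: (node_filesystem_avail_bytes{mountpoint="/"} / node_filesystem_size_bytes{mountpoint="/"}) * 100 < 10
        for: 5m                      # must stay below 10% for 5 minutes
        labels:
          severity: warning
        annotations:
          summary: "Root filesystem below 10% free"
```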
## Step 9: Monitoring Additional Services

To monitor more services, add scrape targets to `prometheus.yml`:

```yaml
  - job_name: 'my-app'
    static_configs:
      - targets: ['my-app:8080']
    metrics_path: '/metrics'
```
Many self-hosted apps expose Prometheus metrics natively:
- Traefik — built-in metrics endpoint
- Nextcloud — via exporter plugin
- Pi-hole — via pi-hole-exporter
- Nginx — via nginx-prometheus-exporter
After adding new targets, restart Prometheus:

```shell
docker compose restart prometheus
```
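A restart works, but Prometheus can also reload its configuration in place if you start it with the `--web.enable-lifecycle` flag (added to its `command:` list in the compose file):

```shell
# Ask Prometheus to re-read prometheus.yml without restarting
# (requires --web.enable-lifecycle in the container's command list)
curl -X POST http://localhost:9090/-/reload
```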
## Securing the Stack

For production use:

- Change default passwords — update `GF_SECURITY_ADMIN_PASSWORD`
- Restrict ports — use a reverse proxy (Traefik, Caddy) instead of exposing ports directly
- Enable HTTPS — Put Grafana behind your reverse proxy with SSL
- Network isolation — Keep monitoring on an internal Docker network
Example with Traefik labels (replace port exposure):

```yaml
  grafana:
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.grafana.rule=Host(`grafana.yourdomain.com`)"
      - "traefik.http.routers.grafana.tls.certresolver=letsencrypt"
```
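As a sketch of the network-isolation point: put the monitoring services on a dedicated Docker network and drop their published ports, so only the reverse proxy is reachable from outside (the `monitoring` network name is illustrative):

```yaml
services:
  prometheus:
    networks: [monitoring]   # no "ports:" section — internal-only
  grafana:
    networks: [monitoring]   # reached via the reverse proxy instead

networks:
  monitoring: {}
```

Attach your proxy container to the same network so it can still route to Grafana.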
## Troubleshooting

### Prometheus shows “DOWN” for targets

- Check the exporter container’s logs: `docker compose logs node-exporter`
- Verify network connectivity between containers
- Ensure you’re using container names (not `localhost`) in `prometheus.yml`
### Grafana dashboard shows “No Data”

- Verify the data source is connected (Connections → Data Sources → Save & Test)
- Check the time range picker — make sure it includes recent data
- Wait a few minutes for Prometheus to collect initial metrics
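You can also confirm Prometheus actually holds data by running an instant query against its HTTP API; `up` should return one sample per healthy target:

```shell
curl -s 'http://localhost:9090/api/v1/query?query=up'
```

If this returns an empty result set, the problem is on the Prometheus side, not in Grafana.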
### High memory usage

- Reduce retention: lower `--storage.tsdb.retention.time` (e.g. from `30d` to `15d`)
- Increase the scrape interval to `30s` for less critical metrics
- Use recording rules to pre-aggregate expensive queries
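A minimal recording-rule sketch for that last tip (the group and rule names are illustrative; the file is loaded via `rule_files` like any other rules file):

```yaml
groups:
  - name: cpu-recording
    interval: 1m
    rules:
      # Pre-compute per-instance CPU usage so dashboards query one cheap series
      - record: instance:cpu_usage:percent
        expr: 100 - (avg by (instance) (rate(node_cpu_seconds_total{mode="idle"}[5m])) * 100)
```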
### cAdvisor won’t start on newer kernels

Some newer kernels need an extra mount:

```yaml
  cadvisor:
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
    # Add this if the default config fails
    privileged: true
```
## Resource Usage
Expected resource consumption for this stack:
| Service | RAM | CPU | Disk (30 days) |
|---|---|---|---|
| Prometheus | ~200MB | Low | ~500MB |
| Grafana | ~100MB | Low | ~50MB |
| Node Exporter | ~15MB | Minimal | — |
| cAdvisor | ~50MB | Low | — |
| Total | ~365MB | Low | ~550MB |
## What’s Next
Once your monitoring stack is running:
- Add more exporters for services you care about
- Create custom dashboards for your specific setup
- Set up alerting to catch problems before they become outages
- Add Loki for log aggregation (Grafana’s log companion)
- Try Alertmanager for advanced alert routing and grouping
## Conclusion
Grafana + Prometheus gives you enterprise-grade monitoring for free. With this setup, you’ll never wonder “is my server okay?” again — you’ll know, with real-time dashboards and proactive alerts.
The stack is lightweight enough to run on a Raspberry Pi and powerful enough for production workloads. Start with the basics here, then expand as your homelab grows.