The Complete Docker Compose Backup Strategy for Self-Hosters

You’ve got a dozen Docker Compose stacks humming along — Nextcloud for files, Vaultwarden for passwords, Gitea for code, maybe Immich for photos. Everything works great.

Until it doesn’t.

A corrupted disk, a bad update, an accidental docker volume rm — and suddenly you’re staring at data loss. If you don’t have a backup strategy, it’s only a matter of time.

This guide walks through everything you need to back up your Docker Compose infrastructure properly.

What Needs Backing Up

Most people think “I’ll just back up the compose files” and call it done. That’s maybe 5% of what you actually need. Here’s the full picture:

1. Compose Files & Configuration

The basics:

  • docker-compose.yml / compose.yml
  • .env files (contain your secrets!)
  • Override files (docker-compose.override.yml)
  • Any config directories mounted into containers
# Simple config backup (GNU tar; --ignore-failed-read skips stacks without override files)
tar czf compose-configs.tar.gz --ignore-failed-read \
  /opt/stacks/*/docker-compose.yml \
  /opt/stacks/*/docker-compose.override.yml \
  /opt/stacks/*/.env

2. Named Docker Volumes

This is where most of your actual data lives. Databases, application state, uploaded files — it’s all in volumes.

# List all volumes
docker volume ls

# Export a volume
docker run --rm -v myvolume:/source:ro -v "$(pwd)":/backup alpine \
  tar czf /backup/myvolume.tar.gz -C /source .
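Restoring is the same operation in reverse. A minimal sketch, assuming the archive was created as above and the target volume name (myvolume) matches:

```shell
# Recreate the volume's contents from the archive;
# Docker creates the volume if it doesn't exist yet
docker run --rm -v myvolume:/target -v "$(pwd)":/backup alpine \
  tar xzf /backup/myvolume.tar.gz -C /target
```

Restore into a stopped stack, then bring it up and verify the application sees its data.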

3. Bind Mounts

If you mount host directories into containers (like ./data:/app/data), those need backing up too. The good news: they’re just regular directories, so any file backup tool works.
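For example, a bind-mounted data directory can be archived straight from the host; the paths here are illustrative:

```shell
# Archive a bind-mounted directory from the host
# (no container involvement needed)
tar czf /backups/nextcloud-appdata-$(date +%Y%m%d).tar.gz \
  -C /opt/stacks/nextcloud data
```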

4. Database Dumps

Raw volume backups of databases can be inconsistent if the database is running. For proper backups:

# PostgreSQL
docker exec my-postgres pg_dump -U user dbname > dump.sql

# MySQL/MariaDB
docker exec my-mysql mysqldump -u root -p"$PASS" --all-databases > dump.sql

# Redis: BGSAVE snapshots to dump.rdb in the background;
# wait for it to finish (compare LASTSAVE before/after) before copying
docker exec my-redis redis-cli BGSAVE
docker cp my-redis:/data/dump.rdb ./redis-dump.rdb

Always prefer logical dumps over raw file copies for databases.
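The restore side is symmetric. A sketch for the same hypothetical container names, feeding the dump back in over stdin:

```shell
# PostgreSQL: replay the dump into the running container
docker exec -i my-postgres psql -U user dbname < dump.sql

# MySQL/MariaDB: same pattern
docker exec -i my-mysql mysql -u root -p"$PASS" < dump.sql
```

The `-i` flag keeps stdin open so the dump streams into the client inside the container.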

Manual Backup Methods

The Quick and Dirty

Stop containers, copy everything, start them again:

cd /opt/stacks/nextcloud
docker compose down
tar czf /backups/nextcloud-$(date +%Y%m%d).tar.gz .
docker compose up -d

Downside: your service is offline during the backup, and the tar only captures the stack directory, so named volumes still need a separate export. Fine for personal use, not great if others depend on it.

Volume Export Without Downtime

For read-heavy services, you can export volumes while containers are running:

docker run --rm \
  -v nextcloud_data:/source:ro \
  -v /backups:/dest \
  alpine tar czf /dest/nextcloud_data.tar.gz -C /source .

The :ro mount means the backup container only reads. For most applications this works fine. For databases, always use logical dumps instead.

Using compose-backup

Our compose-backup tool automates all of this:

# One command, everything backed up
compose-backup

# Just configs, skip volumes
compose-backup --no-volumes

# Exclude specific large volumes
compose-backup --exclude immich_media

Automated Backup Strategies

Strategy 1: Cron + compose-backup

The simplest automation:

# crontab -e
# Daily at 3am, keep 14 days
0 3 * * * /usr/local/bin/compose-backup -o /backups/compose 2>&1 | logger -t compose-backup
0 4 * * * find /backups/compose -name '*.tar.gz' -mtime +14 -delete

Strategy 2: Restic for Deduplication

If your backups are large, restic provides deduplication and encryption:

# Initialize repo (once)
restic init -r /backups/restic-repo

# Backup compose directories
restic -r /backups/restic-repo backup /opt/stacks

# Backups can also go to S3, B2, SFTP, etc.
restic -r s3:s3.amazonaws.com/my-backups backup /opt/stacks
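restic also handles retention for you. A typical policy run after each backup might look like this (the keep counts are just a starting point):

```shell
# Keep 7 daily, 4 weekly, 6 monthly snapshots; delete the rest
restic -r /backups/restic-repo forget \
  --keep-daily 7 --keep-weekly 4 --keep-monthly 6 --prune

# Periodically verify repository integrity
restic -r /backups/restic-repo check
```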

Strategy 3: Database-Aware Backups

For stacks with databases, combine logical dumps with file backups:

#!/bin/bash
# pre-backup-hooks.sh

# Dump all databases first
for stack in /opt/stacks/*/; do
    compose="${stack}docker-compose.yml"

    # Check for postgres (assumes the service is named "db"
    # and the superuser is "postgres"; adjust per stack)
    if grep -q 'postgres' "$compose" 2>/dev/null; then
        container=$(docker compose -f "$compose" ps -q db 2>/dev/null)
        if [[ -n "$container" ]]; then
            docker exec "$container" pg_dumpall -U postgres > "${stack}db-dump.sql"
        fi
    fi
done

# Then run normal backup
compose-backup

Off-Site Backup

A backup on the same machine as your data isn’t really a backup. Here are simple off-site options:

rsync to Another Machine

# After compose-backup runs
rsync -az /backups/compose/ backup-server:/backups/compose/

rclone to Cloud Storage

# Configure once: rclone config
# Then sync
rclone sync /backups/compose/ b2:my-bucket/compose-backups/

Backblaze B2 is cheap (well under a cent per GB per month at current pricing) and works great for this.

Simple SCP

scp /backups/compose/latest.tar.gz user@remote:/backups/
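Whichever transport you use, it’s worth confirming the copy arrived intact, for example by comparing checksums (host and paths here are illustrative):

```shell
# Compare local and remote checksums; the two hashes should match
sha256sum /backups/compose/latest.tar.gz
ssh user@remote sha256sum /backups/latest.tar.gz
```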

The 3-2-1 Rule

The gold standard for backups:

  • 3 copies of your data
  • 2 different storage media
  • 1 off-site copy

For a self-hoster, this might look like:

  1. Live data on your server
  2. Local backup on a separate drive or NAS
  3. Cloud backup on B2, S3, or a remote VPS
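A minimal cron script tying the three copies together might look like this; the repo path, drive mount, and rclone remote name are assumptions you’d swap for your own:

```shell
#!/bin/bash
set -euo pipefail

# Copy 1 is the live data in /opt/stacks.

# Copy 2: back up to a restic repo on a second local drive
restic -r /mnt/backup-drive/restic-repo backup /opt/stacks

# Copy 3: mirror that repo off-site with rclone
rclone sync /mnt/backup-drive/restic-repo b2:my-bucket/restic-repo
```

Because restic repos are content-addressed, syncing the repo directory off-site carries the full snapshot history with it.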

Testing Your Backups

This is the part everyone skips. An untested backup is not a backup.

Test quarterly:

  1. Spin up a fresh VM or spare machine
  2. Copy your backup to it
  3. Try to restore everything
  4. Verify services actually work
# Quick restore test
mkdir /tmp/restore-test
cd /tmp/restore-test
compose-backup --restore latest
# Check that files look right
ls -la */docker-compose.yml
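Beyond eyeballing the files, you can ask Docker Compose to validate each restored stack definition without starting anything:

```shell
# Parse and validate every restored compose file
for f in */docker-compose.yml; do
  docker compose -f "$f" config -q && echo "OK: $f"
done
```

A stack that fails here has a broken or incomplete restore; catching that on a test machine beats discovering it mid-disaster.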

Common Mistakes

  1. Only backing up compose files — Your data is in volumes, not YAML files
  2. Backing up running databases by copying files — Use logical dumps
  3. No off-site copy — Disk dies, backup dies with it
  4. Never testing restores — You don’t have a backup until you’ve restored from it
  5. No retention policy — Backups fill up your disk, then everything breaks
Suggested Schedule

What                     How Often   Retention   Tool
Compose configs          Daily       30 days     compose-backup --no-volumes
Database dumps           Daily       14 days     pg_dump / mysqldump
Volume snapshots         Weekly      4 weeks     compose-backup
Full backup + off-site   Weekly      8 weeks     restic + rclone

Quick Start Checklist

  • Install compose-backup
  • Run compose-backup --dry-run to see what you have
  • Run your first backup: compose-backup
  • Add database dump scripts for any Postgres/MySQL stacks
  • Set up cron for daily automated backups
  • Set up off-site sync (rsync/rclone/scp)
  • Test a restore
  • Set a calendar reminder to test restores quarterly

Don’t wait for data loss to take backups seriously. The 20 minutes you spend setting this up now will save you days of pain later.

Check your server health too: selfhost-doctor gives you a one-command health check.