Self-Hosting Duplicati: Encrypted Cloud Backup with Docker

You have a self-hosted stack running. Databases, configuration files, media libraries, personal documents — all living on your server. Now ask yourself: if that drive died right now, what would you lose?

Duplicati is a free, open-source backup client that encrypts your data before sending it to cloud storage. It supports over 30 storage backends — S3, Backblaze B2, Google Drive, OneDrive, SFTP, WebDAV, and more — with AES-256 encryption, deduplication, and incremental backups. Everything runs through a clean web UI, so you schedule it once and forget about it.

The critical difference between Duplicati and just syncing files to the cloud: Duplicati encrypts before upload. Your cloud provider never sees your data in plain text. That matters when you’re backing up personal files, database dumps, and server configurations.

Why Duplicati?

Duplicati vs Other Backup Tools

Feature           Duplicati     Restic               Kopia               BorgBackup
Web UI            ✅ Built-in    ❌ CLI only           ✅ Built-in          ❌ CLI only
Encryption        AES-256       AES-256              AES-256-GCM         AES-256
Deduplication     Block-level   Content-defined      Content-defined     Chunk-based
Cloud backends    30+ native    S3/SFTP/rest-server  S3/SFTP/GCS/Azure   SSH/rest only
Scheduling        Built-in      External (cron)      Built-in            External (cron)
License           LGPL          BSD-2                Apache 2.0          BSD-3
Resource usage    Medium        Low                  Low                 Medium

Duplicati wins on accessibility. If you want a GUI, built-in scheduling, and native support for consumer cloud storage (Google Drive, OneDrive, Dropbox), Duplicati is the easiest path. If you prefer CLI-first and maximum performance, look at Kopia or Restic.

Prerequisites

  • Docker and Docker Compose installed
  • A cloud storage account (Backblaze B2, S3, Google Drive, etc.)
  • The data you want to back up accessible on the Docker host
  • 512MB+ RAM (more for large backup sets)

Quick Start with Docker Compose

Create your project directory:

mkdir -p ~/duplicati && cd ~/duplicati

Create docker-compose.yml:

services:
  duplicati:
    image: lscr.io/linuxserver/duplicati:latest
    container_name: duplicati
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=America/New_York
    volumes:
      - ./config:/config
      - ./backups:/backups
      - /home:/source/home:ro
      - /opt:/source/opt:ro
      - /etc:/source/etc:ro
    ports:
      - "8200:8200"
    restart: unless-stopped

Key points about the volume mounts:

  • /config — Duplicati’s database and settings (back this up separately!)
  • /backups — Local backup destination (optional, for local copies)
  • /source/* — Read-only mounts of directories you want to back up

Mount whatever directories matter to you. The :ro flag ensures Duplicati can only read your data, not modify it.

Start it up:

docker compose up -d

Access the web UI at http://your-server:8200.

First-Time Setup

On first launch, Duplicati asks whether to set the UI to single-user or multi-user mode. For a home server, pick single-user and set a strong password — anyone with access to port 8200 can see your backup configuration and storage credentials.

Creating Your First Backup Job

Click “Add backup” → “Configure a new backup”:

  1. General — Name your backup, set an encryption passphrase. Write this passphrase down somewhere safe. Without it, your backups are unrecoverable. AES-256 is the default and the right choice.

  2. Destination — Pick your storage backend. For Backblaze B2:

    • Backend: B2 Cloud Storage
    • Bucket name: your bucket
    • Account ID and Application Key from Backblaze dashboard
    • Folder path: /server-backup (or whatever makes sense)
  3. Source Data — Browse the /source/ directory tree and select what to back up. Common selections:

    • /source/home/user/ — Home directory
    • /source/opt/docker/ — Docker volumes and configs
    • /source/etc/ — System configuration
  4. Schedule — Set when backups run. Daily at 3 AM is a solid default. Duplicati supports custom cron-style schedules too.

  5. Options — Set retention policy:

    • Keep all backups for 7 days
    • Keep one per week for 4 weeks
    • Keep one per month for 12 months
    • This is the “smart” retention — covers you for recent accidents and long-term recovery
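If you prefer setting it directly, the same smart schedule can be expressed through the --retention-policy advanced option as comma-separated timeframe:interval pairs (a 0s interval keeps every backup within that timeframe). The exact string below is my reading of Duplicati's documented format — verify it against your version's option reference:

--retention-policy=7D:0s,4W:1W,12M:1M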

Filters and Exclusions

In the advanced options, add exclusion filters to skip junk:

--exclude=*.tmp
--exclude=*.pyc
--exclude=*/node_modules/
--exclude=*/__pycache__/
--exclude=*/.cache/
--exclude=*/Trash/

This keeps your backup size manageable and speeds up the process significantly.

Backing Up Docker Volumes

For Docker containers with databases (PostgreSQL, MySQL, MongoDB), don’t just back up the raw volume files. Instead, dump the database first, then back up the dump:

Create a pre-backup script at ~/duplicati/pre-backup.sh:

#!/bin/bash
# Dump databases before Duplicati runs
DUMP_DIR="/opt/docker-dumps"
mkdir -p "$DUMP_DIR"

# PostgreSQL example
docker exec postgres pg_dumpall -U postgres > "$DUMP_DIR/postgres-all.sql"

# MySQL example
docker exec mariadb mariadb-dump -u root -p"$MYSQL_ROOT_PASSWORD" --all-databases > "$DUMP_DIR/mariadb-all.sql"

# MongoDB example
docker exec mongodb mongodump --out /tmp/mongodump
docker cp mongodb:/tmp/mongodump "$DUMP_DIR/mongodb/"

echo "Database dumps complete: $(date)"

In Duplicati’s backup job settings, under Advanced Options, add:

--run-script-before=/config/pre-backup.sh

Mount the script into the container by adding to your compose file:

    volumes:
      - ./pre-backup.sh:/config/pre-backup.sh:ro
      - /opt/docker-dumps:/source/docker-dumps:ro

Now Duplicati dumps your databases, then backs up the fresh dumps alongside everything else.
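One failure mode worth guarding against: if a dump command fails silently, Duplicati happily backs up a stale file. A check like this sketch (the 60-minute threshold and .sql pattern are arbitrary assumptions) can be appended to pre-backup.sh so a non-zero exit signals the problem — and if you use --run-script-before-required rather than --run-script-before, Duplicati will refuse to run the backup when the script fails:

```shell
#!/bin/bash
# Fail if the dump directory is empty or any dump is older than a threshold.
check_dumps() {
    local dump_dir="$1" max_age_min="$2"
    local dump
    for dump in "$dump_dir"/*.sql; do
        [ -e "$dump" ] || { echo "No dumps found in $dump_dir" >&2; return 1; }
        # find prints the file only if it was modified MORE than max_age_min ago
        if [ -n "$(find "$dump" -mmin +"$max_age_min")" ]; then
            echo "Stale dump: $dump" >&2
            return 1
        fi
    done
    echo "All dumps fresh"
}

# Usage (at the end of pre-backup.sh): check_dumps "$DUMP_DIR" 60 || exit 1
```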

Storage Backend Deep Dive

B2 is the cheapest mainstream option at $0.006/GB/month for storage and $0.01/GB for downloads. For a typical home server backing up 100GB, that’s $0.60/month.

Backend: B2 Cloud Storage
Bucket: your-backup-bucket
Account ID: your-account-id
Application Key: your-app-key

Create an application key with read/write access to only the backup bucket — don’t use the master key.
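The $0.60/month figure is just size × rate; a throwaway helper makes it easy to re-run the math for your own dataset (the 0.006 rate is hard-coded from the paragraph above — check it against Backblaze's current price list):

```shell
# Print estimated monthly B2 storage cost in dollars for a given size in GB
b2_monthly_cost() {
    awk -v gb="$1" 'BEGIN { printf "%.2f\n", gb * 0.006 }'
}

b2_monthly_cost 100   # -> 0.60
```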

Amazon S3 / S3-Compatible

Works with AWS S3, MinIO, Wasabi, Cloudflare R2, and any S3-compatible storage:

BSBRAAaeueWWcrcgSSkvkieeeoAAnrtnccd:::cc:eesyussS3osss3.u-areIKCm-aDeoabs:ymzut:poc-yank1oytaeuoiwtrubs-rl.k-ecesoyemcreotryourprovider)

For Cloudflare R2 (free egress!), use:

  • Server: your-account-id.r2.cloudflarestorage.com
  • Storage class: leave default

SFTP / SSH

Back up to another server you control:

Backend: SFTP (SSH)
Server: your-server.example.com
Port: 22
Path: /backups/duplicati
Username: backup-user
SSH Key: (path to your private key)

Google Drive

Duplicati supports Google Drive via OAuth. Click AuthID in the backend config and follow the Google login flow. Note: Google may revoke access periodically, so check your backup logs.

Reverse Proxy Configuration

Caddy

duplicati.yourdomain.com {
    reverse_proxy localhost:8200
}

Nginx

server {
    listen 443 ssl http2;
    server_name duplicati.yourdomain.com;

    ssl_certificate /etc/letsencrypt/live/duplicati.yourdomain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/duplicati.yourdomain.com/privkey.pem;

    location / {
        proxy_pass http://localhost:8200;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

Important: If you expose Duplicati through a reverse proxy, use authentication (Authelia, Authentik, or at minimum HTTP basic auth). Duplicati’s built-in password is better than nothing, but defense in depth matters for a tool that holds your storage credentials.

Backup Verification and Restore

Verifying Backups

Duplicati can verify backup integrity on a schedule. Under your backup job → Options:

--backup-test-samples=3

This checks 3 random files after each backup to confirm they can be decrypted and restored. You can also run manual verification: click your backup job → Verify files.

Restoring Files

Click Restore in the left menu, select your backup job, then:

  1. Browse the backup contents by date
  2. Select files or directories to restore
  3. Choose restore location (original path or alternate)
  4. Click Restore

For disaster recovery where Duplicati itself is gone:

# Spin up a fresh Duplicati container
docker compose up -d

# Go to Restore → "Restore from configuration"
# Point it at your storage backend
# Enter your encryption passphrase
# Duplicati rebuilds its database from the backup metadata

This is why remembering your encryption passphrase is critical. The remote backup files contain everything needed to rebuild — but only if you can decrypt them.

Command-Line Restore

For headless recovery, use the Duplicati CLI inside the container:

docker exec duplicati mono /app/duplicati/Duplicati.CommandLine.exe restore \
  "b2://your-bucket/server-backup?auth-username=ACCOUNT_ID&auth-password=APP_KEY" \
  --passphrase="your-encryption-passphrase" \
  --restore-path=/tmp/restore \
  "home/user/important-file.txt"

Monitoring and Notifications

Email Notifications

Under Settings → Default options, add:

--send-mail-to=you@gmail.com
--send-mail-from=duplicati-alerts@gmail.com
--send-mail-url=smtps://smtp.gmail.com:465
--send-mail-username=you@gmail.com
--send-mail-password=your-app-password
--send-mail-level=Warning,Error

Use --send-mail-level=All if you want success notifications too.

Webhook Notifications

For services like ntfy, Gotify, or Healthchecks.io:

--run-script-after=/config/notify.sh

Example notify.sh for Healthchecks.io:

#!/bin/bash
if [ "$DUPLICATI__PARSED_RESULT" = "Success" ]; then
    curl -fsS --retry 3 https://hc-ping.com/your-check-uuid
else
    curl -fsS --retry 3 https://hc-ping.com/your-check-uuid/fail
fi

Troubleshooting

“Connection failed” to storage backend

  • Verify credentials in the backup configuration
  • Test the connection with the Test connection button
  • For B2: ensure the application key has the right bucket permissions
  • For S3: check the region matches your bucket’s actual region
  • For SFTP: verify SSH key format (Duplicati expects OpenSSH format)

Backup is very slow

  • Enable --asynchronous-concurrent-upload-limit=4 for parallel uploads
  • Check your upload bandwidth — B2 and S3 scale well, but your ISP may throttle
  • Large first backup is normal; incrementals are much faster
  • Exclude unnecessary directories (node_modules, caches)

“Database is locked” error

Duplicati uses SQLite internally. If a backup job crashes:

docker restart duplicati

If persistent, repair the database: About → Show log → Database → Repair.

Backup size keeps growing despite retention

  • Check that retention policy is correctly configured
  • Run Compact manually from the backup job menu
  • Verify --keep-versions or smart retention is set, not “keep forever”
  • Deduplication means changed files create new blocks but old blocks only disappear after retention expires

Restore fails with “Passphrase incorrect”

  • The passphrase is case-sensitive and space-sensitive
  • Copy-paste from a password manager instead of typing
  • If you changed the passphrase mid-backup-set, you may need the original passphrase for older versions

High memory usage

For large backup sets (1TB+), Duplicati’s in-memory block index can grow:

--blocksize=1MB
--block-hash-algorithm=SHA256

Larger block sizes reduce memory usage at the cost of slightly less deduplication efficiency.
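The memory pressure tracks the number of blocks Duplicati must index, which is roughly dataset size divided by block size. A rough calculator (a sketch — real block counts vary with deduplication and compression):

```shell
# Approximate block count = dataset bytes / block size bytes
block_count() {
    awk -v size="$1" -v bs="$2" 'BEGIN { printf "%d\n", size / bs }'
}

# 1 TiB at the 100 KiB default vs. a 1 MiB block size:
block_count $((1024**4)) $((100*1024))    # ~10.7 million blocks
block_count $((1024**4)) $((1024*1024))   # ~1 million blocks
```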

Power User Tips

  1. The 3-2-1 rule: 3 copies of data, 2 different media, 1 offsite. Duplicati handles the offsite copy; keep a local backup too (mount a USB drive to /backups).

  2. Rotate encryption keys annually: Create a new backup job with a new passphrase yearly. Keep old jobs until their retention expires, then delete.

  3. Back up Duplicati’s own config: The /config directory contains your job definitions and server database. Back this up separately (even a simple cron tar to another location).

  4. Use labels for Docker volumes: Instead of mounting raw paths, use Docker labels to document what each volume contains and whether it needs backup.

  5. Test restores quarterly: A backup you’ve never restored is a backup you hope works. Pick a random file, restore it, verify it. Put it on your calendar.

  6. Combine with Docker Volume Management: Use the volume backup scripts from that guide to create consistent snapshots before Duplicati runs.

  7. Bandwidth scheduling: Under advanced options, use --throttle-upload=5mb during work hours and remove the limit overnight with scheduled option overrides.
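Tip 3 can be a one-line cron job wrapped around something like this sketch (the NAS path is a placeholder — any second disk or remote mount that isn't the same drive as ~/duplicati/config will do):

```shell
# Archive Duplicati's config directory and keep only the newest 14 archives.
archive_config() {
    local src="$1" dest="$2"
    local stamp
    stamp=$(date +%F)
    mkdir -p "$dest"
    tar -czf "$dest/duplicati-config-$stamp.tar.gz" -C "$src" .
    # Prune everything past the 14 most recent archives
    ls -1t "$dest"/duplicati-config-*.tar.gz | tail -n +15 | xargs -r rm --
}

# Example cron entry (nightly at 02:30), assuming this lives in a wrapper script:
# 30 2 * * * /usr/local/bin/archive-duplicati-config.sh
# Usage: archive_config "$HOME/duplicati/config" /mnt/nas/duplicati-config
```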

Conclusion

Duplicati fills a specific niche well: encrypted backups to commodity cloud storage with a web UI that doesn’t require terminal comfort. It’s not the fastest backup tool, and power users may prefer Kopia or Restic for their efficiency. But for “set it and forget it” encrypted cloud backups, Duplicati is hard to beat.

The real cost of not having backups isn’t the storage — it’s the irreplaceable data you lose when a drive fails. At $0.60/month for 100GB on Backblaze B2, there’s no excuse.

Set up Duplicati. Configure your backup jobs. Test a restore. Then stop thinking about it until you need it — and hope you never do.