Docker Compose Profiles: Managing Dev, Staging, and Production

You’ve got a self-hosted stack running in production. Now you want a staging copy to test updates before they hit your live services. Maybe a dev environment too, with debug tools and hot-reloading. The old way? Three separate docker-compose.yml files with 80% overlap, constantly drifting out of sync.

Docker Compose profiles solve this. One compose file, multiple environments. Services tagged with profiles only start when you explicitly activate that profile. Everything else runs by default.

How Profiles Work

Any service without a profiles key starts with every docker compose up. Services with a profiles key only start when that profile is activated.

services:
  app:
    image: myapp:latest
    # No profiles key — always starts

  debug-tools:
    image: busybox
    profiles: [dev]
    # Only starts when "dev" profile is active

Activate a profile with the --profile flag:

# Start only default services
docker compose up -d

# Start default services + dev profile
docker compose --profile dev up -d

A service can belong to multiple profiles:

services:
  adminer:
    image: adminer:latest
    profiles: [dev, staging]

This starts Adminer in both dev and staging, but not production.

The Environment Problem

Most self-hosters hit this wall: you want different behavior per environment. Different ports, different volumes, different image tags. Before profiles, your options were:

  1. Multiple compose files (docker-compose.yml + docker-compose.staging.yml) — works but gets messy with docker compose -f file1.yml -f file2.yml up
  2. Environment variables everywhere — flexible but hard to reason about
  3. Separate directories — copy-paste hell

Profiles give you a fourth option: everything in one file, clearly labeled.

Real-World Self-Hosting Setup

Here’s a practical example — a web app with database, reverse proxy, monitoring, and dev tools:

services:
  # === CORE (always runs) ===
  app:
    image: ${APP_IMAGE:-myapp:latest}
    restart: unless-stopped
    environment:
      - DATABASE_URL=postgres://app:${DB_PASS}@db:5432/app
      - NODE_ENV=${NODE_ENV:-production}
    depends_on:
      db:
        condition: service_healthy
    networks:
      - backend
      - frontend

  db:
    image: postgres:16-alpine
    restart: unless-stopped
    environment:
      POSTGRES_DB: app
      POSTGRES_USER: app
      POSTGRES_PASSWORD: ${DB_PASS}
    volumes:
      - db_data:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U app"]
      interval: 10s
      timeout: 5s
      retries: 5
    networks:
      - backend

  # === PRODUCTION ONLY ===
  caddy:
    image: caddy:2-alpine
    profiles: [prod]
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile:ro
      - caddy_data:/data
    depends_on:
      - app
    networks:
      - frontend

  backup:
    image: prodrigestivill/postgres-backup-local:16
    profiles: [prod]
    restart: unless-stopped
    environment:
      POSTGRES_HOST: db
      POSTGRES_DB: app
      POSTGRES_USER: app
      POSTGRES_PASSWORD: ${DB_PASS}
      SCHEDULE: "@daily"
      BACKUP_KEEP_DAYS: 7
    volumes:
      - ./backups:/backups
    depends_on:
      - db
    networks:
      - backend

  # === STAGING ===
  caddy-staging:
    image: caddy:2-alpine
    profiles: [staging]
    restart: unless-stopped
    ports:
      - "8080:80"
    volumes:
      - ./Caddyfile.staging:/etc/caddy/Caddyfile:ro
    depends_on:
      - app
    networks:
      - frontend

  # === DEV TOOLS ===
  adminer:
    image: adminer:latest
    profiles: [dev, staging]
    ports:
      - "9090:8080"
    depends_on:
      - db
    networks:
      - backend

  mailpit:
    image: axllent/mailpit:latest
    profiles: [dev]
    ports:
      - "1025:1025"
      - "8025:8025"
    networks:
      - backend

  pgadmin:
    image: dpage/pgadmin4:latest
    profiles: [dev]
    environment:
      PGADMIN_DEFAULT_EMAIL: admin@example.com
      PGADMIN_DEFAULT_PASSWORD: devpassword
    ports:
      - "5050:80"
    depends_on:
      - db
    networks:
      - backend

volumes:
  db_data:
  caddy_data:

networks:
  backend:
  frontend:
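The Caddyfile mounted into the prod proxy isn't shown above; a minimal sketch might look like this (example.com and the app's internal port 3000 are placeholders for your own domain and port):

```
example.com {
    reverse_proxy app:3000
}
```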

Running Each Environment

# Production — core services + reverse proxy + backups
docker compose --profile prod up -d

# Staging — core + staging proxy + adminer
docker compose --profile staging up -d

# Development — core + all dev tools
docker compose --profile dev up -d

# Multiple profiles at once
docker compose --profile dev --profile staging up -d

Using .env Files Per Environment

Combine profiles with environment-specific .env files for maximum flexibility:

# .env.production
APP_IMAGE=myapp:1.2.3
NODE_ENV=production
DB_PASS=super-secret-prod-password

# .env.staging
APP_IMAGE=myapp:staging
NODE_ENV=staging
DB_PASS=staging-password

# .env.dev
APP_IMAGE=myapp:dev
NODE_ENV=development
DB_PASS=devpassword

Run with the matching env file:

# Production
docker compose --env-file .env.production --profile prod up -d

# Staging
docker compose --env-file .env.staging --profile staging up -d

# Dev
docker compose --env-file .env.dev --profile dev up -d
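If you deploy often, a tiny wrapper keeps the env file and profile in sync so nobody fat-fingers a staging deploy with the production password. A sketch — compose_cmd is a hypothetical helper, and the file names assume the .env layout above:

```shell
#!/bin/sh
# compose_cmd: print the docker compose invocation for a given environment.
# (hypothetical helper; the .env file names match the examples above)
compose_cmd() {
  case "$1" in
    prod)    echo "docker compose --env-file .env.production --profile prod up -d" ;;
    staging) echo "docker compose --env-file .env.staging --profile staging up -d" ;;
    dev)     echo "docker compose --env-file .env.dev --profile dev up -d" ;;
    *)       echo "usage: compose_cmd prod|staging|dev" >&2; return 1 ;;
  esac
}

# To actually deploy, eval the printed command:
# eval "$(compose_cmd "${1:-prod}")"
```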

Setting Default Profiles

Tired of typing --profile every time? Set it in your environment or in the .env file:

# In .env (default compose env file)
COMPOSE_PROFILES=prod

Or export it in your shell:

export COMPOSE_PROFILES=prod
docker compose up -d  # automatically activates "prod" profile

Multiple default profiles:

COMPOSE_PROFILES=dev,staging
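On a dedicated host, it's convenient to bake the profile into that host's .env next to the other settings, so plain docker compose up -d and docker compose down behave symmetrically there (values mirror the .env.production example above):

```
# .env on the production host
COMPOSE_PROFILES=prod
APP_IMAGE=myapp:1.2.3
NODE_ENV=production
DB_PASS=super-secret-prod-password
```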

Profile-Aware Commands

Profiles affect more than just up. They control which services are visible to all compose commands:

# Only shows running services for the active profile
docker compose --profile dev ps

# Logs for dev-profile services
docker compose --profile dev logs -f adminer mailpit

# Stop only dev-profile services (leaves core running)
docker compose --profile dev stop adminer mailpit pgadmin

# Pull images for a specific profile
docker compose --profile prod pull

Important: docker compose down without a profile flag only stops services without profiles (the default ones). To stop profile services too:

# Stop everything including prod-profile services
docker compose --profile prod down

Profiles vs. Multiple Compose Files

Both approaches work. Here’s when to use each:

Use profiles when:

  1. Environments share the same services, with different extras per env
  2. Team members need simple, memorable commands
  3. CI/CD pipelines should stay one-liners
  4. A shared base with small deltas covers everything

Use multiple files when:

  1. Environments have radically different architectures
  2. You need complex override chains

You can also combine them. Use profiles for service selection and override files for config differences:

docker compose -f compose.yml -f compose.prod.yml --profile prod up -d
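As a sketch of that combination, a small override file can carry production-only config while the base file stays environment-neutral (compose.prod.yml and the values below are hypothetical):

```yaml
# compose.prod.yml — production-only overrides, merged over compose.yml
services:
  app:
    logging:
      driver: json-file
      options:
        max-size: "10m"
        max-file: "3"
    deploy:
      resources:
        limits:
          memory: 512M
```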

Practical Patterns

Debug Sidecar

Attach a debug container to your app’s network namespace:

services:
  debug:
    image: nicolaka/netshoot
    profiles: [debug]
    network_mode: "service:app"
    command: sleep infinity

When something breaks: docker compose --profile debug up -d debug, then docker compose --profile debug exec debug bash to poke around from inside the app’s network.

One-Off Tasks

Use profiles for maintenance tasks that shouldn’t run continuously:

services:
  db-migrate:
    image: myapp:latest
    profiles: [migrate]
    command: ["npm", "run", "migrate"]
    depends_on:
      db:
        condition: service_healthy
    networks:
      - backend

  db-seed:
    image: myapp:latest
    profiles: [seed]
    command: ["npm", "run", "seed"]
    depends_on:
      db:
        condition: service_healthy
    networks:
      - backend

# Run migration then exit
docker compose --profile migrate run --rm db-migrate

# Seed database
docker compose --profile seed run --rm db-seed
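A variation, if you'd rather have migrations gate app startup automatically: Compose's service_completed_successfully condition makes the app wait for the migration container to exit cleanly. Note this sketch drops the migrate profile, since a profile-less app can't depend on a profile-only service:

```yaml
services:
  db-migrate:
    image: myapp:latest
    command: ["npm", "run", "migrate"]
    restart: "no"          # run once, then exit

  app:
    image: myapp:latest
    depends_on:
      db-migrate:
        condition: service_completed_successfully
```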

Monitoring Stack

Keep monitoring optional so lightweight deployments skip it:

services:
  prometheus:
    image: prom/prometheus:latest
    profiles: [monitoring]
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml:ro
      - prometheus_data:/prometheus
    ports:
      - "9091:9090"
    networks:
      - backend

  grafana:
    image: grafana/grafana:latest
    profiles: [monitoring]
    volumes:
      - grafana_data:/var/lib/grafana
    ports:
      - "3001:3000"
    depends_on:
      - prometheus
    networks:
      - backend

# Full stack with monitoring
docker compose --profile prod --profile monitoring up -d
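The prometheus.yml mounted above needs at least one scrape job to do anything. A minimal sketch — the job name and app:3000 target are assumptions; point it at whatever metrics endpoint your app actually exposes:

```yaml
# prometheus.yml — minimal scrape configuration
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: app
    static_configs:
      - targets: ["app:3000"]   # assumes the app serves /metrics on port 3000
```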

Troubleshooting

“Service not found” errors

If docker compose stop myservice says the service doesn’t exist, it’s because the service has a profile and you didn’t activate it:

# Wrong — compose doesn't see profile services
docker compose stop adminer

# Right
docker compose --profile dev stop adminer

Dependencies across profiles

A service without a profile can’t depends_on a service with a profile — Docker Compose won’t know whether the dependency will be running. Keep core dependencies profile-free.
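To illustrate, this layout breaks as soon as cache sits behind a profile (service names here are hypothetical); either drop the profile or tag both services with it:

```yaml
services:
  app:
    image: myapp:latest
    depends_on:
      - cache          # fails unless the prod profile is active

  cache:
    image: redis:7-alpine
    profiles: [prod]   # fix: remove this line, or add the same profile to app
```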

Profile services starting unexpectedly

Check for COMPOSE_PROFILES in your .env file or shell environment. This silently activates profiles:

# Check what's set
echo $COMPOSE_PROFILES
grep COMPOSE_PROFILES .env

Compose version

Profiles were introduced back in Compose 1.28, but in practice you’ll want Docker Compose V2 (the docker compose plugin) — the old docker-compose V1 binary is end-of-life. Check your version:

docker compose version
# Docker Compose version v2.x.x ✓

Wrapping Up

Docker Compose profiles let you manage multiple environments from a single file without the complexity of override chains or the maintenance burden of duplicate configs. Tag services with the environments they belong to, activate profiles when you need them, and keep your default stack clean.

For most self-hosters, this means one compose.yml that handles production, a staging clone for testing updates, and a dev environment with database GUIs and debug tools — all without touching three separate files every time you add a service.

Start simple: add a dev profile with Adminer or pgAdmin to your existing stack. Once you see how clean it keeps things, you’ll want profiles everywhere.