Docker Networking Explained: Bridge, Host, and Macvlan
Networking is the part of Docker that trips up most self-hosters. Your containers need to talk to each other, to the host, and to the outside world — and Docker gives you several ways to wire that up.
The three modes you’ll actually use: bridge (the default), host (skip Docker’s network layer entirely), and macvlan (give containers their own IP on your LAN). Each has tradeoffs, and picking the wrong one leads to hours of debugging.
This guide explains when to use each, with real Docker Compose examples you can copy.
Prerequisites
- Docker and Docker Compose installed
- Basic familiarity with IP addresses and ports
- A Linux host (macvlan has quirks on macOS/Windows)
How Docker Networking Works
When Docker starts, it creates a virtual network interface called docker0. This is a software bridge — think of it as a virtual switch that containers plug into.
Every container gets its own network namespace with its own IP address, routing table, and network interfaces. Docker handles the plumbing to connect these namespaces to your host and the outside world.
```shell
# See Docker's default networks
docker network ls
# Inspect the default bridge
docker network inspect bridge
```
You’ll see three built-in networks: bridge, host, and none. Most self-hosted services use custom bridge networks — and for good reason.
Bridge Networking (The Default)
Bridge is what you get when you don’t specify anything. Containers connect to an internal virtual network and reach the outside world through NAT (Network Address Translation) on the host.
Default Bridge vs Custom Bridge
Docker’s default bridge network works, but custom bridges are better for self-hosting:
| Feature | Default Bridge | Custom Bridge |
|---|---|---|
| DNS resolution by container name | ❌ | ✅ |
| Automatic isolation | ❌ (all containers share it) | ✅ |
| Connect/disconnect live | ❌ | ✅ |
Always use custom bridge networks. The default bridge requires --link (deprecated) for container-to-container communication by name.
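You can see the difference with a quick throwaway test (the network and container names here are arbitrary examples, not from any particular setup):

```shell
# On a custom bridge, containers resolve each other by name
docker network create demo-net
docker run -d --name web --network demo-net nginx:alpine
docker run --rm --network demo-net alpine ping -c 1 web   # works: "web" resolves via Docker's embedded DNS
# On the default bridge, the same lookup fails
docker run --rm alpine ping -c 1 web || echo "no name resolution on the default bridge"
# Cleanup
docker rm -f web && docker network rm demo-net
```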
Docker Compose Example
```yaml
services:
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: supersecret
      POSTGRES_DB: appdata
    volumes:
      - pgdata:/var/lib/postgresql/data
    networks:
      - backend

  app:
    image: your-app:latest
    ports:
      - "8080:3000"
    environment:
      DATABASE_URL: postgres://postgres:supersecret@db:5432/appdata
    networks:
      - backend
      - frontend

  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
      - "443:443"
    networks:
      - frontend

volumes:
  pgdata:

networks:
  backend:
  frontend:
```
Key points:
- `db` is only on `backend` — Nginx can’t reach it directly
- `app` bridges both networks — it talks to the database AND the reverse proxy
- Container names are DNS names — `app` connects to `db:5432`, not an IP address
- Only exposed ports reach the host — `8080` and `80`/`443` are accessible from your LAN
When to Use Bridge
- Most self-hosted services — Jellyfin, Nextcloud, Paperless, Vaultwarden
- Multi-container stacks — apps that need databases, caches, or sidecars
- When you want port mapping — expose only what’s needed
- When isolation matters — keep services separated
Port Mapping Deep Dive
```yaml
ports:
  # host:container — bind to all interfaces
  - "8080:80"
  # Bind to localhost only (safer)
  - "127.0.0.1:8080:80"
  # Bind to a specific LAN IP
  - "192.168.1.50:8080:80"
  # Random host port (check with docker port)
  - "80"
  # UDP
  - "51820:51820/udp"
```
Security tip: If your service sits behind a reverse proxy, bind to 127.0.0.1 so it’s not directly accessible from the network:
```yaml
ports:
  - "127.0.0.1:3000:3000"
```
Host Networking
Host mode removes the network isolation between the container and the host. The container uses the host’s network stack directly — no NAT, no port mapping, no virtual bridge.
Docker Compose Example
```yaml
services:
  pihole:
    image: pihole/pihole:latest
    network_mode: host
    environment:
      TZ: America/New_York
      WEBPASSWORD: changeme
      FTLCONF_dns_listeningMode: all
    volumes:
      - ./etc-pihole:/etc/pihole
      - ./etc-dnsmasq.d:/etc/dnsmasq.d
    restart: unless-stopped
```
With network_mode: host, Pi-hole binds directly to port 53 on your host’s IP. No port mapping needed — or allowed.
When to Use Host
- DNS servers (Pi-hole, AdGuard Home) — need to bind to port 53 without NAT complications
- Network monitoring (Netdata, Prometheus node-exporter) — need to see all host interfaces
- Performance-critical services — eliminates NAT overhead (marginal, but measurable at scale)
- Services that need mDNS/SSDP — multicast doesn’t cross bridge networks easily
- VPN servers (WireGuard) — sometimes simpler than mapping UDP + managing routes
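As an illustration of the VPN case, here is a sketch of a WireGuard server in host mode using the popular `linuxserver/wireguard` image. The image name and environment variables are assumptions based on that image’s conventions — check its documentation for the current options:

```yaml
services:
  wireguard:
    image: linuxserver/wireguard:latest
    network_mode: host          # binds UDP 51820 directly on the host
    cap_add:
      - NET_ADMIN               # required to create the wg interface
    environment:
      TZ: America/New_York
      PEERS: "2"                # number of client configs to generate
    volumes:
      - ./wireguard:/config
    restart: unless-stopped
```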
When NOT to Use Host
- Most web apps — bridge with port mapping works fine
- When you need multiple services on the same port — they’ll conflict
- When isolation matters — every host port is accessible to the container
- On macOS/Windows — host networking doesn’t work as expected (Docker runs in a VM)
Tradeoffs
Host networking trades isolation for convenience: you skip NAT and port mapping entirely, but the container shares every port and interface with the host, and two host-mode services can never listen on the same port.
Macvlan: Containers with Real LAN IPs
Macvlan gives each container its own IP address on your physical network. To your router and other devices, the container looks like a separate machine.
This is powerful for self-hosters who want services accessible at dedicated IPs — like a Pi-hole that devices point to directly, or a NAS that shows up as its own device.
Setting Up a Macvlan Network
First, identify your host’s network interface and subnet:
```shell
# Find your interface name and IP
ip addr show
# Common names: eth0, enp3s0, eno1
```
Create the macvlan network:
```yaml
networks:
  homenet:
    driver: macvlan
    driver_opts:
      parent: eth0                       # Your host's physical interface
    ipam:
      config:
        - subnet: 192.168.1.0/24
          gateway: 192.168.1.1
          ip_range: 192.168.1.224/27     # .224-.255 reserved for containers
```
Important: Reserve an IP range in your router’s DHCP settings so container IPs don’t conflict with other devices. The ip_range above carves out .224 through .255 for Docker.
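If the `/27` arithmetic is unfamiliar, a quick shell sanity check shows why that prefix yields exactly the `.224`–`.255` range (the `192.168.1.x` values are just the example network from above):

```shell
# Sanity-check the /27 math: how many addresses does it reserve, and which?
prefix=27
count=$((1 << (32 - prefix)))     # 2^(32-27) = 32 host addresses
first=224                         # base of the range chosen above
last=$((first + count - 1))       # 224 + 31 = 255
echo "A /${prefix} reserves ${count} addresses: 192.168.1.${first}-192.168.1.${last}"
```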
Docker Compose Example
```yaml
services:
  adguard:
    image: adguard/adguardhome:latest
    container_name: adguard
    volumes:
      - ./work:/opt/adguardhome/work
      - ./conf:/opt/adguardhome/conf
    networks:
      homenet:
        ipv4_address: 192.168.1.225
    restart: unless-stopped

networks:
  homenet:
    driver: macvlan
    driver_opts:
      parent: eth0
    ipam:
      config:
        - subnet: 192.168.1.0/24
          gateway: 192.168.1.1
          ip_range: 192.168.1.224/27
```
AdGuard Home is now accessible at 192.168.1.225 — its own address, no port mapping needed, all ports available.
The Host-to-Container Problem
Here’s the gotcha that catches everyone: the host cannot communicate with macvlan containers directly. This is a Linux kernel limitation — traffic between a macvlan interface and its parent interface is blocked.
The fix is a macvlan shim on the host:
```shell
# Create a macvlan interface on the host
sudo ip link add macvlan-shim link eth0 type macvlan mode bridge
# Give it an IP in the container range
sudo ip addr add 192.168.1.224/32 dev macvlan-shim
sudo ip link set macvlan-shim up
# Route container IPs through the shim
sudo ip route add 192.168.1.225/32 dev macvlan-shim
```
To make this persist across reboots, add it to a systemd service or network config.
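One way to persist it is a oneshot systemd unit. This is a sketch: the unit name is made up, and `eth0`, the `.224` shim address, and the `.225` route are the example values from above — substitute your own:

```ini
# /etc/systemd/system/macvlan-shim.service (hypothetical unit name)
[Unit]
Description=Macvlan shim so the host can reach macvlan containers
After=network-online.target
Wants=network-online.target

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/sbin/ip link add macvlan-shim link eth0 type macvlan mode bridge
ExecStart=/usr/sbin/ip addr add 192.168.1.224/32 dev macvlan-shim
ExecStart=/usr/sbin/ip link set macvlan-shim up
ExecStart=/usr/sbin/ip route add 192.168.1.225/32 dev macvlan-shim
ExecStop=/usr/sbin/ip link del macvlan-shim

[Install]
WantedBy=multi-user.target
```

Enable it with `sudo systemctl enable --now macvlan-shim`.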
When to Use Macvlan
- DNS servers that need a dedicated IP (devices point to `192.168.1.225`)
- Services that need all ports without conflicts
- Home automation (Home Assistant) that needs mDNS/SSDP on the LAN
- NAS-like containers that should appear as standalone devices
- When you’ve exhausted port mappings on the host IP
When to Skip Macvlan
- Simple web services — bridge + reverse proxy is simpler
- When you’re not on Linux — macvlan barely works on macOS/Windows
- Dynamic environments — IP management gets tedious
- When the host needs to talk to containers frequently — the shim is annoying
Comparison Table
| Feature | Bridge | Host | Macvlan |
|---|---|---|---|
| Container gets own IP on LAN | ❌ | ❌ (uses host IP) | ✅ |
| Port mapping | ✅ | ❌ (not needed) | ❌ (not needed) |
| Container-to-container DNS | ✅ (custom) | ❌ | ❌ |
| Network isolation | ✅ | ❌ | ✅ |
| Performance | Good | Best | Good |
| Complexity | Low | Low | Medium |
| Host-to-container comms | ✅ | ✅ | ⚠️ (needs shim) |
| Works on macOS/Windows | ✅ | ❌ | ❌ |
Practical Patterns for Self-Hosters
Pattern 1: Reverse Proxy + Internal Services
The most common setup. Everything behind Nginx Proxy Manager or Traefik:
```yaml
services:
  proxy:
    image: jc21/nginx-proxy-manager:latest
    ports:
      - "80:80"
      - "443:443"
      - "81:81"
    networks:
      - proxy

  jellyfin:
    image: jellyfin/jellyfin:latest
    networks:
      - proxy
    # No ports exposed — proxy handles it

  nextcloud:
    image: nextcloud:latest
    networks:
      - proxy
      - nextcloud-db

  nextcloud-db:
    image: mariadb:11
    networks:
      - nextcloud-db

networks:
  proxy:
  nextcloud-db:
```
Pattern 2: DNS Server with Macvlan
Dedicated IP for your network’s DNS:
```yaml
services:
  pihole:
    image: pihole/pihole:latest
    networks:
      lan:
        ipv4_address: 192.168.1.2
    environment:
      TZ: America/New_York
      WEBPASSWORD: changeme

networks:
  lan:
    driver: macvlan
    driver_opts:
      parent: eth0
    ipam:
      config:
        - subnet: 192.168.1.0/24
          gateway: 192.168.1.1
          ip_range: 192.168.1.2/32
```
Pattern 3: Mixed Networking
Some services need host networking while others use bridge:
```yaml
services:
  # Host networking for network monitoring
  netdata:
    image: netdata/netdata:latest
    network_mode: host
    cap_add:
      - SYS_PTRACE
    volumes:
      - /proc:/host/proc:ro
      - /sys:/host/sys:ro

  # Bridge for web apps
  homepage:
    image: ghcr.io/gethomepage/homepage:latest
    ports:
      - "3000:3000"
    networks:
      - apps

networks:
  apps:
```
Troubleshooting
Container can’t reach the internet
```shell
# Check if the container has a default route
docker exec <container> ip route
# Check DNS resolution
docker exec <container> nslookup google.com
# Verify Docker's iptables rules
sudo iptables -t nat -L POSTROUTING -n -v
```
Common fix: restart Docker (sudo systemctl restart docker) — iptables rules sometimes get flushed by firewall managers like UFW.
Containers can’t talk to each other
```shell
# Verify they're on the same network
docker network inspect <network-name>
# Test connectivity
docker exec container1 ping container2
```
If using the default bridge, switch to a custom one. Default bridge doesn’t support DNS resolution between containers.
Port already in use
```shell
# Find what's using the port
sudo ss -tlnp | grep :80
```
Either stop the conflicting service, change the host port mapping, or switch to host/macvlan networking.
Macvlan container unreachable from host
You need the macvlan shim (see above). This is not a bug — it’s how macvlan works at the kernel level.
UFW blocking Docker ports
Docker manipulates iptables directly, bypassing UFW. If you use UFW, install ufw-docker or disable Docker’s iptables management in `/etc/docker/daemon.json`:

```json
{
  "iptables": false
}
```
Warning: Disabling Docker’s iptables means you must manually manage all container networking rules.
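If you go that route, you must recreate at least the outbound NAT rule yourself. A minimal sketch, assuming the default `docker0` bridge and its stock `172.17.0.0/16` subnet (verify yours with `docker network inspect bridge`):

```shell
# Masquerade container traffic leaving via any interface other than docker0
sudo iptables -t nat -A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
# Allow forwarded traffic to and from the bridge
sudo iptables -A FORWARD -i docker0 -j ACCEPT
sudo iptables -A FORWARD -o docker0 -j ACCEPT
```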
Quick Decision Guide
- Is it a standard web app behind a reverse proxy? → Bridge
- Does it need to see all host network interfaces? → Host
- Does it need its own LAN IP? → Macvlan
- Not sure? → Bridge (you can always change later)
Bridge handles 90% of self-hosting scenarios. Start there and only reach for host or macvlan when bridge doesn’t fit.
Wrapping Up
Docker networking doesn’t have to be mystifying. Bridge networks with a reverse proxy cover most self-hosted services. Host networking is there when you need raw access to the host’s network stack. And macvlan is the power tool for services that need to be first-class citizens on your LAN.
The key insight: you can mix all three in the same Docker host. Your web apps run on bridge networks behind Traefik, your DNS server gets a macvlan IP, and your monitoring agent uses host networking — all on the same machine.
Start simple, and only add complexity when the simpler approach doesn’t work.