Self-Hosting Teable: Airtable Alternative with PostgreSQL Backend
Airtable changed how teams think about databases. A spreadsheet interface on top of a relational database — simple enough for anyone, powerful enough for real workflows. But Airtable’s pricing climbs fast, row limits sting, and your data lives on someone else’s servers.
Teable is the open-source answer. It gives you the same spreadsheet-like UI with grid, kanban, form, gallery, and calendar views — all backed by PostgreSQL. Your data stays in a real database you control, with no row limits beyond your hardware. It handles millions of rows without breaking a sweat.
This guide walks you through deploying Teable on your own server with Docker Compose.
What Makes Teable Different
Teable isn’t just another no-code database clone. A few things stand out:
- Real PostgreSQL backend — your data lives in standard Postgres, queryable with any SQL tool
- Millions of rows — no artificial limits, performance scales with your hardware
- Real-time collaboration — multiple users editing simultaneously, like Google Sheets
- Multiple views — grid, kanban, form, gallery, and calendar from the same data
- Formula support — spreadsheet-style formulas that feel familiar
- API access — RESTful API for every table, auto-generated
- Plugins and automations — extend functionality without touching code
It’s built with Next.js on the frontend and NestJS on the backend, using Prisma for database management.
Prerequisites
- A Linux server (Ubuntu 20.04+ recommended) with at least 4GB RAM and 2 CPU cores
- Docker and Docker Compose installed
- A domain name (optional, for HTTPS access)
- Basic terminal familiarity
Docker Compose Setup
Create a directory for your Teable deployment:
```bash
mkdir teable && cd teable
```
The Compose File
Create docker-compose.yaml:
```yaml
services:
  teable:
    image: ghcr.io/teableio/teable:latest
    restart: always
    ports:
      - '3000:3000'
    volumes:
      - teable-data:/app/.assets:rw
    env_file:
      - .env
    environment:
      - NEXT_ENV_IMAGES_ALL_REMOTE=true
    networks:
      - teable
    depends_on:
      teable-db:
        condition: service_healthy
      teable-cache:
        condition: service_healthy
    healthcheck:
      test: ['CMD', 'curl', '-f', 'http://localhost:3000/health']
      start_period: 5s
      interval: 5s
      timeout: 3s
      retries: 3

  teable-db:
    image: postgres:15.4
    restart: always
    ports:
      - '42345:5432'
    volumes:
      - teable-db:/var/lib/postgresql/data:rw
    environment:
      - POSTGRES_DB=${POSTGRES_DB}
      - POSTGRES_USER=${POSTGRES_USER}
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
    networks:
      - teable
    healthcheck:
      test: ['CMD-SHELL', "pg_isready -U ${POSTGRES_USER} -d ${POSTGRES_DB}"]
      interval: 10s
      timeout: 3s
      retries: 3

  teable-cache:
    image: redis:7.2.4
    restart: always
    expose:
      - '6379'
    volumes:
      - teable-cache:/data:rw
    networks:
      - teable
    command: redis-server --appendonly yes --requirepass ${REDIS_PASSWORD}
    healthcheck:
      test: ['CMD', 'redis-cli', '--raw', 'incr', 'ping']
      interval: 10s
      timeout: 3s
      retries: 3

networks:
  teable:
    name: teable-network

volumes:
  teable-db: {}
  teable-data: {}
  teable-cache: {}
```
This gives you three containers: the Teable application, PostgreSQL for data storage, and Redis for caching.
Environment Variables
Create a .env file:
```bash
# Security — replace these with strong random passwords
POSTGRES_PASSWORD=your_strong_postgres_password
REDIS_PASSWORD=your_strong_redis_password
SECRET_KEY=your_random_secret_key_at_least_32_chars

# Public URL — set this to your domain or server IP
PUBLIC_ORIGIN=http://127.0.0.1:3000

# Postgres
POSTGRES_HOST=teable-db
POSTGRES_PORT=5432
POSTGRES_DB=teable
POSTGRES_USER=teable

# Redis
REDIS_HOST=teable-cache
REDIS_PORT=6379
REDIS_DB=0

# App
PRISMA_DATABASE_URL=postgresql://${POSTGRES_USER}:${POSTGRES_PASSWORD}@${POSTGRES_HOST}:${POSTGRES_PORT}/${POSTGRES_DB}
BACKEND_CACHE_PROVIDER=redis
BACKEND_CACHE_REDIS_URI=redis://default:${REDIS_PASSWORD}@${REDIS_HOST}:${REDIS_PORT}/${REDIS_DB}
```
Generate strong passwords with:
```bash
openssl rand -hex 24
```
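To fill in all three secrets in one go, a small loop can print ready-to-paste `.env` lines (a sketch, assuming `openssl` is installed; the variable names match the `.env` above):

```shell
# Print .env-ready lines, each with a freshly generated secret
for var in POSTGRES_PASSWORD REDIS_PASSWORD SECRET_KEY; do
  echo "$var=$(openssl rand -hex 24)"
done
```

`openssl rand -hex 24` produces 48 hex characters, comfortably over the 32-character minimum for SECRET_KEY.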
Start the Stack
```bash
docker compose up -d
```
The first start takes a minute or two while Teable runs database migrations. Watch the logs:
```bash
docker compose logs -f teable
```
Once you see the health check passing, open http://your-server-ip:3000 in your browser. You’ll be prompted to create your first admin account.
Adding S3-Compatible Storage (Optional)
By default, Teable stores file attachments on the local filesystem. For production use, S3-compatible object storage (MinIO, AWS S3, Cloudflare R2) gives you better durability and easier backups.
Add a MinIO service to your docker-compose.yaml:
```yaml
  # Add these under the existing services: key
  teable-storage:
    image: minio/minio:RELEASE.2024-02-17T01-15-57Z
    restart: always
    ports:
      - '9000:9000'
      - '9001:9001'
    environment:
      # Recent MinIO releases use ROOT_USER/ROOT_PASSWORD instead of
      # the deprecated MINIO_ACCESS_KEY/MINIO_SECRET_KEY variables
      - MINIO_ROOT_USER=${MINIO_ACCESS_KEY}
      - MINIO_ROOT_PASSWORD=${MINIO_SECRET_KEY}
    volumes:
      - teable-storage:/data:rw
    networks:
      - teable
    command: server /data --console-address ":9001"

  createbuckets:
    image: minio/mc
    networks:
      - teable
    entrypoint: >
      /bin/sh -c "
      /usr/bin/mc alias set teable-storage http://teable-storage:9000 ${MINIO_ACCESS_KEY} ${MINIO_SECRET_KEY};
      /usr/bin/mc mb teable-storage/public;
      /usr/bin/mc anonymous set public teable-storage/public;
      /usr/bin/mc mb teable-storage/private;
      exit 0;
      "
    depends_on:
      teable-storage:
        condition: service_started
```
Add to your .env:
```bash
# MinIO
MINIO_ACCESS_KEY=teable_minio_access
MINIO_SECRET_KEY=your_minio_secret_key

# Storage config for Teable
BACKEND_STORAGE_PROVIDER=minio
BACKEND_STORAGE_PUBLIC_BUCKET=public
BACKEND_STORAGE_PRIVATE_BUCKET=private
BACKEND_STORAGE_MINIO_ENDPOINT=teable-storage
BACKEND_STORAGE_MINIO_PORT=9000
BACKEND_STORAGE_MINIO_ACCESS_KEY=${MINIO_ACCESS_KEY}
BACKEND_STORAGE_MINIO_SECRET_KEY=${MINIO_SECRET_KEY}
```
Add `teable-storage: {}` to the `volumes` section, then recreate:
```bash
docker compose up -d
```
The MinIO console is available at port 9001 for storage management.
Reverse Proxy with HTTPS
For production access, put Teable behind a reverse proxy. Here are configurations for the two most popular options.
Caddy (Automatic HTTPS)
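A minimal Caddyfile for this setup might look like the following (a sketch, assuming Teable is listening on localhost:3000 and DNS for `teable.yourdomain.com` already points at this server):

```
teable.yourdomain.com {
    reverse_proxy 127.0.0.1:3000
}
```

Caddy's `reverse_proxy` passes WebSocket upgrades through by default, so no extra headers are needed for real-time collaboration.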
Caddy obtains and renews TLS certificates automatically, so there is no manual certificate management.
Nginx
```nginx
server {
    listen 443 ssl http2;
    server_name teable.yourdomain.com;

    ssl_certificate /etc/letsencrypt/live/teable.yourdomain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/teable.yourdomain.com/privkey.pem;

    client_max_body_size 100M;

    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        # WebSocket upgrades require HTTP/1.1 to the upstream
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
```
The WebSocket headers (Upgrade and Connection) are important — Teable uses them for real-time collaboration.
After setting up your reverse proxy, update PUBLIC_ORIGIN in .env:
```bash
PUBLIC_ORIGIN=https://teable.yourdomain.com
```
Then restart:
```bash
docker compose restart teable
```
Working with Your Data
Creating Your First Base
After logging in, click “Create Base” to start a new database. Teable uses familiar concepts:
- Bases are databases (like an Airtable base)
- Tables live inside bases (like worksheets)
- Fields are columns with types: text, number, date, select, attachment, link, formula, and more
- Views are different ways to see the same table data
Importing Data
Teable supports CSV import directly in the UI. Click the dropdown on any table tab, select “Import,” and upload your file. Column types are auto-detected but can be adjusted after import.
Using the API
Every table automatically gets a REST API. Navigate to your profile settings to generate an API token, then:
```bash
# List records from a table
curl -H "Authorization: Bearer YOUR_API_TOKEN" \
  https://teable.yourdomain.com/api/table/TABLE_ID/record

# Create a record
curl -X POST \
  -H "Authorization: Bearer YOUR_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"records": [{"fields": {"Name": "New Entry", "Status": "Active"}}]}' \
  https://teable.yourdomain.com/api/table/TABLE_ID/record
```
This makes Teable a great backend for simple apps — build your data model in the UI, then consume it via API.
Backup and Restore
Since Teable uses standard PostgreSQL, backups are straightforward:
```bash
#!/bin/bash
# backup-teable.sh
BACKUP_DIR="/backups/teable"
TIMESTAMP=$(date +%Y%m%d_%H%M%S)

# docker compose must run from the project directory (important for cron)
cd /path/to/teable || exit 1

mkdir -p "$BACKUP_DIR"

# Database dump
docker compose exec -T teable-db \
  pg_dump -U teable -d teable \
  | gzip > "$BACKUP_DIR/teable-db-$TIMESTAMP.sql.gz"

# Asset files (skip if using S3 storage)
docker compose cp teable:/app/.assets "$BACKUP_DIR/assets-$TIMESTAMP"

# Retention: keep last 30 days
find "$BACKUP_DIR" -name "*.sql.gz" -mtime +30 -delete

echo "Backup completed: $TIMESTAMP"
```
To restore:
```bash
# Stop the app first
docker compose stop teable

# Restore database
gunzip -c backup-file.sql.gz | docker compose exec -T teable-db \
  psql -U teable -d teable

# Start the app
docker compose start teable
```
Automate with a cron job:
```bash
0 3 * * * /path/to/backup-teable.sh >> /var/log/teable-backup.log 2>&1
```
Updating Teable
```bash
cd /path/to/teable
docker compose pull
docker compose up -d
```
Teable handles database migrations automatically on startup. Check the logs after an update to confirm everything applied cleanly:
```bash
docker compose logs teable | tail -50
```
Troubleshooting
Teable won’t start / restart loop
Check the logs for database connection issues:
```bash
docker compose logs teable | grep -i error
```
Common causes: wrong POSTGRES_PASSWORD in .env, database not ready yet (increase start_period in healthcheck), or port 42345 already in use.
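To rule out the port conflict quickly, you can check whether anything on the host is already bound to 42345 before starting the stack (a sketch using `ss`, which ships with most modern distributions):

```shell
# Report whether host port 42345 (mapped to Postgres) is already bound
if ss -ltn 2>/dev/null | awk '{print $4}' | grep -q ':42345$'; then
  echo "port 42345 is already in use"
else
  echo "port 42345 is free"
fi
```

If the port is taken, change the host side of the `42345:5432` mapping in `docker-compose.yaml` to any free port.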
Real-time collaboration not working
WebSocket connections need proper proxy headers. If you’re behind Nginx or another reverse proxy, ensure the Upgrade and Connection headers are forwarded (see the Nginx config above).
Slow with large datasets
Teable handles millions of rows, but performance depends on your PostgreSQL tuning. For datasets over 500K rows, consider adjusting shared_buffers and work_mem in PostgreSQL:
```yaml
  teable-db:
    image: postgres:15.4
    command: >
      postgres
      -c shared_buffers=1GB
      -c work_mem=64MB
      -c effective_cache_size=3GB
```
File uploads failing
If using the default local storage, check that the teable-data volume has adequate space. If using MinIO, verify the buckets were created and access keys match between the .env file and MinIO configuration.
Can’t access externally
Make sure PUBLIC_ORIGIN in your .env matches the URL you’re accessing Teable from. This affects CORS, WebSocket connections, and generated links. After changing it, restart the teable container.
Teable vs Alternatives
| Feature | Teable | NocoDB | Baserow | Airtable |
|---|---|---|---|---|
| Database backend | PostgreSQL | MySQL/PostgreSQL/SQLite | PostgreSQL | Proprietary |
| Row limit | Unlimited | Unlimited | Unlimited | 125K (Pro) |
| Real-time collab | ✅ | ✅ | ✅ | ✅ |
| Formula support | ✅ | ✅ | ✅ | ✅ |
| Kanban view | ✅ | ✅ | ✅ | ✅ |
| Calendar view | ✅ | ✅ | ❌ | ✅ |
| Gallery view | ✅ | ✅ | ✅ | ✅ |
| API access | ✅ | ✅ | ✅ | ✅ |
| Self-hosted | ✅ | ✅ | ✅ | ❌ |
| License | AGPL-3.0 | AGPL-3.0 | MIT | Proprietary |
| Performance at scale | Excellent | Good | Good | Good |
Teable’s key advantage is raw performance with large datasets and its tight PostgreSQL integration. If you already run Postgres infrastructure, Teable fits right in.
Wrapping Up
Teable gives you a polished Airtable experience on your own infrastructure. The PostgreSQL backend means your data is stored in a battle-tested database you can query, back up, and migrate with standard tools. The Docker deployment is clean — three containers, sensible defaults, and it just works.
For teams that need spreadsheet-style collaboration without SaaS pricing or data residency concerns, Teable is one of the strongest options in the self-hosted no-code space.
Useful links: