Docker Production Setup for SaaS

A practical playbook for setting up Docker in production for a small SaaS.

Intro

Use Docker in production to make deployment reproducible, isolate services, and reduce server drift. For a small SaaS, the practical baseline is:

  • one app container
  • one reverse proxy container
  • runtime environment variables
  • persistent storage only where needed
  • health checks
  • a safe restart and update workflow

For most MVPs, this is enough on a single VPS. Keep the database outside the app container when possible.

Quick Fix / Quick Setup

Use this as a baseline compose.yaml for a single VPS deployment:

yaml
services:
  app:
    build: .
    restart: unless-stopped
    env_file:
      - .env
    command: gunicorn app.main:app -k uvicorn.workers.UvicornWorker --bind 0.0.0.0:8000 --workers 3
    expose:
      - "8000"
    volumes:
      - app_data:/app/data
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8000/health"]
      interval: 30s
      timeout: 5s
      retries: 3
      start_period: 20s

  nginx:
    image: nginx:stable-alpine
    restart: unless-stopped
    ports:
      - "80:80"
    volumes:
      - ./deploy/nginx.conf:/etc/nginx/conf.d/default.conf:ro
    depends_on:
      - app

volumes:
  app_data:

This is a good baseline for a single VPS. Put the database on a managed service or a separate host when possible, and add HTTPS, backups, and monitoring before launch.
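As a sanity check on the healthcheck values above: the worst-case time before Docker flags a failing container is roughly the start period plus retries × interval. A quick calculation with the values from the compose file:

```python
# Rough worst-case time before Docker marks the container unhealthy,
# using the healthcheck values from the compose file above.
start_period = 20   # failures during this window are ignored
interval = 30       # seconds between probes
retries = 3         # consecutive failures required

worst_case = start_period + retries * interval
print(worst_case)   # 110 seconds
```

If you need faster failure detection, tighten `interval` and `retries` rather than `start_period`, which exists to let slow-starting apps warm up.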

Minimal Nginx config

Create deploy/nginx.conf:

nginx
server {
    listen 80;
    server_name _;

    client_max_body_size 20m;

    location / {
        proxy_pass http://app:8000;
        proxy_http_version 1.1;

        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        proxy_read_timeout 60s;
        proxy_connect_timeout 5s;
        proxy_send_timeout 60s;
    }
}

Minimal Dockerfile baseline

dockerfile
FROM python:3.12-slim

ENV PYTHONDONTWRITEBYTECODE=1
ENV PYTHONUNBUFFERED=1

WORKDIR /app

RUN apt-get update && apt-get install -y --no-install-recommends \
    curl build-essential \
    && rm -rf /var/lib/apt/lists/*

COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

RUN useradd -m appuser && chown -R appuser:appuser /app
USER appuser

CMD ["gunicorn", "app.main:app", "-k", "uvicorn.workers.UvicornWorker", "--bind", "0.0.0.0:8000", "--workers", "3"]
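The CMD above pins `--workers 3`, which is a reasonable default for a small VPS. A common Gunicorn sizing heuristic is (2 × CPU cores) + 1; a sketch of computing it at startup (the fallback value is an assumption, not from the files above):

```python
import os

def default_workers() -> int:
    """Common Gunicorn sizing heuristic: (2 x CPU cores) + 1.
    Falls back to 3 when the core count cannot be determined."""
    cores = os.cpu_count()
    if cores is None:
        return 3
    return 2 * cores + 1

print(default_workers())
```

This could feed a `workers` setting in a `gunicorn.conf.py` instead of a hard-coded flag, so the same image sizes itself to the host it runs on.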

.dockerignore

gitignore
.git
.gitignore
.env
.venv
venv
__pycache__
*.pyc
node_modules
dist
build
tests
.pytest_cache
.mypy_cache
.coverage

Deploy commands

bash
docker compose build
docker compose up -d
docker compose ps
docker compose logs -f app
curl -I http://localhost

Process Flow

client → Nginx → app container → external database/storage

What’s happening

Docker packages your app and runtime into a predictable image so production matches what you tested.

In a small SaaS production setup:

  • the app runs inside a container with a fixed startup command
  • Nginx is the public entrypoint on ports 80 and later 443
  • the app container stays internal on the Docker network
  • secrets are injected at runtime using .env or a secret manager
  • only truly persistent app data is mounted to a volume
  • health checks and restart policies reduce manual recovery work
  • updates happen through repeatable image builds and container restarts

This avoids common VPS problems:

  • undocumented package installs on the host
  • inconsistent Python or Node versions
  • services started manually in shells
  • deployment changes that cannot be reproduced


Step-by-step implementation

1. Keep the server role simple

For an MVP or small SaaS, use one VPS with:

  • Docker Engine
  • Docker Compose plugin
  • your app container
  • Nginx container
  • external database
  • external object storage if needed

Avoid putting every dependency into one container.

2. Install Docker on the VPS

Example on Ubuntu:

bash
sudo apt-get update
sudo apt-get install -y ca-certificates curl gnupg

sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
sudo chmod a+r /etc/apt/keyrings/docker.gpg

echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] \
  https://download.docker.com/linux/ubuntu $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

sudo apt-get update
sudo apt-get install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin

Verify:

bash
docker --version
docker compose version

If the server itself is not ready yet, see Environment Setup on VPS.

3. Create the app image

Use a production Dockerfile:

  • small base image
  • deterministic dependency install
  • non-root user where possible
  • explicit startup command
  • no dev server

The Dockerfile baseline shown earlier follows these rules. Important points:

  • bind to 0.0.0.0, not 127.0.0.1
  • use Gunicorn or equivalent production server
  • keep the working directory fixed
  • avoid installing unnecessary OS packages

4. Define services in Compose

Use compose.yaml to define the app and reverse proxy.

Key rules:

  • use ports only for Nginx
  • use expose for the app
  • add restart: unless-stopped
  • mount only required persistent paths
  • load environment variables from .env

Example project layout:

text
/project
  /deploy
    nginx.conf
  /app
  Dockerfile
  compose.yaml
  .dockerignore
  .env
  requirements.txt

5. Store secrets outside the image

Put production secrets in a server-side .env file:

env
APP_ENV=production
SECRET_KEY=replace-me
DATABASE_URL=postgresql://user:pass@db-host:5432/appdb
REDIS_URL=redis://redis-host:6379/0
STRIPE_SECRET_KEY=sk_live_xxx

Rules:

  • do not commit production .env
  • do not COPY .env into the image
  • rotate credentials if they were ever committed
  • protect file permissions

bash
chmod 600 .env
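A missing variable in .env usually surfaces later as a confusing runtime error. A minimal fail-fast sketch, assuming the variable names from the example .env above; call it early in app startup, before the server binds:

```python
import os

# Settings the app refuses to start without (names from the .env above).
REQUIRED_VARS = ["SECRET_KEY", "DATABASE_URL"]

def validate_env() -> None:
    """Raise at startup if required settings are missing, so a bad
    deploy fails immediately instead of mid-request."""
    missing = [name for name in REQUIRED_VARS if not os.environ.get(name)]
    if missing:
        raise RuntimeError("Missing required env vars: " + ", ".join(missing))
```

Because the container has `restart: unless-stopped`, a startup crash from a missing secret shows up clearly in `docker compose ps` as a restart loop rather than as intermittent 500s.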

6. Add a health endpoint

Your app should expose a simple endpoint like /health or /ready.

FastAPI example:

python
from fastapi import FastAPI

app = FastAPI()

@app.get("/health")
def health():
    return {"status": "ok"}

Flask example:

python
from flask import Flask, jsonify

app = Flask(__name__)

@app.get("/health")
def health():
    return jsonify({"status": "ok"}), 200

If startup depends on migrations or external services, use a readiness endpoint that reflects that state.
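One framework-agnostic way to build that readiness endpoint is to aggregate per-dependency checks and report ready only when all pass. A sketch; the two check functions are placeholders you would replace with real probes (an Alembic revision comparison, a SELECT 1 against the database, and so on):

```python
from typing import Callable, Dict

# Each check returns True when the dependency is usable.
# Replace these placeholders with real probes for your stack.
def migrations_applied() -> bool:
    return True  # e.g. compare the current alembic revision to head

def database_reachable() -> bool:
    return True  # e.g. run SELECT 1 against DATABASE_URL

CHECKS: Dict[str, Callable[[], bool]] = {
    "migrations": migrations_applied,
    "database": database_reachable,
}

def readiness() -> dict:
    """Run every check; report 'ready' only if all dependencies pass.
    Exceptions in a probe count as failures, not crashes."""
    results = {}
    for name, check in CHECKS.items():
        try:
            results[name] = check()
        except Exception:
            results[name] = False
    status = "ready" if all(results.values()) else "degraded"
    return {"status": status, "checks": results}
```

The FastAPI or Flask route then just returns `readiness()` with a 200 or 503 depending on the status, which also gives Docker's healthcheck something meaningful to probe.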

7. Put Nginx in front

Use Nginx as the public entrypoint. It should:

  • receive inbound traffic
  • proxy to app:8000
  • forward client headers
  • optionally serve static files
  • later terminate HTTPS

Do not set Nginx upstream to localhost inside Compose. Use the Docker service name:

nginx
proxy_pass http://app:8000;

8. Mount persistent storage only where needed

Use Docker volumes only for state that must survive container replacement:

  • uploaded user files
  • generated exports
  • local SQLite only if intentionally used
  • temporary data only if persistence is required

Example:

yaml
volumes:
  - app_data:/app/data

Do not mount the whole project directory in production unless there is a specific reason.
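Since permission mismatches between the container user and a mounted volume are a frequent failure, a startup guard that proves the data path is actually writable can save debugging time. A sketch, assuming the /app/data mount from the compose file above:

```python
import tempfile
from pathlib import Path

def check_data_dir(path: str = "/app/data") -> None:
    """Fail fast if the persistent volume is missing or not writable
    by the current (non-root) container user."""
    p = Path(path)
    p.mkdir(parents=True, exist_ok=True)
    # Probe with a real write; os.access can mislead on some mounts.
    with tempfile.NamedTemporaryFile(dir=p):
        pass
```

Run this alongside the env validation at startup; a `PermissionError` here points directly at a volume ownership problem rather than surfacing later as a failed upload.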

9. Use tagged releases

Do not rely on latest.

Build and tag explicitly:

bash
docker build -t your-registry/your-app:2026-04-20-1 .
docker push your-registry/your-app:2026-04-20-1

Then deploy by tag:

yaml
services:
  app:
    image: your-registry/your-app:2026-04-20-1

This makes rollback predictable.
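The date-plus-counter tags shown here can also be generated rather than typed, which avoids accidentally reusing a tag. A sketch of that scheme (date plus short git SHA is another common choice):

```python
from datetime import date

def release_tag(day: date, counter: int) -> str:
    """Build a tag like 2026-04-20-1: date-sortable, human-readable,
    and unique per build when the counter increments within a day."""
    return f"{day.isoformat()}-{counter}"

print(release_tag(date(2026, 4, 20), 1))  # 2026-04-20-1
```

Whatever scheme you pick, record the deployed tag somewhere (a release log, a git tag) so rollback is a lookup, not an archaeology exercise.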

10. Deploy safely

Two common deployment modes:

Rebuild on server

bash
git pull
docker compose up -d --build
docker compose ps
docker compose logs --tail=100 app

Pull prebuilt image

bash
docker compose pull
docker compose up -d
docker compose ps
docker compose logs --tail=100 app

After deploy, verify:

bash
curl -I http://localhost
docker compose exec app curl -f http://localhost:8000/health
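The verification above can be scripted into a post-deploy gate that polls the health endpoint until it passes or a deadline expires. A sketch with the probe injected as a callable, so the same helper works for an HTTP check or anything else (the timeout values are assumptions):

```python
import time
from typing import Callable

def wait_healthy(probe: Callable[[], bool],
                 timeout: float = 60.0,
                 interval: float = 2.0) -> bool:
    """Poll `probe` until it returns True or `timeout` elapses.
    Returns False instead of raising, so the caller decides policy."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            if probe():
                return True
        except Exception:
            pass  # treat probe errors as "not healthy yet"
        time.sleep(interval)
    return False
```

A real probe might wrap `urllib.request.urlopen` against http://localhost:8000/health and check for a 200; a deploy script can then exit non-zero when `wait_healthy` returns False.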

11. Run migrations explicitly

Do not assume migrations happen automatically unless you intentionally wired that in.

Examples:

bash
docker compose exec app alembic upgrade head

or:

bash
docker compose exec app flask db upgrade

Prefer one of these workflows:

  • run migrations before traffic shift if backward-compatible
  • start app, run migration job, then validate health
  • block deploy if migrations fail

Avoid destructive schema changes without rollback planning.
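The "block deploy if migrations fail" rule can be enforced with a small runner that executes each step in order and stops at the first non-zero exit. A sketch; the docker compose commands in the comment are illustrative, not prescriptive:

```python
import subprocess
import sys

def run_steps(steps: list[list[str]]) -> bool:
    """Run commands in order; stop at the first non-zero exit so a
    failed migration blocks the restart instead of being ignored."""
    for cmd in steps:
        result = subprocess.run(cmd)
        if result.returncode != 0:
            print(f"deploy aborted at: {' '.join(cmd)}", file=sys.stderr)
            return False
    return True

# Example step list (illustrative -- adjust to your deploy flow):
# run_steps([
#     ["docker", "compose", "pull"],
#     ["docker", "compose", "run", "--rm", "app", "alembic", "upgrade", "head"],
#     ["docker", "compose", "up", "-d"],
# ])
```

Running the migration via `docker compose run --rm` before `up -d` means a failing migration leaves the old containers serving traffic untouched.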

12. Validate after each release

Check:

  • app container healthy
  • Nginx serving traffic
  • auth flow works
  • billing flow works
  • background jobs run
  • logs show no startup exceptions

Suggested validation commands:

bash
docker compose ps
docker compose logs --tail=200 app
docker compose logs --tail=200 nginx
curl -I http://localhost
curl -H 'Host: yourdomain.com' http://127.0.0.1

13. Add HTTPS and monitoring

Before real production traffic, add:

  • TLS termination
  • uptime checks
  • alerting
  • log retention
  • backups

Process Flow

build → migrate → start → health check → validate → monitor

Common causes

Typical production Docker failures for small SaaS apps:

  • using a development server inside the container instead of Gunicorn/Uvicorn workers
  • binding the app to 127.0.0.1 instead of 0.0.0.0
  • publishing the app port publicly and bypassing the reverse proxy
  • missing secrets or wrong .env file path
  • database connectivity failures due to host, port, firewall, SSL, or credentials
  • Nginx upstream pointing to localhost instead of the Compose service name
  • no persistent volume for uploaded files
  • permission mismatch between container user and mounted volume owner
  • health check path missing or returning non-200
  • large image builds due to missing .dockerignore
  • migrations not run during deployment
  • wrong module path or working directory in the startup command

Debugging tips

Start with container state and logs:

bash
docker compose ps
docker compose logs -f app
docker compose logs -f nginx

Inspect runtime environment:

bash
docker compose exec app env | sort
docker compose exec app sh

Check app health from inside the container:

bash
docker compose exec app curl -I http://localhost:8000/health

Validate Nginx config:

bash
docker compose exec nginx nginx -t

Inspect the running container definition:

bash
docker inspect $(docker compose ps -q app)

Check host resource pressure:

bash
docker stats
df -h
free -m

Check listening ports:

bash
ss -tulpn

Test HTTP locally on the server:

bash
curl -I http://localhost
curl -H 'Host: yourdomain.com' http://127.0.0.1

What to verify when debugging:

  • app command matches the correct module path
  • app binds to 0.0.0.0:8000
  • Nginx points to app:8000
  • .env values are loaded as expected
  • volume mounts exist and have correct permissions
  • the health endpoint returns 200
  • migrations are applied
  • disk is not full
  • memory pressure is not causing restarts

If deployment symptoms continue after a release, validate against your launch requirements in SaaS Production Checklist.


Checklist

Use this before sending production traffic.

  • Docker and Docker Compose are installed on the VPS
  • Image builds from a clean checkout
  • .dockerignore excludes secrets and unnecessary files
  • app container uses a production server, not a dev server
  • app binds to 0.0.0.0
  • only Nginx publishes public ports
  • Nginx proxies to the app service name, not localhost
  • environment variables load from server-side .env or secret manager
  • production secrets are not committed to git
  • health endpoint exists and returns success after startup
  • required persistent volume paths are defined
  • database is reachable from inside the container
  • migrations run as part of deployment
  • logs are accessible via docker compose logs
  • image tags are explicit for rollback
  • rollback path is documented
  • backups exist for database and file volumes
  • HTTPS is configured before launch
  • monitoring and alerts are enabled

FAQ

Should I use Docker Compose in production?

Yes. For a small SaaS on one VPS, Docker Compose is a practical choice. Keep the service list minimal and document deployments clearly.

Where should I store environment variables?

Store them on the server in a protected .env file or use a secret manager. Do not commit production secrets to git or bake them into the image.

How many containers do I need?

At minimum:

  • app
  • reverse proxy

Add worker and scheduler containers only if your app uses background jobs.

What should be outside Docker?

Prefer these outside the app container:

  • managed databases
  • object storage
  • email providers
  • third-party queues if needed

Keep the app runtime in Docker and push stateful infrastructure out when possible.

Should I run the database in Docker too?

For small internal setups you can, but production is usually safer with a managed database or a separately managed host.

Do I need Kubernetes?

No. For most MVPs and small SaaS products, Docker Compose on a VPS is enough.

Should I expose the app port publicly?

No. Expose only Nginx publicly and keep the app on the internal Docker network.

How do I deploy without rebuilding everything?

Build a tagged image once, push it to a registry, and pull that tag on the server:

bash
docker build -t your-registry/your-app:2026-04-20-1 .
docker push your-registry/your-app:2026-04-20-1
docker compose pull
docker compose up -d

Can I use Docker for background workers too?

Yes. Define separate worker and scheduler services using the same image with different commands.

Example:

yaml
services:
  app:
    image: your-registry/your-app:2026-04-20-1
    command: gunicorn app.main:app -k uvicorn.workers.UvicornWorker --bind 0.0.0.0:8000

  worker:
    image: your-registry/your-app:2026-04-20-1
    command: celery -A app.worker worker --loglevel=info

  scheduler:
    image: your-registry/your-app:2026-04-20-1
    command: celery -A app.worker beat --loglevel=info

How do I roll back a bad deploy?

Redeploy the previous image tag and rerun only backward-compatible steps.

Example:

bash
docker compose pull
docker compose up -d

If needed, update the app image tag in compose.yaml back to the previous known-good release, then redeploy. Avoid irreversible migrations without a rollback plan.


Final takeaway

A production Docker setup is not just a Dockerfile. It includes:

  • process management
  • reverse proxying
  • runtime env handling
  • persistent storage strategy
  • health checks
  • logging
  • explicit migrations
  • repeatable releases
  • rollback steps

For most indie SaaS deployments, keep the first version simple:

  • one VPS
  • one app container
  • one Nginx container
  • external database
  • explicit image tags
  • clear validation and rollback workflow

Before launch, verify the full release against SaaS Production Checklist.