CI/CD Pipeline for SaaS Deployment
The essential playbook for implementing a CI/CD pipeline for SaaS deployment.
This page outlines a practical CI/CD pipeline for indie SaaS deployments. The goal is simple: every push should run tests, build artifacts, apply safe deployment steps, and fail fast before production breaks. Use this to automate deployments on a VPS, Docker host, or small cloud setup without adding enterprise-only complexity.
Quick Fix / Quick Setup
Use this as the smallest useful GitHub Actions pipeline for a VPS-based SaaS deploy:
```yaml
name: Deploy SaaS

on:
  push:
    branches: [main]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: '3.11'
      - name: Install deps
        run: |
          python -m venv venv
          . venv/bin/activate
          pip install -U pip
          pip install -r requirements.txt
      - name: Run tests
        run: |
          . venv/bin/activate
          pytest -q

  deploy:
    needs: test
    runs-on: ubuntu-latest
    if: github.ref == 'refs/heads/main'
    steps:
      - uses: actions/checkout@v4
      - name: Setup SSH
        run: |
          mkdir -p ~/.ssh
          echo "${{ secrets.SSH_PRIVATE_KEY }}" > ~/.ssh/id_ed25519
          chmod 600 ~/.ssh/id_ed25519
          ssh-keyscan -H ${{ secrets.SERVER_HOST }} >> ~/.ssh/known_hosts
      - name: Deploy over SSH
        run: |
          ssh ${{ secrets.SERVER_USER }}@${{ secrets.SERVER_HOST }} '
            set -e
            cd /var/www/app
            git fetch origin main
            git reset --hard origin/main
            /var/www/app/venv/bin/pip install -r requirements.txt
            /var/www/app/venv/bin/python manage.py migrate
            sudo systemctl restart gunicorn
            sudo systemctl reload nginx
          '
```

This is enough to automate test-first deploys. Before using it in production with real traffic, add:
- health checks
- deploy metadata logging
- rollback steps
- branch/tag restrictions
- staging validation
- worker restart steps if you run background jobs
Minimum required CI secrets:
`SSH_PRIVATE_KEY`, `SERVER_HOST`, `SERVER_USER`
Recommended additional secrets/variables:
`APP_PATH`, `HEALTHCHECK_URL`, `DEPLOY_ENV`
A safer deploy command should include verification:
```shell
ssh "$SERVER_USER@$SERVER_HOST" '
  set -euo pipefail
  cd /var/www/app
  git fetch origin main
  PREV_SHA=$(git rev-parse HEAD)  # keep the previous SHA handy for rollback
  git reset --hard origin/main
  /var/www/app/venv/bin/pip install -r requirements.txt
  /var/www/app/venv/bin/python manage.py migrate
  sudo systemctl restart gunicorn
  sudo systemctl reload nginx
  curl -fsS http://127.0.0.1:8000/health
'
```

Process Flow
What’s happening
CI/CD means code changes move through repeatable steps: validate, build, deploy, verify.
For a small SaaS app, the practical minimum is:
- Push code to a controlled branch
- Run tests
- Deploy to the target server
- Restart app services
- Verify a health endpoint
- Roll back if verification fails
This removes manual deploy drift. It also makes releases deterministic. If the same branch and same pipeline always produce the same deployment steps, production is easier to reason about.
For indie deployments, the best pipeline is usually not the most advanced one. It is the one that:
- fails fast
- uses explicit commands
- stores secrets in CI, not in repo files
- does not rely on manual server edits
- has a rollback path ready
Keep the pipeline boring. Avoid hidden shell aliases, local-only env files, and one-off manual fixes on the server.
Deployment Pipeline
Step-by-step implementation
1. Define environments
At minimum, separate staging and production. Even if staging is smaller, use it to catch config and migration issues.
Example branch rules:
- `develop` -> auto deploy to staging
- `main` -> deploy to production
- `v*` tag -> production release with approval
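As a sketch, the staging rule could live in its own workflow file (the filename and job contents here are illustrative):

```yaml
# .github/workflows/staging.yml (sketch; reuse the test/deploy jobs from the quick setup)
name: Deploy staging
on:
  push:
    branches: [develop]
jobs:
  deploy-staging:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # ... same test and SSH deploy steps, pointed at the staging host
```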
2. Add a health endpoint
Your app should return 200 OK only when the app is actually ready.
Example Flask endpoint:
```python
from flask import Flask, jsonify
import psycopg2
import os

app = Flask(__name__)

@app.get("/health")
def health():
    try:
        conn = psycopg2.connect(os.environ["DATABASE_URL"])
        conn.close()
        return jsonify({"status": "ok"}), 200
    except Exception as e:
        return jsonify({"status": "error", "detail": str(e)}), 500
```

Do not make /health depend on external services unless they are truly required for production readiness.
3. Add test steps
Your validate stage should be fast enough to run on every push.
Example:
```shell
python --version
pip --version
pytest -q
```

Optional but useful:

```shell
ruff check .
mypy .
pytest tests/smoke -q
```

4. Store secrets in CI
Do not commit SSH keys or production env files.
Use CI secret storage for:
- SSH private keys
- server host and username
- registry credentials
- API tokens
- environment-specific values
GitHub Actions examples:
`SSH_PRIVATE_KEY`, `SERVER_HOST`, `SERVER_USER`, `APP_PATH`, `HEALTHCHECK_URL`
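A deploy step can consume these secrets as environment variables; in this sketch the script path is an assumption:

```yaml
- name: Deploy
  env:
    SERVER_HOST: ${{ secrets.SERVER_HOST }}
    SERVER_USER: ${{ secrets.SERVER_USER }}
    APP_PATH: ${{ secrets.APP_PATH }}
  run: ./scripts/deploy.sh  # hypothetical deploy script that reads the env vars above
```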
5. Choose a deployment method
For small SaaS apps, use one of these:
Option A: SSH-based deploy
Best for a single VPS or simple app server.
Deploy flow:
- CI runner connects over SSH
- fetches latest code
- installs dependencies
- runs migrations
- restarts services
- verifies health
Option B: Image-based deploy
Best if you already use Docker.
Deploy flow:
- CI builds image
- pushes image to registry
- server pulls immutable tag
- restarts compose stack
- verifies health
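The Option B flow might look like the following sketch; the registry URL, image name, server path, and the assumption that your compose file references an `IMAGE_TAG` variable are all illustrative:

```yaml
# sketch of an image-based deploy; registry, image name, and paths are assumptions
name: Build and deploy image
on:
  push:
    branches: [main]
jobs:
  build-deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build and push immutable tag
        run: |
          echo "${{ secrets.REGISTRY_PASSWORD }}" | docker login registry.example.com -u "${{ secrets.REGISTRY_USER }}" --password-stdin
          docker build -t "registry.example.com/app:${GITHUB_SHA}" .
          docker push "registry.example.com/app:${GITHUB_SHA}"
      - name: Pull and restart on server
        run: |
          ssh "${{ secrets.SERVER_USER }}@${{ secrets.SERVER_HOST }}" "
            cd /srv/app &&
            IMAGE_TAG=${GITHUB_SHA} docker compose pull &&
            IMAGE_TAG=${GITHUB_SHA} docker compose up -d &&
            curl -fsS http://127.0.0.1:8000/health
          "
```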
If you are already on Docker, see Docker Production Setup for SaaS.
6. Make deploys idempotent
A rerun should not leave the server in a strange state.
Safe deploy shell example:
```shell
set -euo pipefail
cd /var/www/app
git fetch origin main
git reset --hard origin/main
/var/www/app/venv/bin/pip install -r requirements.txt
/var/www/app/venv/bin/python manage.py migrate
sudo systemctl restart gunicorn
sudo systemctl reload nginx
curl -fsS http://127.0.0.1:8000/health
```

7. Handle migrations carefully
For small apps, automatic migrations are usually acceptable. For risky schema changes:
- deploy additive schema changes first
- deploy compatible app code second
- remove old schema later
Avoid deploys where old code and new schema are incompatible at any intermediate step.
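A sketch of this additive-first sequence, assuming a hypothetical `users.email_verified` column and Postgres syntax:

```sql
-- Deploy 1: additive change only; the currently running code ignores the new column.
ALTER TABLE users ADD COLUMN email_verified BOOLEAN NOT NULL DEFAULT FALSE;

-- Deploy 2: ship app code that reads and writes email_verified.

-- Deploy 3 (later, once nothing reads the old schema): remove it.
-- ALTER TABLE users DROP COLUMN legacy_flag;
```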
8. Add a post-deploy health check
Do not mark deploy success immediately after systemctl restart.
Use a local and public check if possible:
```shell
curl -fsS http://127.0.0.1:8000/health
curl -fsS https://yourdomain.com/health
```

This helps separate app-level failures from proxy or DNS issues.
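Services often need a moment after restart before they answer, so a single immediate curl can report a false failure. A small retry wrapper helps; this is a sketch, and the function name and timings are illustrative:

```shell
# health_wait: retry a health command until it succeeds or attempts run out
health_wait() {
  local cmd="$1" attempts="${2:-5}" delay="${3:-2}" i
  for ((i = 1; i <= attempts; i++)); do
    if eval "$cmd"; then
      echo "healthy after $i attempt(s)"
      return 0
    fi
    sleep "$delay"
  done
  echo "health check failed after $attempts attempts" >&2
  return 1
}

# usage in a deploy script:
# health_wait 'curl -fsS http://127.0.0.1:8000/health' 10 3
```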
9. Record release metadata
Store:
- commit SHA
- deployment timestamp
- branch/tag
- deploy actor
Example:
```shell
git rev-parse HEAD > /var/www/app/REVISION
date -u +"%Y-%m-%dT%H:%M:%SZ" > /var/www/app/DEPLOYED_AT
```

10. Restrict production deployment triggers
Only deploy production from approved branches or release tags.
GitHub Actions branch check:
```yaml
if: github.ref == 'refs/heads/main'
```

Tag-only release example:

```yaml
on:
  push:
    tags:
      - 'v*'
```

11. Add manual approval if needed
If tests are still maturing, require a production approval gate in your CI provider before the deploy job runs.
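In GitHub Actions, one way to do this (sketched below) is a deployment environment with required reviewers, configured under the repository's Settings → Environments:

```yaml
jobs:
  deploy:
    needs: test
    runs-on: ubuntu-latest
    environment: production  # job pauses here until a configured reviewer approves
    steps:
      - uses: actions/checkout@v4
      # ... deploy steps as above
```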
12. Keep rollback ready
Rollback should be a known command, not a future idea.
Examples:
- previous release directory symlink
- previous Docker image tag
- previous git SHA
Basic git rollback example:
```shell
cd /var/www/app
git reset --hard <previous_sha>
/var/www/app/venv/bin/pip install -r requirements.txt
sudo systemctl restart gunicorn
curl -fsS http://127.0.0.1:8000/health
```

If downtime matters, also review Zero Downtime Deployment.
Common causes
Most CI/CD failures come from configuration drift, permissions, or assumptions about production state.
Typical causes:
- Missing or invalid SSH private key in CI secrets
- Production server user cannot run `systemctl`, `docker`, or file write commands
- `main` branch deploy trigger is misconfigured or runs on unintended branches
- App requires environment variables that are not present on the server
- Database migration errors caused by incompatible schema changes
- Service restart succeeds but app process crashes immediately after boot
- Nginx reload works but upstream app socket or port is wrong
- Health check endpoint is missing, cached, or blocked by auth or middleware
- CI runner uses different runtime versions than production
- Worker processes are not redeployed, causing code/version mismatch
Additional common issues:
- disk full during pip install, image pull, or build
- stale `known_hosts` entry after server replacement
- old Python virtualenv incompatible with new lockfile
- migrations applied twice across parallel jobs
- app restarted but worker/scheduler left on old code
If you deploy to a Gunicorn/Nginx stack, also review Deploy SaaS with Nginx + Gunicorn when available in your docs set.
Debugging tips
Start by isolating the failing stage:
- validate
- build
- deploy
- restart
- health check
- rollback
Do not debug everything at once.
Useful local and remote commands:
```shell
git rev-parse HEAD
python --version && pip --version
pytest -q
ssh user@server 'whoami && hostname && pwd'
ssh user@server 'cd /var/www/app && git status && git rev-parse HEAD'
ssh user@server 'sudo systemctl status gunicorn --no-pager -l'
ssh user@server 'sudo journalctl -u gunicorn -n 200 --no-pager'
ssh user@server 'sudo nginx -t && sudo systemctl status nginx --no-pager -l'
ssh user@server 'curl -I http://127.0.0.1:8000/health || true'
ssh user@server 'curl -I https://yourdomain.com/health || true'
ssh user@server 'df -h && free -m'
ssh user@server 'printenv | sort'
docker ps
docker compose ps
docker compose logs --tail=200 web
docker images --digests | head
```

Practical debugging sequence:
If tests fail in CI
```shell
python --version
pip install -r requirements.txt
pytest -q
```

Check version mismatches between CI and production.
If SSH setup fails
Check:
```shell
ssh -v user@server
```

Common issues:
- malformed private key secret
- wrong server hostname
- user not allowed to log in
- host key mismatch
If deploy commands fail on server
Run them manually once over SSH:
```shell
ssh user@server '
  set -euxo pipefail
  cd /var/www/app
  git fetch origin main
  git reset --hard origin/main
  /var/www/app/venv/bin/pip install -r requirements.txt
'
```

If services restart but app is still down
Check process and logs:
```shell
sudo systemctl status gunicorn --no-pager -l
sudo journalctl -u gunicorn -n 200 --no-pager
sudo nginx -t
sudo systemctl status nginx --no-pager -l
```

If local health passes but public health fails
Compare:
```shell
curl -I http://127.0.0.1:8000/health
curl -I https://yourdomain.com/health
```

That usually indicates:
- Nginx routing issue
- TLS issue
- wrong upstream socket
- host header mismatch
- firewall issue
For broader post-deploy diagnostics, see Debugging Production Issues.
Figure: troubleshooting decision tree grouped by pipeline stage.
Checklist
Use this before enabling automatic production deploys:
- ✓ CI runs on every push and pull request
- ✓ Production deploy only triggers from approved branch or tag
- ✓ Secrets are stored in CI secret manager
- ✓ Tests run before deploy
- ✓ Health endpoint exists and is checked after deploy
- ✓ Migrations are part of the deploy plan
- ✓ Rollback path is documented and tested
- ✓ App, worker, and scheduler services are all updated
- ✓ Deploy logs include commit SHA and timestamp
- ✓ Staging and production configs are separated
- ✓ CI runtime version matches production closely
- ✓ `set -euo pipefail` or equivalent shell safety is used
- ✓ No interactive deploy steps exist
- ✓ Server user has only the required deploy permissions
- ✓ Disk and memory are sufficient for builds and restarts
For pre-launch validation, use Deployment Checklist and SaaS Production Checklist.
Related guides
- Docker Production Setup for SaaS
- Zero Downtime Deployment
- Debugging Production Issues
- Deployment Checklist
- SaaS Production Checklist
FAQ
What is the simplest CI/CD setup for a small SaaS?
A GitHub Actions workflow that runs tests on push to main, then deploys to a VPS over SSH, restarts services, and checks a health endpoint.
Should CI/CD handle database migrations automatically?
Yes for most small SaaS apps, but design migrations to be backward-compatible and test them on staging first when possible.
How do I avoid breaking production during deploys?
Run tests before deploy, use health checks after deploy, keep rollback commands ready, and restrict production deploys to controlled branches or tags.
Do I need separate pipelines for web and workers?
Not always separate pipelines, but you should have separate deploy steps so web, workers, and schedulers are all updated consistently.
Example restart block:
```shell
sudo systemctl restart gunicorn
sudo systemctl restart celery
sudo systemctl restart celerybeat
```

Or with Docker Compose:

```shell
docker compose pull
docker compose up -d web worker scheduler
docker compose ps
```

When should I add zero-downtime deployment?
Add it once downtime from restarts becomes visible to users or when you need safer deploys during active traffic periods. See Zero Downtime Deployment.
Do I need Docker for CI/CD?
No. SSH deploys are enough for many small SaaS apps. Docker is useful when you want immutable images and cleaner runtime consistency.
Can I skip staging?
You can, but production risk increases. Even a minimal staging environment catches env, migration, and proxy config issues early.
Final takeaway
A useful CI/CD pipeline for a small SaaS does not need to be complex. It needs to be consistent:
- test
- deploy
- verify
- roll back if needed
Start with SSH deploys if that fits your stack. Then add health checks, staging, release metadata, and rollback before adding more complexity. Reliable CI/CD is mostly about removing manual variation and making failure states obvious.