Logging Setup (Application + Server)
The essential playbook for implementing logging setup (application + server) in your SaaS.
Reliable production logging for a small SaaS does not require a full observability stack. You need one consistent application log stream, web server access/error logs, and retention that does not break under traffic. The goal is fast tracing of requests, exceptions, auth issues, and infrastructure failures.
Quick Fix / Quick Setup
For most MVPs, start with application logs to stdout/stderr, enable Gunicorn access and error logs, keep Nginx access and error logs on disk, and let systemd or Docker collect process output.
# Python app logging example (works for Flask/FastAPI)
import logging
import sys
logging.basicConfig(
    level=logging.INFO,
    format='%(asctime)s %(levelname)s %(name)s %(message)s',
    handlers=[logging.StreamHandler(sys.stdout)]
)
logger = logging.getLogger("app")
logger.info("application started")
# Gunicorn example
# gunicorn app:app \
# --workers 3 \
# --bind 127.0.0.1:8000 \
# --access-logfile - \
# --error-logfile - \
# --log-level info
# Nginx access/error logs
# access_log /var/log/nginx/access.log;
# error_log /var/log/nginx/error.log warn;

If you already have a server running, verify these immediately:
journalctl -u gunicorn -n 200 --no-pager
journalctl -u nginx -n 200 --no-pager
tail -n 200 /var/log/nginx/access.log
tail -n 200 /var/log/nginx/error.log
nginx -t

What’s happening
Production logging usually breaks for one of these reasons:
- Application logs are only configured for development.
- Gunicorn request logs are disabled.
- Nginx logs exist, but upstream app/process logs are missing.
- Logs go to multiple untracked files.
- Rotation or retention is not configured, so logs vanish or disks fill.
- There is no request correlation between Nginx and the app.
A good baseline setup is:
- app logs to stdout
- Gunicorn logs to stdout/stderr
- systemd or Docker captures process logs
- Nginx keeps access and error logs
- request IDs are added across layers
- retention and disk limits are configured
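As a quick sanity check of this baseline, a short script can report which expected log files actually exist and have content. This is an illustrative sketch, not a standard tool; the helper name and path list are assumptions based on the defaults used in this guide.

```python
import os

# Paths are the defaults used in this guide; adjust for your host.
EXPECTED_LOG_FILES = [
    "/var/log/nginx/access.log",
    "/var/log/nginx/error.log",
]

def audit_log_files(paths):
    """Map each path to whether it exists and currently has content."""
    report = {}
    for path in paths:
        exists = os.path.isfile(path)
        report[path] = {
            "exists": exists,
            "non_empty": exists and os.path.getsize(path) > 0,
        }
    return report

# Example: audit_log_files(EXPECTED_LOG_FILES)
```

Journald-captured streams will not show up here; check those with `journalctl --disk-usage` as described in step 6.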
Process Flow
Step-by-step implementation
1) Choose one primary collection path
For app processes, prefer stdout/stderr under systemd or Docker.
Use this model:
- App logs:
stdout - Gunicorn access log:
stdout - Gunicorn error log:
stderr - Nginx access/error logs: files in
/var/log/nginx/
Do not mix random file-based app logs with journald unless you have a clear reason.
2) Configure Python application logging
Use a consistent format. JSON is useful, but plain text is acceptable if it includes enough context.
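If you do opt for JSON, the standard library is enough; no extra dependency is required. A minimal formatter sketch follows (the field names are illustrative, not a standard):

```python
import json
import logging
import sys

class JsonFormatter(logging.Formatter):
    """Render each record as one JSON object per line (field names are illustrative)."""

    def format(self, record):
        payload = {
            "ts": self.formatTime(record, "%Y-%m-%dT%H:%M:%S%z"),
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        }
        if record.exc_info:
            payload["exc"] = self.formatException(record.exc_info)
        return json.dumps(payload)

handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(JsonFormatter())
logging.basicConfig(level=logging.INFO, handlers=[handler])
```

One-object-per-line output greps cleanly and is easy for aggregators to parse later.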
Minimal example:
import logging
import sys
logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(name)s %(message)s",
    handlers=[logging.StreamHandler(sys.stdout)],
)
logger = logging.getLogger("app")

Better example with request context support:
import logging
import sys
class RequestContextFilter(logging.Filter):
    def filter(self, record):
        if not hasattr(record, "request_id"):
            record.request_id = "-"
        if not hasattr(record, "user_id"):
            record.user_id = "-"
        return True
handler = logging.StreamHandler(sys.stdout)
handler.addFilter(RequestContextFilter())
formatter = logging.Formatter(
    "%(asctime)s %(levelname)s %(name)s request_id=%(request_id)s user_id=%(user_id)s %(message)s"
)
handler.setFormatter(formatter)
logger = logging.getLogger()
logger.setLevel(logging.INFO)
logger.handlers = [handler]

Use levels correctly:
- DEBUG: local debugging only
- INFO: normal startup, request lifecycle summaries, job states
- WARNING: recoverable issues, retries, suspicious auth events
- ERROR: failed operations logged without a traceback
- logger.exception(): failed operations logged with a traceback (emits at ERROR level)
- CRITICAL: process-level failure
Always use traceback logging for exceptions:
try:
    do_work()
except Exception:
    logging.getLogger("app").exception("background job failed")

3) Add request correlation
You need one request identifier visible in both Nginx and app logs.
Nginx config
Set or forward X-Request-ID:
proxy_set_header X-Request-ID $request_id;

Use a custom log format that includes the request ID:
log_format main_ext '$remote_addr - $remote_user [$time_local] '
'"$request" $status $body_bytes_sent '
'"$http_referer" "$http_user_agent" '
'request_id=$request_id '
'upstream_response_time=$upstream_response_time '
'request_time=$request_time';
access_log /var/log/nginx/access.log main_ext;
error_log /var/log/nginx/error.log warn;

App usage
Read X-Request-ID in middleware and attach it to log records.
FastAPI example:
from fastapi import FastAPI, Request
import logging
app = FastAPI()
logger = logging.getLogger("app")
@app.middleware("http")
async def log_context_middleware(request: Request, call_next):
    request_id = request.headers.get("X-Request-ID", "-")
    response = await call_next(request)
    logger.info(
        "request handled",
        extra={
            "request_id": request_id,
            "user_id": "-",
        },
    )
    return response

(Sequence diagram: the request_id flows from Nginx into the app log lines.)
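If the proxy header is missing (local development, direct hits), the middleware can generate an ID itself and stash it in a contextvar so every log call during that request sees it. A sketch building on the filter from step 2; the uuid fallback and helper names are assumptions, not Nginx behavior:

```python
import contextvars
import logging
import uuid

# Holds the current request's ID for the duration of that request.
request_id_var = contextvars.ContextVar("request_id", default="-")

class ContextVarFilter(logging.Filter):
    """Copy the contextvar onto each record so the formatter can print it."""

    def filter(self, record):
        record.request_id = request_id_var.get()
        return True

def begin_request(header_value=None):
    """Use the proxy-supplied ID if present, otherwise generate one."""
    rid = header_value or uuid.uuid4().hex
    request_id_var.set(rid)
    return rid
```

In the middleware above, call begin_request(request.headers.get("X-Request-ID")) before call_next and attach ContextVarFilter to the stdout handler; contextvars are safe across async tasks, unlike module-level globals.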
4) Enable Gunicorn access and error logs
If Gunicorn is managed by systemd, send logs to stdout/stderr.
Example command:
gunicorn app:app \
--workers 3 \
--bind 127.0.0.1:8000 \
--access-logfile - \
--error-logfile - \
--log-level info

Example systemd service:
[Unit]
Description=Gunicorn app
After=network.target
[Service]
User=www-data
Group=www-data
WorkingDirectory=/srv/app
Environment="PATH=/srv/app/venv/bin"
ExecStart=/srv/app/venv/bin/gunicorn app:app \
--workers 3 \
--bind 127.0.0.1:8000 \
--access-logfile - \
--error-logfile - \
--log-level info
Restart=always
RestartSec=5
[Install]
WantedBy=multi-user.target

Then reload and verify:
sudo systemctl daemon-reload
sudo systemctl restart gunicorn
sudo systemctl status gunicorn
journalctl -u gunicorn -n 100 --no-pager

5) Keep Nginx access and error logs enabled
Do not disable Nginx logs in production unless another system fully replaces them.
Example server block:
server {
    listen 80;
    server_name yourdomain.com;
    access_log /var/log/nginx/access.log main_ext;
    error_log /var/log/nginx/error.log warn;
    location / {
        proxy_pass http://127.0.0.1:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Request-ID $request_id;
    }
}

Validate and reload:
sudo nginx -t
sudo systemctl reload nginx

6) Configure log rotation and retention
If Nginx writes to files, configure logrotate.
Example:
/var/log/nginx/*.log {
    daily
    rotate 14
    compress
    delaycompress
    missingok
    notifempty
    create 0640 www-data adm
    sharedscripts
    postrotate
        [ -s /run/nginx.pid ] && kill -USR1 `cat /run/nginx.pid`
    endscript
}

Check current rotation:
logrotate -d /etc/logrotate.d/nginx
ls -lah /var/log/nginx/
du -sh /var/log/nginx/*

For journald, verify disk use:
journalctl --disk-usage

Set limits in /etc/systemd/journald.conf if needed:
SystemMaxUse=500M
RuntimeMaxUse=200M
MaxRetentionSec=7day

Then restart journald:
sudo systemctl restart systemd-journald

7) Filter or separate noisy traffic
Health checks, bots, and static assets can overwhelm useful logs.
Options:
- disable access logging for /health
- split static asset logs into a separate file
- sample very high-volume internal endpoints
- reduce bot noise at the edge
Example:
location = /health {
    access_log off;
    return 200 'ok';
    add_header Content-Type text/plain;
}

8) Avoid logging sensitive data
Do not log:
- passwords
- tokens
- raw Authorization headers
- session cookies
- full payment payloads
- full webhook bodies unless required
- unnecessary personal data
If your app currently dumps request bodies for debugging, remove or redact that behavior before production.
Basic redaction example:
SENSITIVE_KEYS = {"password", "token", "authorization", "secret"}
def redact_payload(payload: dict) -> dict:
    cleaned = {}
    for k, v in payload.items():
        cleaned[k] = "[REDACTED]" if k.lower() in SENSITIVE_KEYS else v
    return cleaned

9) Test the full logging path
Trigger expected events and verify where each appears.
Test cases:
- app startup log
- 404 request
- forced 500
- auth failure
- upstream failure / stopped app process
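After triggering each case, a small checker can confirm the expected markers actually landed in the right files. A sketch; the helper name, paths, and markers are illustrative:

```python
def log_contains(path, marker, last_n=200):
    """Return True if `marker` appears in the last `last_n` lines of `path`."""
    try:
        with open(path, "r", errors="replace") as f:
            lines = f.readlines()[-last_n:]
    except FileNotFoundError:
        return False
    return any(marker in line for line in lines)

# Illustrative checks after triggering the test cases above:
# assert log_contains("/var/log/nginx/access.log", " 404 ")
# assert log_contains("/var/log/nginx/access.log", " 500 ")
```

The commented assertions show the intended use; a FileNotFoundError here is itself a finding (wrong path or logging disabled).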
Useful commands:
curl -I https://yourdomain.com
grep ' 500 ' /var/log/nginx/access.log | tail -n 50
grep 'upstream' /var/log/nginx/error.log | tail -n 50
ps aux | grep gunicorn
docker logs <container_name> --tail 200

You should be able to correlate one failing request across:
- Nginx access log
- Nginx error log
- Gunicorn log
- app log
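Mechanically, correlation is just joining those streams on the shared ID. A toy sketch of the idea; the stream names and line formats are illustrative:

```python
def correlate(streams, request_id):
    """Collect every line mentioning `request_id` from named log streams.

    `streams` maps a layer name (e.g. "nginx-access") to a list of log lines.
    """
    hits = {}
    for name, lines in streams.items():
        matched = [line for line in lines if request_id in line]
        if matched:
            hits[name] = matched
    return hits

# Example input: the same request_id appearing in two layers.
streams = {
    "nginx-access": ["GET /api 500 request_id=abc123"],
    "app": ["ERROR request_id=abc123 payment failed"],
}
# correlate(streams, "abc123") returns the matching line(s) from each layer.
```

In practice `grep request_id=abc123` across the files does the same join; the point is that a consistent ID makes it a mechanical step.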
10) Optional next step: add hosted error tracking
Logs help with request and server tracing. Error tracking helps with grouped exceptions, stack traces, and alerting.
For most small SaaS deployments, a strong baseline is:
- logs for request and infrastructure visibility
- Sentry for exceptions
Related guide: Error Tracking with Sentry
Common causes
These are the most common reasons a production logging setup fails:
- Application logs are only configured for development and do not emit in production.
- Gunicorn access logs are disabled, so request traces are missing.
- Nginx error logs are present but app/Gunicorn logs are not collected.
- Logs are written to files with incorrect permissions or nonexistent directories.
- No log rotation is configured, causing disks to fill.
- Structured fields like request ID or user ID are missing, making cross-layer tracing difficult.
- Sensitive data is logged accidentally through raw request dumping.
- Container or systemd retention limits are too low, so logs disappear before investigation.
Debugging tips
When logs are not helping, check each layer in order.
Check process manager logs
journalctl -u gunicorn -n 200 --no-pager
journalctl -u nginx -n 200 --no-pager

Check Nginx logs directly
tail -n 200 /var/log/nginx/access.log
tail -n 200 /var/log/nginx/error.log
grep ' 500 ' /var/log/nginx/access.log | tail -n 50
grep 'upstream' /var/log/nginx/error.log | tail -n 50

Check service health
ps aux | grep gunicorn
nginx -t
curl -I https://yourdomain.com

Check file presence and growth
ls -lah /var/log/nginx/
du -sh /var/log/nginx/*

What to look for
- 502 in the Nginx access log usually means an upstream app failure or timeout.
- connect() failed or upstream prematurely closed connection in the Nginx error log points to Gunicorn/app issues.
- No Gunicorn output in journalctl often means the service command or output routing is wrong.
- Empty log files with active traffic often mean a wrong path, bad permissions, or logging disabled.
- Missing stack traces usually mean the code uses logger.error() instead of logger.exception().
If timestamps are inconsistent, standardize on UTC across app, host, and database.
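In Python logging, switching timestamps to UTC is a one-line change using the documented `logging.Formatter.converter` hook:

```python
import logging
import time

# Make every %(asctime)s render in UTC instead of local time.
logging.Formatter.converter = time.gmtime

formatter = logging.Formatter("%(asctime)sZ %(levelname)s %(name)s %(message)s")
```

Setting the class attribute affects every Formatter in the process; assign `converter` on individual formatter instances instead if you need mixed behavior.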
Related guide: Debugging Production Issues
Checklist
- ✓ Application logs enabled in production
- ✓ Gunicorn access logs enabled
- ✓ Gunicorn error logs enabled
- ✓ Nginx access logs enabled
- ✓ Nginx error logs enabled
- ✓ Consistent timestamp format configured
- ✓ UTC used across services where possible
- ✓ Request ID present across Nginx and app logs
- ✓ Secrets and PII redaction reviewed
- ✓ Log rotation configured for file-based logs
- ✓ Journald or Docker retention limits reviewed
- ✓ Tested 404, 500, auth failure, and upstream error paths
- ✓ Disk usage monitored for log growth
- ✓ Error tracking connected for exceptions
- ✓ Incident response path documented
For a broader pre-launch review, use the SaaS Production Checklist.
FAQ
What should I log in a small SaaS app?
Log request metadata, exceptions, auth events, background job lifecycle events, webhook handling, and infrastructure errors. Do not log secrets or unnecessary personal data.
Is JSON logging required?
No. JSON helps searching and aggregation, but consistent plain text with timestamps, levels, and request IDs is enough for many MVPs.
Why am I seeing 502 errors with no app traceback?
Usually one of these is true:
- Gunicorn or the app process failed before logging
- logs are routed to the wrong destination
- process manager output is not being collected
- permissions or retention are misconfigured
Check Nginx error logs and journalctl together.
Should Nginx access logs include every request?
Usually yes at first. If traffic grows, filter or separate health checks, static assets, and bot traffic so useful request logs remain visible.
Final takeaway
A solid small-SaaS logging setup is simple:
- structured or consistent app logs to stdout
- Gunicorn access and error logs enabled
- Nginx access and error logs enabled
- request IDs across layers
- retention and rotation configured
- exceptions sent to error tracking
That is enough to debug most production incidents quickly without building a full logging platform on day one.