502 Bad Gateway Fix Guide
An essential playbook for diagnosing and fixing 502 Bad Gateway errors in your SaaS.
Use this page when Nginx or another reverse proxy returns 502 Bad Gateway and your app is unreachable. In most small SaaS deployments, a 502 means the proxy cannot talk to the upstream app process at all, or it gets an invalid response. The fastest path is to verify the app process, confirm the upstream socket or port, and inspect proxy and app logs together.
Quick Fix / Quick Setup
# 1) Check Nginx config and upstream target
sudo nginx -t
sudo grep -R "proxy_pass\|upstream\|uwsgi_pass\|fastcgi_pass" /etc/nginx/sites-enabled /etc/nginx/conf.d
# 2) Confirm app process is running
ps aux | grep -E "gunicorn|uvicorn|python|docker"
sudo systemctl status gunicorn
sudo systemctl status nginx
# 3) If using systemd + Gunicorn, restart both
sudo systemctl restart gunicorn
sudo systemctl restart nginx
# 4) Check logs immediately after reproducing the error
sudo journalctl -u gunicorn -n 100 --no-pager
sudo journalctl -u nginx -n 100 --no-pager
sudo tail -n 100 /var/log/nginx/error.log
# 5) Verify upstream responds locally
curl -I http://127.0.0.1:8000
curl --unix-socket /run/gunicorn.sock http://localhost/
# 6) If using Docker, verify container health and port mapping
docker ps
docker logs --tail=100 <app_container>
docker inspect <app_container> | grep -A 20 -E 'Ports|IPAddress'
# 7) Reload Nginx after fixing socket/port mismatch
sudo nginx -t && sudo systemctl reload nginx
Most 502 incidents come from one of five issues: the app process crashed, a wrong upstream socket or port, a socket permission problem, an upstream timeout, or the app ran out of memory and was killed.
What’s happening
A 502 Bad Gateway means the reverse proxy received an invalid response or no usable response from the upstream application server.
Typical small SaaS request path:
Client -> Nginx -> Gunicorn/Uvicorn or container -> App -> Database
Failure points that commonly produce 502:
- Nginx points to the wrong upstream address
- upstream process is not running
- socket file does not exist
- socket permissions block Nginx
- app starts, then crashes during request handling
- upstream closes connection early
- process is killed by OOM
- Docker container is unhealthy or not exposing the expected port
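The checks for these failure points can be scripted as a quick triage pass. This is a sketch assuming systemd + Gunicorn on 127.0.0.1:8000 with a socket at /run/gunicorn.sock; the unit name, port, and socket path are assumptions, so adjust them to your setup.

```shell
# triage_502: run the common checks in order and print which ones fail.
# Unit name, port, and socket path below are assumptions, not universal.
triage_502() {
  systemctl is-active --quiet gunicorn 2>/dev/null \
    || echo "app service is not active (or systemctl unavailable)"
  ss -ltn 2>/dev/null | grep -q ':8000 ' \
    || echo "nothing listening on 127.0.0.1:8000"
  [ -S /run/gunicorn.sock ] \
    || echo "socket file missing (ignore if you proxy over TCP)"
  curl -fsS -o /dev/null --max-time 5 http://127.0.0.1:8000 2>/dev/null \
    || echo "upstream does not answer a local request"
}
# Usage: triage_502
```

Silence means the basics look healthy and the problem is likely in proxy routing, timeouts, or request handling rather than connectivity.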
[Diagram: request flow from client -> proxy -> app -> db, with failure markers at the proxy-to-app hop and at app startup.]
Step-by-step implementation
1) Validate Nginx config first
Check syntax before changing anything:
sudo nginx -t
If syntax is valid, inspect the active upstream target:
sudo grep -R "proxy_pass\|upstream\|uwsgi_pass\|fastcgi_pass" /etc/nginx/sites-enabled /etc/nginx/conf.d
Typical examples:
location / {
proxy_pass http://127.0.0.1:8000;
}
Or a Unix socket:
location / {
proxy_pass http://unix:/run/gunicorn.sock;
}
If Nginx points somewhere your app is not listening, a 502 is expected.
2) Confirm the app process is actually running
For systemd-managed apps:
sudo systemctl status gunicorn
sudo systemctl status nginx
General process checks:
ps aux | grep -E "gunicorn|uvicorn|python|docker"
ss -ltnp
sudo lsof -i -P -n | grep LISTEN
You want to confirm:
- Gunicorn/Uvicorn exists
- it is not crash-looping
- it is listening on the exact port or socket Nginx expects
Example expected output for port binding:
LISTEN 0 4096 127.0.0.1:8000 0.0.0.0:* users:(("gunicorn",pid=1234,fd=5))
3) Test the upstream directly from the server
If Nginx fails but the upstream responds locally, the issue is usually in proxy config.
For TCP:
curl -I http://127.0.0.1:8000
For a Unix socket:
curl --unix-socket /run/gunicorn.sock http://localhost/
Interpretation:
- local curl fails: app side is broken
- local curl succeeds: inspect Nginx routing, host config, headers, TLS termination, or stale upstream definition
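That decision can be captured in a tiny helper. The TCP URL in the usage line is an assumption; swap in your own port or socket.

```shell
# probe_upstream: branch the investigation based on a direct local probe.
probe_upstream() {
  if curl -fsS -o /dev/null --max-time 5 "$1" 2>/dev/null; then
    echo "upstream OK: focus on Nginx routing, headers, and config"
  else
    echo "upstream failed: focus on the app process and its logs"
  fi
}
# Usage: probe_upstream http://127.0.0.1:8000
```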
4) Check logs side by side
Reproduce once, then immediately inspect logs.
sudo journalctl -u nginx -n 200 --no-pager
sudo journalctl -u gunicorn -n 200 --no-pager
sudo tail -n 200 /var/log/nginx/error.log
Common Nginx log patterns:
connect() failed (111: Connection refused) while connecting to upstream
Usually means:
- app process is down
- wrong port
- wrong IP family
- service not listening
connect() to unix:/run/gunicorn.sock failed (2: No such file or directory)
Usually means:
- socket path mismatch
- app did not create socket
- socket deleted after reboot
- service startup failed
connect() to unix:/run/gunicorn.sock failed (13: Permission denied)
Usually means:
- Nginx cannot access socket
- wrong user/group/mode
upstream prematurely closed connection while reading response header from upstream
Usually means:
- app crashed during request
- worker killed
- timeout or invalid upstream behavior
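These patterns can be matched mechanically. A hypothetical helper that maps an error-log line to a likely cause, using the patterns listed above:

```shell
# classify_502_line: map one Nginx error-log line to a likely cause.
classify_502_line() {
  case "$1" in
    *"(111: Connection refused)"*)      echo "upstream down or wrong port" ;;
    *"(2: No such file or directory)"*) echo "socket path missing" ;;
    *"(13: Permission denied)"*)        echo "socket permissions block Nginx" ;;
    *"prematurely closed connection"*)  echo "upstream crashed or was killed mid-request" ;;
    *)                                  echo "unclassified" ;;
  esac
}
# Usage: summarize the last 200 error-log lines by likely cause
# sudo tail -n 200 /var/log/nginx/error.log | while IFS= read -r line; do
#   classify_502_line "$line"
# done | sort | uniq -c
```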
For broader production debugging workflow, see Debugging Production Issues.
5) Verify systemd service configuration
If you use Gunicorn with systemd, check the unit file:
sudo cat /etc/systemd/system/gunicorn.service
Typical example:
[Unit]
Description=gunicorn daemon
After=network.target
[Service]
User=www-data
Group=www-data
WorkingDirectory=/var/www/myapp
EnvironmentFile=/var/www/myapp/.env
ExecStart=/var/www/myapp/venv/bin/gunicorn \
--workers 3 \
--bind unix:/run/gunicorn.sock \
myapp.wsgi:application
[Install]
WantedBy=multi-user.target
Verify:
- WorkingDirectory exists
- virtualenv path is correct
- module path is correct
- EnvironmentFile exists
- bind target matches Nginx
- service user can access the app files
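Several of those checks can be automated with a naive sketch that pulls path-valued keys out of a unit file and verifies they exist. The parsing is deliberately simple (first match only, first token of the value); real unit files can use features this misses.

```shell
# check_unit_paths: for each path-carrying key, report whether it exists.
check_unit_paths() {
  for key in WorkingDirectory EnvironmentFile ExecStart; do
    path=$(grep -m1 "^$key=" "$1" | cut -d= -f2- | awk '{print $1}')
    [ -z "$path" ] && continue
    if [ -e "$path" ]; then
      echo "ok      $key=$path"
    else
      echo "MISSING $key=$path"
    fi
  done
}
# Usage: check_unit_paths /etc/systemd/system/gunicorn.service
```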
Reload systemd after changes:
sudo systemctl daemon-reload
sudo systemctl restart gunicorn
sudo systemctl status gunicorn
If your stack is based on Nginx + Gunicorn, also see Deploy SaaS with Nginx + Gunicorn.
6) Fix Unix socket problems
If you use a socket, confirm it exists:
ls -lah /run/gunicorn.sock
Check permissions on the socket and parent directory:
namei -l /run/gunicorn.sock
Typical Nginx config:
location / {
proxy_pass http://unix:/run/gunicorn.sock;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
Typical systemd service with socket bind:
ExecStart=/var/www/myapp/venv/bin/gunicorn \
--bind unix:/run/gunicorn.sock \
--workers 3 \
myapp.wsgi:application
If Nginx runs as www-data, the socket must be accessible by that user or group.
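To see which path component blocks access, you can walk the path and print each level's mode and owner. A sketch; `stat -c` is the GNU coreutils form, and Nginx needs execute permission on every directory plus read/write on the socket itself.

```shell
# check_sock_path: print mode and owner for a socket and every parent dir.
check_sock_path() {
  p="$1"
  while [ -n "$p" ] && [ "$p" != "/" ]; do
    stat -c '%A %U:%G %n' "$p" 2>/dev/null || echo "missing: $p"
    p=$(dirname "$p")
  done
}
# Usage: check_sock_path /run/gunicorn.sock
```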
Common fixes:
- align the service User and Group
- set the socket path to a shared accessible location
- fix directory permissions
- ensure service recreates socket on boot
Then reload:
sudo systemctl restart gunicorn
sudo nginx -t && sudo systemctl reload nginx
7) Fix localhost port mismatch problems
A common failure is Nginx targeting 127.0.0.1:8000 while the app listens elsewhere.
Check listeners:
ss -ltnp
Wrong combinations that produce 502:
- Nginx uses 127.0.0.1:8000, app listens on 127.0.0.1:5000
- Nginx uses IPv4, app listens only on IPv6
- Nginx uses port 8000, app moved to 8080 after deploy
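One way to catch the mismatch is to extract the port Nginx targets and compare it with actual listeners. A sketch assuming a plain TCP proxy_pass to 127.0.0.1; the config path in the usage lines is hypothetical.

```shell
# upstream_port: pull the port out of the first TCP proxy_pass in a file.
upstream_port() {
  grep -oE 'proxy_pass[[:space:]]+http://127\.0\.0\.1:[0-9]+' "$1" \
    | grep -oE '[0-9]+$' | head -n 1
}
# Usage: compare with live listeners
# port=$(upstream_port /etc/nginx/sites-enabled/myapp.conf)
# ss -ltn | grep ":$port " || echo "nothing listening on :$port"
```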
Example Gunicorn TCP bind:
ExecStart=/var/www/myapp/venv/bin/gunicorn \
--bind 127.0.0.1:8000 \
--workers 3 \
myapp.wsgi:application
After updating either side, reload services:
sudo systemctl restart gunicorn
sudo nginx -t && sudo systemctl reload nginx
8) Check application startup failures
Many 502s are app boot failures surfaced through Nginx.
Check recent app logs:
sudo journalctl -u gunicorn -n 200 --no-pager
Look for:
- missing env vars
- invalid secrets
- import errors
- package version problems
- migration failures
- database connection errors
- missing files
- bad working directory
Examples:
ModuleNotFoundError: No module named 'psycopg2'
django.core.exceptions.ImproperlyConfigured: SECRET_KEY not set
OperationalError: could not connect to server: Connection refused
If this started after deploy, compare with the last known good release or roll back. See App Crashes on Deployment.
9) Check Docker-specific failures
If Nginx proxies to a Docker container, verify the container is healthy and exposing the expected port.
docker ps
docker logs --tail=200 <app_container>
docker inspect <app_container>
Common mistakes:
- app listens on 127.0.0.1 inside the container instead of 0.0.0.0
- Nginx points to the host port, but the mapping changed
- wrong Docker Compose service name
- container restarts in a loop
- internal app port differs from exposed port
Inside container, the app should usually bind to all interfaces:
gunicorn --bind 0.0.0.0:8000 myapp.wsgi:application
Example Compose fragment:
services:
app:
build: .
ports:
- "8000:8000"
command: gunicorn --bind 0.0.0.0:8000 myapp.wsgi:application
For a full container production setup, see Docker Production Setup for SaaS.
10) Check resource exhaustion and OOM kills
If restart fixes the issue temporarily, suspect resource pressure.
Commands:
free -m
top
dmesg | grep -i -E "killed process|out of memory|oom"
Typical signal:
Out of memory: Killed process 1234 (gunicorn) total-vm:...
Common responses:
- reduce memory usage
- lower worker count if memory is constrained
- increase VPS memory
- move heavy work to background jobs
- optimize slow routes and queries
If requests are hanging long enough to tie up all workers, 502 may appear when upstreams crash or drop connections.
11) Review timeout settings carefully
Do not increase timeouts until you confirm the app is healthy.
Relevant Nginx settings:
proxy_connect_timeout 5s;
proxy_send_timeout 60s;
proxy_read_timeout 60s;
send_timeout 60s;
If uploads are involved, also verify:
client_max_body_size 25M;
Gunicorn timeout example:
gunicorn --bind 127.0.0.1:8000 --workers 3 --timeout 60 myapp.wsgi:application
Use larger timeouts only when justified by actual request behavior. Otherwise they hide slow queries, blocked workers, or poor request design.
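Before raising timeouts, measure how slow requests actually are. A sketch that summarizes request times from an access log; it assumes a log format where $request_time is the last field, which is not the default combined format and must be configured.

```shell
# reqtime_summary: summarize request times from an access log where
# $request_time is the LAST field (an assumption about your log_format).
reqtime_summary() {
  awk '{ t = $NF + 0; if (t > max) max = t; sum += t; n++ }
       END { if (n) printf "requests=%d avg=%.3fs max=%.3fs\n", n, sum/n, max }' "$1"
}
# Usage: reqtime_summary /var/log/nginx/access.log
```

If the max is nowhere near your proxy_read_timeout, a larger timeout will not fix the 502.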
12) Reload and verify recovery
After fixing the root cause:
sudo systemctl restart gunicorn
sudo nginx -t && sudo systemctl reload nginx
curl -I http://127.0.0.1:8000
curl -I https://yourdomain.com
Also test:
- homepage
- authenticated route
- upload route
- API route
- health endpoint
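The route list above can be turned into a small smoke test. The domain and paths are placeholders; `is_ok` treats any 2xx or 3xx status as healthy.

```shell
# is_ok: treat 2xx/3xx responses as healthy.
is_ok() { case "$1" in 2??|3??) return 0 ;; *) return 1 ;; esac; }

# smoke: hit each route and flag anything unhealthy.
smoke() {
  base="$1"; shift
  for path in "$@"; do
    code=$(curl -s -o /dev/null --max-time 10 -w '%{http_code}' "$base$path" 2>/dev/null)
    if is_ok "$code"; then echo "ok   $path ($code)"; else echo "FAIL $path ($code)"; fi
  done
}
# Usage: smoke https://yourdomain.com / /login /api/health
```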
Then confirm monitoring and checklist coverage in SaaS Production Checklist.
[Diagram: decision tree for "curl to upstream fails" vs. "curl to upstream works but the public route fails".]
Common causes
- Gunicorn/Uvicorn process is stopped or crash-looping
- Nginx upstream points to the wrong port or socket path
- Unix socket exists but Nginx does not have permission to access it
- application failed to start because of missing environment variables or bad secrets
- database connection failure prevents the app from booting
- recent deployment introduced import errors, dependency issues, or failed migrations
- Docker container is unhealthy or app listens on the wrong interface inside the container
- upstream timeout is too low for slow requests or cold starts
- server ran out of memory and the app worker was killed by the OOM killer
- Nginx site config was changed but not reloaded, leaving stale upstream settings
Debugging tips
Use these commands directly during incident response:
sudo nginx -t
sudo systemctl status nginx
sudo systemctl status gunicorn
sudo journalctl -u nginx -n 200 --no-pager
sudo journalctl -u gunicorn -n 200 --no-pager
sudo tail -n 200 /var/log/nginx/error.log
ps aux | grep -E "gunicorn|uvicorn|python"
ss -ltnp
sudo lsof -i -P -n | grep LISTEN
curl -I http://127.0.0.1:8000
curl --unix-socket /run/gunicorn.sock http://localhost/
free -m
top
dmesg | grep -i -E "killed process|out of memory|oom"
docker ps
docker logs --tail=200 <app_container>
docker inspect <app_container>
sudo cat /etc/systemd/system/gunicorn.service
sudo grep -R "proxy_pass\|upstream" /etc/nginx/sites-enabled /etc/nginx/conf.d
Rules that help:
- reproduce once, then inspect logs immediately
- verify the app process before assuming Nginx is broken
- compare proxy target with actual bind address
- if local upstream works, focus on proxy routing and config
- if restart helps briefly, suspect memory, worker starvation, or leak
- if issue started after deploy, inspect release-specific changes first
- use journalctl -f and tail -f during live tests
Checklist
- ✓ Nginx config validates with no syntax errors
- ✓ proxy upstream matches the real app socket or port
- ✓ app process is running and stable after restart
- ✓ direct local curl to upstream succeeds
- ✓ Nginx error log and app log show no current upstream connection errors
- ✓ socket file exists and permissions are correct if using Unix sockets
- ✓ database and required external services are reachable
- ✓ environment variables are loaded in the production process
- ✓ containers are healthy and mapped to the expected ports if using Docker
- ✓ timeouts and body size settings match real request patterns
- ✓ resource usage is within safe limits
- ✓ a health check endpoint and monitoring are in place
Use an operations dashboard or internal admin tool to expose health checks, deployment metadata, background job status, and recent error events in one place.
For small SaaS teams, this reduces time to isolate whether 502 is caused by deploys, config drift, app crashes, or dependency failures. The main benefit is faster recovery without relying on SSH-heavy debugging for every incident.
Related guides
- Deploy SaaS with Nginx + Gunicorn
- Docker Production Setup for SaaS
- App Crashes on Deployment
- Debugging Production Issues
- SaaS Production Checklist
FAQ
How do I know if the issue is Nginx or the app?
Test the upstream directly from the server. If curl to the app port or socket fails, the app side is broken. If direct curl works but the public endpoint fails, inspect Nginx config, routing, TLS, or headers.
Why does restarting fix it temporarily?
Temporary recovery usually points to resource pressure, worker crashes, memory leaks, deadlocks, or long-running requests exhausting available workers.
Can a bad Nginx config still pass nginx -t and cause 502?
Yes. Syntax can be valid while the upstream target is wrong, stale, or unreachable.
Should I increase timeouts to fix 502?
Only after verifying the app is healthy. Raising timeouts can hide slow queries, blocked workers, or poor request handling.
What should I check first after a fresh deployment?
Check service startup logs, environment variable loading, database migrations, bind address, and whether the app is listening on the expected socket or port.
Why does Nginx return 502 instead of 500?
Because the failure is usually between Nginx and the upstream app, not inside Nginx itself.
What is the difference between 502 and 504?
502 usually means invalid or failed upstream response. 504 usually means the upstream did not respond in time.
Can a database outage cause 502?
Yes, indirectly. If the app fails to boot or crashes while handling requests because the database is unavailable, Nginx may surface it as 502.
Should I use a Unix socket or TCP port?
Either works. Unix sockets are common on single VPS setups. TCP is often simpler in containerized environments.
Why does it happen only after deploy?
Common reasons are failed migrations, missing environment variables, dependency changes, wrong bind address, or the new release crashing on startup.
Final takeaway
Treat 502 as an upstream connectivity problem first, not a generic web error.
Verify the app process, confirm the exact bind target, and read Nginx and app logs together. Most fixes come down to process startup, socket or port mismatch, permissions, timeouts, or resource exhaustion.
After recovery, add health checks, logging, deployment validation, and checklist coverage so the next 502 is shorter and easier to resolve.