Media Uploads Not Working
The essential playbook for diagnosing and fixing media uploads that fail in your SaaS.
Use this page when uploads work in development but fail in staging or production, return 403 / 413 / 500, never appear on disk, or generate broken media URLs.
Goal: isolate whether the failure is caused by request limits, filesystem permissions, reverse proxy config, app storage logic, container volume setup, or object storage credentials.
Quick Fix / Quick Setup
Run the fastest checks first:
# 1) Verify upload directory exists and is writable
mkdir -p /var/www/app/media
chown -R www-data:www-data /var/www/app/media
chmod -R 775 /var/www/app/media
# 2) Check Nginx body size limit
sudo nginx -T | grep -i client_max_body_size
# If missing or too small, set for your server/location:
# client_max_body_size 25M;
# 3) Test app user write access
sudo -u www-data sh -c 'touch /var/www/app/media/.write_test && rm /var/www/app/media/.write_test'
# 4) If using Docker, confirm volume mount
docker inspect <container_name> | grep -A 20 Mounts
# 5) If using S3, verify credentials and bucket access
aws s3 ls s3://YOUR_BUCKET --region YOUR_REGION
# 6) Reload services after config changes
sudo nginx -t && sudo systemctl reload nginx
sudo systemctl restart gunicorn
# 7) Re-test with curl
curl -i -X POST https://your-app.com/upload \
-F 'file=@/tmp/test.png'

Fastest path:
- Confirm request size limits.
- Confirm the app process can write to storage.
- Confirm generated media URLs point to a served location.
- Then check container mounts or S3 permissions.
What’s happening
Uploads usually fail at one of these layers:
- Reverse proxy
- Application validation
- Storage backend
- Media file serving
Common status meanings:
- `413` usually means the request is blocked before the app handles it.
- `403` usually points to permissions, bucket policy, or CSRF/auth enforcement.
- `500` usually means the app accepted the request but failed while saving the file, generating a path, or writing metadata.
- Upload succeeds but media URL breaks: usually path/URL misconfiguration or missing media serving rules.
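As a rough triage aid, these status meanings can be sketched as a small lookup. This is an illustrative helper, not part of any framework; the layer descriptions are simplifications and real failures can cross layers.

```python
# Map common upload HTTP statuses to the layer most likely at fault.
# Illustrative triage helper only; always confirm against logs.
LIKELY_LAYER = {
    413: "reverse proxy or upstream request size limit",
    403: "permissions, bucket policy, or CSRF/auth enforcement",
    500: "app-side save, path generation, or metadata write",
}

def likely_layer(status: int) -> str:
    """Return the layer to inspect first for a given HTTP status."""
    return LIKELY_LAYER.get(status, "app logs and media URL mapping")
```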
Request flow diagram: Browser -> Nginx -> Flask/FastAPI app -> local disk/S3 -> returned media URL
Step-by-step implementation
1) Confirm the exact failure mode
Separate these cases:
- Request rejected before app handles it
- Request reaches app, file not saved
- File saved, but inaccessible
- URL returned is wrong
- Background job needed to finalize upload but worker is down
Check browser DevTools Network tab:
- HTTP status
- Response body
- Request size
- Endpoint path
- Redirect behavior
- Cookies and auth headers
Also reproduce from CLI:
curl -i -X POST https://your-app.com/upload \
-F 'file=@/tmp/test.png'

For authenticated endpoints, include your session or token.
2) Check reverse proxy request size limits
If uploads fail with 413 Request Entity Too Large, start with Nginx.
Inspect current config:
sudo nginx -t
sudo nginx -T | grep -i client_max_body_size
grep -R "client_max_body_size" /etc/nginx

Set an explicit limit:
server {
    server_name your-app.com;
    client_max_body_size 25M;

    location / {
        proxy_pass http://127.0.0.1:8000;
    }
}

Or for a specific upload path:
location /upload {
    client_max_body_size 25M;
    proxy_pass http://127.0.0.1:8000;
}

Reload Nginx:

sudo nginx -t && sudo systemctl reload nginx

If you use a CDN, WAF, or load balancer, check limits there too.
3) Check app-level upload limits
Flask/FastAPI apps may enforce file limits in app code or middleware.
Example Flask config:
app.config["MAX_CONTENT_LENGTH"] = 25 * 1024 * 1024  # 25 MB

Example FastAPI validation pattern:
from fastapi import FastAPI, File, UploadFile, HTTPException
app = FastAPI()
MAX_SIZE = 25 * 1024 * 1024
@app.post("/upload")
async def upload(file: UploadFile = File(...)):
    data = await file.read()
    if len(data) > MAX_SIZE:
        raise HTTPException(status_code=413, detail="File too large")
    return {"filename": file.filename}

If large files fail but small files work, compare:
- Nginx limit
- App validation limit
- Upstream timeout
- Disk space
- CDN/WAF limits
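Note that the FastAPI snippet above reads the entire file into memory before checking its size. A gentler pattern enforces the limit while streaming. The chunk-level logic can be isolated as a plain generator, sketched here independently of any framework:

```python
from typing import Iterable, Iterator

MAX_SIZE = 25 * 1024 * 1024  # 25 MB, matching the app limit above

def enforce_limit(chunks: Iterable[bytes], max_size: int = MAX_SIZE) -> Iterator[bytes]:
    """Yield chunks until the running total exceeds max_size, then fail fast."""
    total = 0
    for chunk in chunks:
        total += len(chunk)
        if total > max_size:
            # In a FastAPI handler you would raise HTTPException(413) here.
            raise ValueError("File too large")
        yield chunk
```

Inside an async handler you would feed this with `await file.read(1024 * 1024)` chunks and write each yielded chunk to disk, so oversized uploads are rejected without buffering the whole file.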
4) Verify local storage path exists and is writable
Check the actual upload directory:
ls -lah /var/www/app/media
stat /var/www/app/media
namei -l /var/www/app/media
df -h
df -i

Create and test with the same runtime user:

sudo -u www-data sh -c 'touch /var/www/app/media/.write_test && rm /var/www/app/media/.write_test'

Fix ownership and permissions:
mkdir -p /var/www/app/media
chown -R www-data:www-data /var/www/app/media
chmod -R 775 /var/www/app/media

Do not rely on relative paths like ./uploads in production. Use absolute paths.
Bad:
UPLOAD_DIR = "./uploads"

Better:

UPLOAD_DIR = "/var/www/app/media"

If running under systemd, Gunicorn, or Docker, relative paths often resolve differently than in local development.
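One defensive pattern is to resolve the upload directory from an environment variable and refuse relative values at startup. This is a sketch; the `MEDIA_DIR` variable name and default path are assumptions, not a framework convention:

```python
import os
from pathlib import Path

def resolve_upload_dir(env_var: str = "MEDIA_DIR",
                       default: str = "/var/www/app/media") -> Path:
    """Read the upload directory from the environment, rejecting relative paths."""
    path = Path(os.environ.get(env_var, default))
    if not path.is_absolute():
        # Fail loudly at startup instead of writing to a surprise location.
        raise ValueError(f"{env_var} must be an absolute path, got: {path}")
    return path
```

Call this once at app startup so a misconfigured environment fails the deploy instead of silently scattering files.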
5) Confirm your app writes where you think it writes
Print or log the final resolved path before saving.
Example Flask:
from pathlib import Path
from werkzeug.utils import secure_filename
UPLOAD_DIR = Path("/var/www/app/media")
def save_file(file_storage):
    UPLOAD_DIR.mkdir(parents=True, exist_ok=True)
    filename = secure_filename(file_storage.filename)
    target = UPLOAD_DIR / filename
    file_storage.save(target)
    return str(target)

Example FastAPI:
from pathlib import Path
from fastapi import FastAPI, UploadFile, File
app = FastAPI()
UPLOAD_DIR = Path("/var/www/app/media")
@app.post("/upload")
async def upload(file: UploadFile = File(...)):
    UPLOAD_DIR.mkdir(parents=True, exist_ok=True)
    target = UPLOAD_DIR / file.filename
    with open(target, "wb") as f:
        while chunk := await file.read(1024 * 1024):
            f.write(chunk)
    return {"path": str(target)}

Prefer generated filenames over raw user filenames.
Example:
import uuid
from pathlib import Path
from werkzeug.utils import secure_filename
def build_name(original_name: str) -> str:
    ext = Path(secure_filename(original_name)).suffix.lower()
    return f"{uuid.uuid4().hex}{ext}"

6) Check media URL generation and serving
A saved file is not enough. The returned URL must match your serving configuration.
If using local disk and Nginx, configure a media path explicitly.
Example Nginx media mapping:
location /media/ {
    alias /var/www/app/media/;
    access_log off;
    expires 7d;
    add_header Cache-Control "public";
}

Important:
- Use `alias` when mapping a URL prefix to a filesystem directory.
- Ensure trailing slash usage is correct.
- Confirm the app returns URLs like `/media/example.png`, not filesystem paths.
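To keep app-generated URLs aligned with the Nginx mapping above, derive them from a single prefix pair. A minimal sketch, assuming the `/media/` prefix and media root from the example config (path normalization is deliberately omitted for brevity, so pass only paths your own code produced):

```python
from pathlib import Path

MEDIA_ROOT = Path("/var/www/app/media")  # filesystem location (the Nginx alias target)
MEDIA_URL = "/media/"                    # URL prefix served by Nginx

def media_url(saved_path: str) -> str:
    """Translate a saved filesystem path into the public URL Nginx serves.

    Raises ValueError if the path is outside MEDIA_ROOT, which catches
    accidental filesystem-path leaks in API responses.
    """
    rel = Path(saved_path).relative_to(MEDIA_ROOT)
    return MEDIA_URL + str(rel)
```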
Test directly:
curl -I https://your-app.com/media/test.png

If the upload API returns success but the file URL gives 404, compare:
- App-generated URL
- Nginx `location` `alias` path
- Actual saved file path
For related file-serving setup, see Static and Media File Handling.
7) Check Docker volume persistence
If uploads work until restart or deploy, treat it as a persistence issue.
Inspect mounts:
docker ps
docker inspect <container_name>
docker exec -it <container_name> sh
docker exec -it <container_name> ls -lah /app/media
docker logs <container_name> --tail 200

You need a persistent volume or bind mount.
Example Docker Compose:
services:
  web:
    image: your-app:latest
    volumes:
      - ./media:/app/media

Or named volume:

services:
  web:
    image: your-app:latest
    volumes:
      - media_data:/app/media

volumes:
  media_data:

If your app writes to /app/media but your mount is attached elsewhere, files will go to the container filesystem and disappear on replacement.
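A cheap guard against ephemeral-filesystem surprises is a startup check that proves the media directory exists and is writable before the app serves traffic. A sketch using only the standard library, mirroring the manual `touch` test from earlier:

```python
import uuid
from pathlib import Path

def assert_writable(media_dir: str) -> None:
    """Fail fast at startup if the media directory is missing or not writable."""
    path = Path(media_dir)
    if not path.is_dir():
        raise RuntimeError(f"media dir does not exist: {path}")
    probe = path / f".write_test_{uuid.uuid4().hex}"
    try:
        probe.write_bytes(b"ok")  # same idea as the manual touch test above
    except OSError as exc:
        raise RuntimeError(f"media dir not writable: {path}") from exc
    finally:
        probe.unlink(missing_ok=True)
```

Run it during container startup so a missing volume mount fails the deploy instead of silently writing to the container filesystem.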
For broader container setup, see Docker Production Setup for SaaS.
8) Check S3 or S3-compatible object storage
If using S3, verify:
- Access key
- Secret key
- Bucket name
- Region
- Endpoint
- Prefix/path
- URL generation mode
- IAM permissions
Validate from CLI:
aws s3 ls s3://YOUR_BUCKET --region YOUR_REGION
aws s3 cp /tmp/test.png s3://YOUR_BUCKET/test.png --region YOUR_REGION
env | egrep 'S3|AWS|MEDIA|UPLOAD|BUCKET'

Required permissions usually include:
- `s3:PutObject`
- `s3:GetObject`
- optionally `s3:DeleteObject`
If using pre-signed URLs:
- Check server clock drift
- Check expiration time
- Check endpoint/region mismatch
- Check whether the client sends the exact headers used in signature generation
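Expiration problems can be checked offline by parsing the signature parameters out of a pre-signed URL. A sketch assuming SigV4-style `X-Amz-Date`/`X-Amz-Expires` query parameters, which AWS-style pre-signed URLs carry:

```python
from datetime import datetime, timedelta, timezone
from urllib.parse import parse_qs, urlparse

def presigned_expiry(url: str) -> datetime:
    """Return the UTC expiry time encoded in a SigV4 pre-signed URL."""
    qs = parse_qs(urlparse(url).query)
    signed_at = datetime.strptime(qs["X-Amz-Date"][0], "%Y%m%dT%H%M%SZ")
    signed_at = signed_at.replace(tzinfo=timezone.utc)
    return signed_at + timedelta(seconds=int(qs["X-Amz-Expires"][0]))
```

Comparing the result against the current UTC time (and against your server clock) quickly separates "URL already expired" from "signature rejected for another reason".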
If uploads happen directly from browser to object storage, verify CORS.
Example S3 CORS:
[
  {
    "AllowedHeaders": ["*"],
    "AllowedMethods": ["GET", "PUT", "POST"],
    "AllowedOrigins": ["https://your-app.com"],
    "ExposeHeaders": ["ETag"]
  }
]

For storage tradeoffs, see File Storage Strategy (Local vs S3).
9) Check auth, CSRF, and multipart handling
If upload requests return 403 or fail only for logged-in users:
- Check CSRF protection for multipart forms
- Check cookie `SameSite` and secure settings
- Check auth middleware behavior on large or streaming requests
- Check whether your frontend includes credentials correctly
Typical issues:
- Missing CSRF token
- Token/header stripped by frontend code
- Session cookie not sent over HTTPS-only rules
- API route protected differently in production
For broader auth failure patterns, see Common Auth Bugs and Fixes.
10) Check background jobs if uploads require processing
If the upload request returns success but the final asset never appears, background processing may be failing.
Examples:
- Thumbnail generation
- Virus scanning
- Metadata extraction
- Video transcoding
- Post-save object move/copy
Check worker processes:
ps aux | egrep 'gunicorn|uvicorn|celery|rq'

Inspect logs:
sudo journalctl -u gunicorn -n 200 --no-pager
docker logs <container_name> --tail 200

If the web app and worker use different paths or different storage credentials, uploads can partially succeed and never become accessible.
Also check the fix page for worker issues: Background Jobs Not Running.
11) Check metadata/database consistency
Sometimes the file saves correctly but metadata insert fails.
Symptoms:
- File exists on disk/S3
- API returns error
- UI cannot find uploaded asset
- Duplicate orphaned files accumulate
Log both operations:
- upload start
- validation result
- final storage key/path
- DB insert/update
- returned URL
- exception stack trace
Prefer one of these patterns:
- Save file, then DB row, and clean up file on DB failure
- Create pending DB row, save file, then mark complete
- Use background processing with explicit status fields
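The first pattern above — save the file, then the DB row, and clean up the file on DB failure — can be sketched roughly as follows. This is illustrative only; `insert_row` stands in for your real metadata write:

```python
from pathlib import Path

def save_with_metadata(target: Path, data: bytes, insert_row) -> None:
    """Write the file first, then metadata; remove the file if the DB write fails."""
    target.write_bytes(data)
    try:
        insert_row(str(target))  # your real DB insert/update goes here
    except Exception:
        target.unlink(missing_ok=True)  # avoid orphaned files on metadata failure
        raise
```

The inverse ordering (pending DB row first, then file, then mark complete) trades orphaned files for orphaned rows, which are usually easier to sweep up later.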
12) Add structured logging around upload flow
At minimum, log:
- request ID
- authenticated user ID
- original filename
- generated filename
- content length
- storage backend
- final save path/object key
- returned URL
- exception stack trace
Use this during debugging and incidents. For production incident workflow, see Debugging Production Issues and Error Tracking with Sentry.
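One way to keep these fields consistent across the upload flow is to assemble the log payload in a single place. A sketch; the field names here are assumptions, not a logging standard:

```python
import json
import logging

logger = logging.getLogger("uploads")

def upload_log_record(request_id, user_id, original_name, stored_name,
                      content_length, backend, storage_key, url, error=None):
    """Collect the upload-flow fields listed above into one structured payload."""
    return {
        "request_id": request_id,
        "user_id": user_id,
        "original_filename": original_name,
        "generated_filename": stored_name,
        "content_length": content_length,
        "storage_backend": backend,
        "storage_key": storage_key,
        "returned_url": url,
        "error": error,
    }

# Emit as one JSON line so log aggregators can index every field:
# logger.info(json.dumps(upload_log_record(...)))
```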
Decision tree: 413 vs 403 vs 500 vs saved-but-not-accessible
Common causes
- Nginx `client_max_body_size` is too small, causing `413 Request Entity Too Large`.
- Upload directory does not exist or is not writable by the app process user.
- Docker container writes files to ephemeral filesystem instead of a mounted volume.
- `MEDIA_ROOT` or equivalent storage path points to the wrong directory in production.
- Nginx media `alias`/`root` configuration is wrong, so saved files cannot be served.
- S3 credentials, bucket name, region, or endpoint are incorrect.
- Bucket policy or IAM permissions do not allow `PutObject` or `GetObject`.
- Application code uses relative paths that break under systemd, Gunicorn, or container working directories.
- Background worker that finalizes uploads or thumbnails is down.
- CSRF, session, or auth middleware blocks multipart upload requests.
- Filename sanitization or content-type validation rejects files unexpectedly.
- SELinux/AppArmor policy blocks filesystem writes despite Unix permissions looking correct.
Common local disk setup checks
- Set a dedicated media directory outside the release directory if deployments replace app files.
- Map `/media/` URLs to the filesystem path in Nginx using `alias`, not `root`, when appropriate.
- Keep upload directories persistent across deploys and container restarts.
- Do not rely on app-relative paths like `./uploads` in production unless the working directory is fixed and persistent.
- Ensure SELinux/AppArmor policies are not blocking writes on hardened servers.
Common S3/object storage setup checks
- Verify access key, secret key, bucket, region, and endpoint values match the target environment.
- Confirm the IAM user or role has `s3:PutObject`, `s3:GetObject`, and if needed `s3:DeleteObject` for the bucket prefix.
- Check whether your app expects public URLs, signed URLs, or proxied downloads, and keep that strategy consistent.
- Validate CORS if uploads happen directly from the browser to object storage.
- Check for clock drift or invalid signature errors when using pre-signed URLs.
Debugging tips
Use these commands directly.
Nginx and process checks
sudo nginx -t
sudo nginx -T | grep -i client_max_body_size
grep -R "client_max_body_size" /etc/nginx
sudo systemctl status nginx
sudo systemctl status gunicorn
sudo journalctl -u nginx -n 200 --no-pager
sudo journalctl -u gunicorn -n 200 --no-pager
ps aux | egrep 'gunicorn|uvicorn|celery|rq'

Filesystem checks
ls -lah /var/www/app/media
stat /var/www/app/media
namei -l /var/www/app/media
sudo -u www-data sh -c 'touch /var/www/app/media/.write_test && rm /var/www/app/media/.write_test'
df -h
df -i
python -c "from pathlib import Path; p=Path('/var/www/app/media'); print(p.exists(), p.is_dir())"

Docker checks
docker ps
docker inspect <container_name>
docker exec -it <container_name> sh
docker exec -it <container_name> ls -lah /app/media
docker logs <container_name> --tail 200

Request and URL checks
curl -i -X POST https://your-app.com/upload -F 'file=@/tmp/test.png'
curl -I https://your-app.com/media/test.png

S3 checks
aws s3 ls s3://YOUR_BUCKET --region YOUR_REGION
aws s3 cp /tmp/test.png s3://YOUR_BUCKET/test.png --region YOUR_REGION
env | egrep 'S3|AWS|MEDIA|UPLOAD|BUCKET'

Practical rules:
- If response is `413`, fix proxy or upstream request limits before changing app code.
- If response is `500`, inspect app logs immediately after a test upload and capture the exact exception.
- If the upload endpoint returns `200` but file access fails, inspect the generated URL and compare it to media serving config.
- If files disappear after deploy or restart, treat it as a persistence problem.
- If uploads work in dev but not production, compare env vars, file paths, mounts, and proxy config.
- If only large files fail, compare Nginx, Gunicorn/Uvicorn, app validation, and CDN/WAF limits.
- If authenticated users fail intermittently, inspect session, CSRF, and cookie behavior on multipart requests.
Checklist
- ✓ Upload endpoint returns expected status code and structured error messages.
- ✓ Reverse proxy request size limits match product requirements.
- ✓ App process can write to the configured storage backend.
- ✓ Upload paths are absolute, consistent, and environment-specific.
- ✓ Media URLs resolve to a reachable file or signed object URL.
- ✓ Storage is persistent across deploys, restarts, and container replacement.
- ✓ File metadata writes and storage writes succeed together or are rolled back safely.
- ✓ Logs capture storage backend exceptions and request identifiers.
- ✓ Background workers are running if post-processing is required.
- ✓ Security checks exist for content type, extension, size, and filename sanitization.
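The last item — content type, extension, size, and filename checks — can be sketched as one validation gate. The allowlist and size limit here are illustrative examples; tune them to your product:

```python
from pathlib import Path

ALLOWED_EXTENSIONS = {".png", ".jpg", ".jpeg", ".gif", ".webp"}  # example allowlist
MAX_SIZE = 25 * 1024 * 1024  # matches the 25 MB limit used throughout this guide

def validate_upload(filename: str, size: int) -> str:
    """Check extension and size; return a sanitized filename or raise ValueError."""
    name = Path(filename).name       # strip any directory components from the name
    ext = Path(name).suffix.lower()
    if ext not in ALLOWED_EXTENSIONS:
        raise ValueError(f"extension not allowed: {ext}")
    if size > MAX_SIZE:
        raise ValueError("file too large")
    return name
```

Extension checks alone do not verify content type; pair this with server-side content sniffing if your threat model requires it.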
For final rollout verification, use SaaS Production Checklist.
Related guides
- Static and Media File Handling
- File Storage Strategy (Local vs S3)
- Docker Production Setup for SaaS
- Environment Setup on VPS
- Background Jobs Not Running
- Debugging Production Issues
- Error Tracking with Sentry
- SaaS Production Checklist
FAQ
Why are uploaded files returning 404 after a successful upload?
The file was likely saved, but the returned URL does not match the server path exposed by Nginx or your app. Check media URL prefix, alias/root config, and storage backend URL generation.
Why do uploads fail only in Docker production?
Usually because the app writes to a container path without a persistent volume mount, or the runtime path differs from what the app expects.
Should the app serve uploaded files directly?
For local disk, small SaaS apps often let Nginx serve media directly. For object storage, return signed or public URLs depending on your access model.
Why do image uploads work but video uploads fail?
Large files commonly hit Nginx body size limits, app validation limits, upstream timeout limits, or disk space issues.
Can background jobs break uploads?
Yes. If uploads require thumbnail generation, virus scanning, or metadata extraction in Celery/RQ, the initial request may succeed while the final asset never becomes available.
Why do uploads work locally but fail in production?
Production adds more moving parts: Nginx, stricter filesystem permissions, containers, environment variables, non-persistent storage, and different auth/cookie behavior.
Why does the API return success but the image URL 404s?
The file may have saved correctly, but the media URL mapping or Nginx alias is wrong.
Should I store uploads on local disk or S3?
Local disk is simpler for early MVPs on one server. S3-compatible storage is safer for scaling, backups, and multi-instance deployments.
Why do uploads disappear after deployment?
The upload directory is likely inside the app release path or container filesystem and is being replaced on deploy.
Why do only large files fail?
Most often due to client_max_body_size or an app-level size limit.
Final takeaway
Treat media upload failures as a chain problem:
- request acceptance
- app validation
- storage write
- persistence
- file serving
Start with status code and logs, then verify write access and media URL mapping before changing application code.
For production SaaS apps, persistent storage and explicit media serving rules prevent most recurring upload issues.