Disclaimer: This code is provided as a sample/template for educational and reference purposes only. It is provided "as-is" without warranty of any kind. See the LICENSE file for details. You are responsible for reviewing, testing, and adapting this code for your own use case before deploying to production.
A Cloudflare Worker that proxies requests to a private AWS S3 bucket hosting a static web application. The bucket never needs to be made public; the Worker signs every request using AWS Signature V4 via aws4fetch.
```
┌──────────┐    ┌─────────────────┐     ┌─────────────────────┐
│  Client  │--->│ Cloudflare Edge │---->│    AWS S3 Bucket    │
│ (Browser)│<---│                 │<----│      (Private)      │
└──────────┘    │ - TLS           │     │                     │
                │ - Caching       │     │ - Block public      │
                │ - Worker        │     │   access: ON        │
                │   (signs req)   │     │ - IAM policy:       │
                └─────────────────┘     │   GetObject only    │
                                        └─────────────────────┘
```
How it works:

- Client requests `https://your-worker.workers.dev/assets/app.js`
- Cloudflare edge receives the request and invokes the Worker
- Worker checks edge cache; if miss, signs the request with AWS credentials
- Worker fetches from S3 using the signed request
- Worker sanitizes response headers and caches at the edge
- Client receives the file with Cloudflare's CDN benefits
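The flow above can be sketched in Worker code. This is a minimal sketch, not the project's actual source: the real SigV4 signing is done by aws4fetch's `AwsClient` (whose `fetch()` sends signed requests), modeled here behind a small interface so the routing logic stands alone; `s3ObjectUrl` and `handleRequest` are illustrative names.

```typescript
// Minimal sketch of the proxy flow. In the real Worker the signer is
// aws4fetch's AwsClient({ accessKeyId, secretAccessKey }), whose fetch()
// signs requests with AWS Signature V4; it is abstracted here so the
// URL/routing logic is self-contained.
interface SignedFetcher {
  fetch(url: string, init?: { method?: string }): Promise<Response>;
}

// Build the virtual-hosted-style S3 URL for the requested object.
function s3ObjectUrl(bucket: string, region: string, path: string): string {
  return `https://${bucket}.s3.${region}.amazonaws.com${path}`;
}

async function handleRequest(
  request: Request,
  signer: SignedFetcher,
  bucket: string,
  region: string,
): Promise<Response> {
  const url = new URL(request.url);
  // Map the root path to the site's entry point.
  const path = url.pathname === "/" ? "/index.html" : url.pathname;
  // An edge-cache lookup would happen here before going to origin.
  return signer.fetch(s3ObjectUrl(bucket, region, path), {
    method: request.method, // GET or HEAD
  });
}
```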
Prerequisites:

- Cloudflare account with Workers enabled
- AWS account with an S3 bucket containing your static site
- AWS IAM credentials scoped to the specific bucket (see IAM Policy below)
- Node.js 18+ and npm
Setup:

```sh
cd output/aws4fetch-worker-proxy
npm install
```

Edit `wrangler.jsonc` and set your bucket name and region.

Copy the example file and add your AWS credentials:

```sh
cp .dev.vars.example .dev.vars
```

Edit `.dev.vars`:

```
AWS_ACCESS_KEY_ID=AKIA...your-key...
AWS_SECRET_ACCESS_KEY=...your-secret...
```

Never commit `.dev.vars` to version control.

Start the local dev server:

```sh
npm run dev
```

Open http://localhost:8787 in your browser.
Deployment:

First, add your secrets to Cloudflare:

```sh
npx wrangler secret put AWS_ACCESS_KEY_ID
# Paste your access key when prompted

npx wrangler secret put AWS_SECRET_ACCESS_KEY
# Paste your secret key when prompted
```

Then deploy:

```sh
npm run deploy
```

IAM Policy:

Create a dedicated IAM user (or role) with the minimum required permissions. Never use broad `s3:*` permissions.
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowGetObject",
      "Effect": "Allow",
      "Action": [
        "s3:GetObject"
      ],
      "Resource": "arn:aws:s3:::your-bucket-name/*"
    },
    {
      "Sid": "AllowListBucket",
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket"
      ],
      "Resource": "arn:aws:s3:::your-bucket-name"
    }
  ]
}
```

Replace `your-bucket-name` with your actual bucket name.
- Go to AWS IAM Console > Users > Create User
- Name it something like `cloudflare-s3-proxy-readonly`
- Attach the policy above (create it as an inline policy or managed policy)
- Create an access key under Security Credentials
- Save the Access Key ID and Secret Access Key securely
All configuration is in `wrangler.jsonc`:

| Variable | Description | Default |
|---|---|---|
| `S3_BUCKET` | S3 bucket name (not the full URL) | `my-private-bucket` |
| `S3_REGION` | AWS region (e.g., `us-east-1`, `eu-west-1`) | `us-east-1` |
| `SPA_MODE` | Serve `index.html` for 404s (for React/Vue/Angular apps) | `true` |
| `CACHE_MAX_AGE` | Edge cache duration in seconds | `3600` |
When `SPA_MODE` is `true`:

- Requests to `/some/route` that return 404 from S3 will serve `/index.html` instead
- This enables client-side routing in single-page applications
- Direct file requests (e.g., `/assets/app.js`) work normally

Set `SPA_MODE` to `false` for traditional static sites where every URL maps to an actual file.
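The fallback logic can be sketched as follows (assuming a `fetchFromS3` helper that performs the signed fetch; both names are illustrative, not the project's actual code):

```typescript
// Sketch of the SPA_MODE fallback: if S3 returns 404 for a route,
// retry with /index.html so client-side routing can take over.
async function withSpaFallback(
  path: string,
  spaMode: boolean,
  fetchFromS3: (path: string) => Promise<Response>,
): Promise<Response> {
  const res = await fetchFromS3(path);
  if (spaMode && res.status === 404) {
    // Serve the app shell; the browser URL stays on the requested route.
    return fetchFromS3("/index.html");
  }
  return res;
}
```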
- Path traversal prevention: blocks `..` in paths
- Method restriction: only GET and HEAD allowed (405 for others)
- Header sanitization: strips all `x-amz-*` headers from responses
- Error masking: does not leak S3 error details to clients
- Security headers: adds `X-Content-Type-Options: nosniff`
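A sketch of how those checks might look (illustrative helper names, not the project's actual code):

```typescript
// Reject disallowed methods and path traversal before touching S3.
// Returns an HTTP status to send, or null if the request is acceptable.
function checkRequest(method: string, path: string): number | null {
  if (method !== "GET" && method !== "HEAD") return 405;
  if (path.includes("..")) return 404; // masked: don't reveal why it was blocked
  return null;
}

// Strip x-amz-* response headers and add security headers.
function sanitizeHeaders(headers: Headers): Headers {
  const out = new Headers();
  for (const [name, value] of headers) {
    if (!name.toLowerCase().startsWith("x-amz-")) out.set(name, value);
  }
  out.set("X-Content-Type-Options", "nosniff");
  return out;
}
```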
- Rate limiting: Configure Cloudflare Rate Limiting rules or use the Rate Limiting API in the Worker
- Access control (if the site should not be public):
  - Add Cloudflare Access (Zero Trust) in front of the Worker
  - Or implement JWT/token validation in the Worker
  - Or add IP allowlisting via Cloudflare WAF
- Custom domain: Deploy to a custom domain instead of `*.workers.dev`:

  ```jsonc
  {
    "routes": [
      { "pattern": "app.example.com/*", "zone_name": "example.com" }
    ]
  }
  ```

- Monitoring: Enable Cloudflare Workers Analytics and set up alerts
Secrets are set per-Worker using `wrangler secret put`. This is simple but requires setting secrets for each Worker separately.

For multiple Workers or team environments, use Cloudflare Secrets Store:

- Create a secrets store and add your AWS credentials there
- Bind the secrets to your Worker in `wrangler.jsonc`:
```jsonc
{
  "secrets_store_secrets": [
    {
      "binding": "AWS_ACCESS_KEY_ID",
      "store_id": "your-store-id",
      "secret_name": "aws-access-key-id"
    },
    {
      "binding": "AWS_SECRET_ACCESS_KEY",
      "store_id": "your-store-id",
      "secret_name": "aws-secret-access-key"
    }
  ]
}
```

This allows centralized credential management and rotation without redeploying Workers.
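Supporting both approaches in code can be sketched like this (it assumes Secrets Store bindings expose an async `get()`, as in Cloudflare's binding API, while `wrangler secret put` secrets arrive as plain strings; `readSecret` is an illustrative helper):

```typescript
// Plain secrets (wrangler secret put) arrive as strings on env;
// Secrets Store bindings are objects with an async get().
type SecretsStoreBinding = { get(): Promise<string> };

async function readSecret(v: string | SecretsStoreBinding): Promise<string> {
  return typeof v === "string" ? v : v.get();
}
```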
Caching Strategy:

The Worker uses two layers of caching:

- Cloudflare Edge Cache: Responses are cached at Cloudflare's edge using the Cache API. The `CACHE_MAX_AGE` setting controls how long content stays cached.
- Browser Cache: The `Cache-Control` header tells browsers to cache responses locally.
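The edge-cache layer could be sketched as follows. In a Worker, `cache` would be `caches.default` and the `put` would usually be wrapped in `ctx.waitUntil()`; the cache is injected here (with an assumed minimal interface) so the flow is self-contained:

```typescript
// Sketch of the edge-cache layer. In a Worker: cache = caches.default,
// and cache.put would go inside ctx.waitUntil() so it doesn't block the response.
interface EdgeCache {
  match(req: Request): Promise<Response | undefined>;
  put(req: Request, res: Response): Promise<void>;
}

async function cachedFetch(
  request: Request,
  cache: EdgeCache,
  maxAge: number,
  origin: (req: Request) => Promise<Response>,
): Promise<Response> {
  const hit = await cache.match(request);
  if (hit) return hit; // served from the edge cache

  const upstream = await origin(request);
  // Re-wrap so headers are mutable, then set the edge + browser lifetime.
  const res = new Response(upstream.body, upstream);
  res.headers.set("Cache-Control", `public, max-age=${maxAge}`);
  await cache.put(request, res.clone());
  return res;
}
```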
To purge cached content after deploying new files to S3:
```sh
# Purge everything (use sparingly)
curl -X POST "https://api.cloudflare.com/client/v4/zones/{zone_id}/purge_cache" \
  -H "Authorization: Bearer {api_token}" \
  -H "Content-Type: application/json" \
  --data '{"purge_everything":true}'

# Purge specific URLs
curl -X POST "https://api.cloudflare.com/client/v4/zones/{zone_id}/purge_cache" \
  -H "Authorization: Bearer {api_token}" \
  -H "Content-Type: application/json" \
  --data '{"files":["https://app.example.com/index.html"]}'
```

Or use the Cloudflare dashboard: Caching > Configuration > Purge Cache.
Test locally:

```sh
npm run dev

# In another terminal:
curl -i http://localhost:8787/
curl -i http://localhost:8787/index.html
curl -i http://localhost:8787/assets/app.js
curl -i http://localhost:8787/nonexistent   # Should return index.html in SPA mode
curl -X POST http://localhost:8787/         # Should return 405
```

Test the deployed Worker:

```sh
curl -i https://s3-proxy.your-subdomain.workers.dev/
curl -I https://s3-proxy.your-subdomain.workers.dev/   # HEAD request
```

Tail live logs:

```sh
npm run tail
# or
npx wrangler tail
```

Troubleshooting:

If requests fail with authentication or signature errors:

- Verify `S3_REGION` matches the actual bucket region
- Check that credentials are correct (no extra whitespace)
- Ensure the bucket name is exact (case-sensitive)

If access is denied despite correct credentials:

- Verify the IAM policy allows `s3:GetObject` on the correct bucket ARN
- Check that the IAM user/role has the policy attached
- Ensure the bucket does not have a bucket policy that denies access

If files unexpectedly return 404:

- Verify the bucket contains the expected files
- Check that file paths in S3 match the requested URLs
- Try disabling SPA mode to distinguish real 404s from the fallback
If stale content is served after updating S3:

- Purge the Cloudflare cache (see Caching Strategy above)
- Reduce `CACHE_MAX_AGE` during development
Consider these alternatives:
If you can migrate data to Cloudflare R2:
- Zero-latency access from Workers (same network)
- No credentials to manage (bindings are automatic)
- S3-compatible API for easy migration
```jsonc
// wrangler.jsonc with R2 binding
{
  "r2_buckets": [
    { "binding": "BUCKET", "bucket_name": "my-bucket" }
  ]
}
```

```js
// Worker code with R2
const object = await env.BUCKET.get(key);
if (object === null) return new Response("Not found", { status: 404 }); // get() resolves to null on miss
return new Response(object.body);
```

If authenticated signing is not required:
- Make the bucket publicly readable
- Add a bucket policy that only allows Cloudflare IP ranges
- Use Cloudflare DNS + CDN (no Worker needed)
- Simpler, but the bucket is technically "public"
For S3 buckets behind a VPC endpoint (not internet-accessible):
- Deploy `cloudflared` in your VPC
- Create a Workers VPC Service
- Worker accesses S3 through the tunnel
- No IAM credentials in the Worker
See: Workers VPC Private S3 Bucket
For enterprise customers:
- Use dedicated egress IPs
- Whitelist those IPs in S3 bucket policy
- Route traffic through Cloudflare Gateway
See: Protect S3 with Zero Trust
If files are large and you want to avoid streaming through the Worker:
- Worker generates a short-lived presigned URL (e.g., 60 seconds)
- Worker returns a 302 redirect to the presigned URL
- Client downloads directly from S3
- Reduces Worker CPU time and avoids body size limits
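One possible sketch of the presigning step uses aws4fetch's query-signing mode (`aws.sign(request, { aws: { signQuery: true } })`, with an `X-Amz-Expires` query parameter bounding the URL's lifetime). That behavior and all names here are assumptions for illustration; the signer is injected so the URL handling stands alone:

```typescript
// Hypothetical sketch of a presigned-URL helper. In the Worker, sign would be
//   (r) => aws.sign(r, { aws: { signQuery: true } })
// which (per aws4fetch) returns a Request whose URL carries the signature
// as query parameters.
type QuerySigner = (req: Request) => Promise<Request>;

async function generatePresignedUrl(
  objectUrl: string,
  expiresSeconds: number,
  sign: QuerySigner,
): Promise<string> {
  const url = new URL(objectUrl);
  // aws4fetch reads X-Amz-Expires from the query string when query-signing.
  url.searchParams.set("X-Amz-Expires", String(expiresSeconds));
  const signed = await sign(new Request(url.toString()));
  return signed.url;
}
```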
```js
// Generate presigned URL and redirect
const signedUrl = await generatePresignedUrl(key, 60);
return Response.redirect(signedUrl, 302);
```

Resources:

- aws4fetch on npm
- Cloudflare Workers documentation
- Wrangler configuration
- Workers secrets
- Cloudflare Secrets Store
- AWS S3 REST API
- AWS Signature Version 4
This project is licensed under the MIT License. See the LICENSE file for the full text.
This code is provided as a sample/template for educational purposes. It is offered "as-is" without any warranty, express or implied. The authors and contributors accept no liability for any damages, security issues, or costs arising from the use of this code. You are solely responsible for:
- Reviewing and understanding the code before use
- Testing thoroughly in your own environment
- Adapting the code to meet your specific security and compliance requirements
- Managing your own AWS credentials and Cloudflare configuration securely
- Any costs incurred from AWS, Cloudflare, or other services
Example `wrangler.jsonc` vars:

```jsonc
{
  "vars": {
    "S3_BUCKET": "your-actual-bucket-name",
    "S3_REGION": "eu-west-1", // or your region
    "SPA_MODE": "true",
    "CACHE_MAX_AGE": "3600"
  }
}
```