feat: pluggable storage backend (local + S3-compatible) #3
Add `domain.Storage` interface with Upload/Download/Exists/SignedURL/Delete and two production implementations:

- LocalStorage: atomic writes (tmp + rename), path-traversal protection, configurable URL prefix, survives process restart.
- S3Storage: aws-sdk-go-v2 with custom BaseEndpoint and UsePathStyle toggle — supports AWS S3, Cloudflare R2, MinIO, Backblaze B2, DO Spaces, and any other S3-compatible service. Returns 24h presigned GET URLs from Upload.

Factory NewFromConfig fails fast if STORAGE_BACKEND=s3 but bucket is empty. Backwards-compatible env fallback: STORAGE_TYPE/AWS_BUCKET/AWS_REGION still work.

Wired into main.go (constructed once, injected into BlogHandler and PaymentHandler). UploadFile replaced by UploadFormFile(ctx, storage, file, prefix) using the configured backend.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
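For orientation, a minimal sketch of what the `domain.Storage` interface described above might look like — the `Upload` and `SignedURL` signatures match those quoted in the review comments below, while `Download`'s `io.ReadCloser` return type is an assumption:

```go
package domain

import (
	"context"
	"io"
	"time"
)

// Storage persists and retrieves binary objects (sketch only; see
// backend/internal/domain/storage.go for the real definition).
type Storage interface {
	Upload(ctx context.Context, key string, r io.Reader, contentType string) (string, error)
	Download(ctx context.Context, key string) (io.ReadCloser, error)
	Exists(ctx context.Context, key string) (bool, error)
	SignedURL(ctx context.Context, key string, ttl time.Duration) (string, error)
	Delete(ctx context.Context, key string) error
}
```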
📝 Walkthrough

Adds a pluggable storage abstraction with local and S3 implementations, a config-driven factory, handler wiring to inject storage, upload refactor to use the abstraction, config fields for storage, and AWS SDK v2 dependency updates.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client
    participant Handler
    participant Storage as Storage Interface
    participant Local as LocalStorage
    participant S3 as S3Storage
    Client->>Handler: POST /upload (multipart)
    Handler->>Storage: Upload(ctx, key, reader, contentType)
    alt Backend == local
        Storage->>Local: Upload(ctx, key, reader, contentType)
        Local->>Local: Validate key\nWrite temp file\nAtomic rename
        Local-->>Storage: return local URL
    else Backend == s3
        Storage->>S3: Upload(ctx, key, reader, contentType)
        S3->>S3: Validate key\nRead into memory\nPutObject
        S3->>S3: Generate presigned GET URL
        S3-->>Storage: return presigned URL
    end
    Storage-->>Handler: URL
    Handler-->>Client: 200 OK { url }
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: ❌ 1 failed (1 warning) | ✅ 4 passed
Pull request overview
Introduces a pluggable domain.Storage abstraction and wires it through the server so uploads can be stored either on the local filesystem or in an S3-compatible bucket, configured via environment variables.
Changes:
- Added `domain.Storage` plus `LocalStorage` and `S3Storage` implementations, including a config-driven factory.
- Updated upload flow to use `UploadFormFile(ctx, storage, ...)` and injected storage into Blog/Payment handlers.
- Extended configuration (new `STORAGE_*` envs with legacy fallbacks) and added targeted storage tests.
Reviewed changes
Copilot reviewed 14 out of 15 changed files in this pull request and generated 4 comments.
| File | Description |
|---|---|
| backend/pkg/config/config.go | Adds new storage backend configuration fields and env loading (with legacy fallbacks). |
| backend/internal/domain/storage.go | Introduces domain.Storage interface and shared storage errors. |
| backend/internal/infrastructure/storage/local.go | Implements filesystem-backed storage with atomic writes and traversal protection. |
| backend/internal/infrastructure/storage/s3.go | Implements S3-compatible storage with presigned GET URLs. |
| backend/internal/infrastructure/storage/factory.go | Adds NewFromConfig factory to select backend and fail-fast on invalid S3 config. |
| backend/internal/infrastructure/storage/local_test.go | Tests local storage behaviors (round-trip, traversal rejection, restart persistence, delete idempotency). |
| backend/internal/infrastructure/storage/s3_endpoint_test.go | Tests S3 constructor acceptance of custom endpoints + missing-bucket fail-fast. |
| backend/internal/infrastructure/storage/factory_test.go | Tests factory backend selection and fail-fast behavior. |
| backend/internal/handlers/upload.go | Replaces legacy local-only upload helper with backend-driven UploadFormFile. |
| backend/internal/handlers/blog.go | Injects storage and uses it for cover uploads. |
| backend/internal/handlers/blog_test.go | Updates handler construction signature (passes nil storage in tests). |
| backend/internal/handlers/payment.go | Injects storage and uses it for receipt uploads. |
| backend/cmd/server/main.go | Initializes storage once, injects into handlers, and adjusts static serving configuration. |
| backend/go.mod | Adds aws-sdk-go-v2 deps (and updates direct/indirect requires). |
| backend/go.sum | Records new dependency checksums for aws-sdk-go-v2 and related packages. |
```go
// Storage backend (local or s3)
storageBackend, err := storage.NewFromConfig(c.Background(), cfg)
if err != nil {
	log.Fatalf("Storage init failed: %v", err)
}
log.Printf("Storage backend: %s", cfg.StorageBackend)
```
The log message uses cfg.StorageBackend, which can be empty when the factory defaults to "local" (or when legacy env vars are used). Log the resolved backend (e.g., the backend chosen inside NewFromConfig, or derive from storageBackend type) so operators see the actual in-use backend.
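One hedged way to act on this — type names assumed, deriving the label from the value `NewFromConfig` actually returned rather than from the raw config string:

```go
// Sketch: report the backend actually in use. Assumes NewFromConfig
// returns *storage.S3Storage for s3 and a local implementation otherwise.
resolved := "local"
if _, ok := storageBackend.(*storage.S3Storage); ok {
	resolved = "s3"
}
log.Printf("Storage backend: %s", resolved)
```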
```go
// Static file serving (only meaningful when STORAGE_BACKEND=local —
// for S3 the URLs returned are presigned and served by S3 directly).
staticRoot := cfg.StorageLocalPath
if staticRoot == "" {
	staticRoot = cfg.UploadDir
}
staticMount := cfg.StorageURLPrefix
if staticMount == "" {
	staticMount = "/static"
}
app.Static(staticMount, staticRoot)
```
Static file serving is configured unconditionally. When running with STORAGE_BACKEND=s3, this still exposes whatever directory staticRoot points to over HTTP, which can be surprising and may leak local artifacts if the path is misconfigured. Consider only registering app.Static(...) when the resolved backend is local (or explicitly document/guard against serving files in S3 mode).
```diff
-// Static file serving (only meaningful when STORAGE_BACKEND=local —
-// for S3 the URLs returned are presigned and served by S3 directly).
-staticRoot := cfg.StorageLocalPath
-if staticRoot == "" {
-	staticRoot = cfg.UploadDir
-}
-staticMount := cfg.StorageURLPrefix
-if staticMount == "" {
-	staticMount = "/static"
-}
-app.Static(staticMount, staticRoot)
+// Static file serving is only enabled for the local storage backend.
+// For S3, file URLs should be presigned and served by S3 directly.
+if cfg.StorageBackend == "" || cfg.StorageBackend == "local" {
+	staticRoot := cfg.StorageLocalPath
+	if staticRoot == "" {
+		staticRoot = cfg.UploadDir
+	}
+	staticMount := cfg.StorageURLPrefix
+	if staticMount == "" {
+		staticMount = "/static"
+	}
+	app.Static(staticMount, staticRoot)
+} else {
+	log.Printf("Static file serving disabled for storage backend %q", cfg.StorageBackend)
+}
```
```go
switch backend {
case "s3":
	if cfg.StorageS3Bucket == "" {
		return nil, fmt.Errorf("storage: STORAGE_BACKEND=s3 but STORAGE_S3_BUCKET is empty")
```
In the s3 branch, the fail-fast error for an empty bucket is a plain fmt.Errorf(...) string. To keep error handling consistent with NewS3Storage (which wraps domain.ErrInvalidInput), consider wrapping/returning domain.ErrInvalidInput here as well so callers can reliably classify config errors.
```diff
-return nil, fmt.Errorf("storage: STORAGE_BACKEND=s3 but STORAGE_S3_BUCKET is empty")
+return nil, fmt.Errorf("%w: storage: STORAGE_BACKEND=s3 but STORAGE_S3_BUCKET is empty", domain.ErrInvalidInput)
```
```go
body, err := io.ReadAll(r)
if err != nil {
	return "", fmt.Errorf("storage: read body: %w", err)
}
input := &s3.PutObjectInput{
	Bucket: aws.String(s.bucket),
	Key:    aws.String(key),
	Body:   bytesReadSeeker(body),
```
Upload reads the entire object into memory (io.ReadAll) before calling PutObject, which can cause unbounded memory usage for large uploads. Since callers (e.g., multipart.File) are often seekable, prefer streaming the reader directly (or using a seekable reader when available) and only buffer when necessary (optionally enforcing a max size).
```diff
-body, err := io.ReadAll(r)
-if err != nil {
-	return "", fmt.Errorf("storage: read body: %w", err)
-}
 input := &s3.PutObjectInput{
 	Bucket: aws.String(s.bucket),
 	Key:    aws.String(key),
-	Body:   bytesReadSeeker(body),
+	Body:   r,
```
Actionable comments posted: 4
🧹 Nitpick comments (12)
backend/internal/handlers/upload.go (1)
16-21: Defensive validation of `prefix`.

`prefix` flows directly into the storage key as `"<prefix>/<uuid><ext>"`. Today only `"covers"` and `"receipts"` are passed, but an empty prefix would produce a leading-slash key that the local backend's `validateKey` rejects as absolute (surfacing as a 500), and a prefix containing `..` would be rejected as traversal. A cheap upfront check turns these into clear 4xx errors and prevents future misuse.

♻️ Suggested guard

```diff
 func UploadFormFile(ctx context.Context, storage domain.Storage, file *multipart.FileHeader, prefix string) (string, error) {
 	if storage == nil {
 		return "", fmt.Errorf("upload: storage is nil")
 	}
+	if prefix == "" || strings.Contains(prefix, "..") || strings.HasPrefix(prefix, "/") {
+		return "", fmt.Errorf("upload: invalid prefix %q", prefix)
+	}
 	ext := filepath.Ext(file.Filename)
```

(add `"strings"` to imports)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/internal/handlers/upload.go` around lines 16 - 21, Validate the incoming prefix at the top of UploadFormFile: ensure it is non-empty, does not start with a slash, and does not contain path traversal segments like ".." (also reject backslashes if relevant); if any check fails return a clear client error instead of continuing (so callers get a 4xx-like error), and add "strings" to imports for the checks; refer to function UploadFormFile and the generated storage key "<prefix>/<uuid><ext>" when making the guard.

backend/internal/infrastructure/storage/factory_test.go (1)
22-36: Optionally assert error sentinel.

The S3-missing-bucket and unknown-backend tests only check `err != nil`. Asserting against `errors.Is(err, domain.ErrInvalidInput)` (or whatever sentinel the factory wraps) would make the contract enforceable and catch regressions if the error type changes silently.
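A hedged sketch of the tightened assertion — the config field name and constructor call are assumed from the surrounding review, not copied from the test file:

```go
func TestFactory_S3MissingBucketFails(t *testing.T) {
	// Hypothetical shape: backend forced to s3 with no bucket configured.
	cfg := &config.Config{StorageBackend: "s3"}
	_, err := storage.NewFromConfig(context.Background(), cfg)
	if !errors.Is(err, domain.ErrInvalidInput) {
		t.Fatalf("want domain.ErrInvalidInput, got %v", err)
	}
}
```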
🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@backend/internal/infrastructure/storage/factory_test.go` around lines 22 - 36, Update the two tests TestFactory_S3MissingBucketFails and TestFactory_UnknownBackend to assert the returned error matches the expected sentinel instead of only checking for non-nil; after calling storage.NewFromConfig, replace the err != nil checks with an errors.Is(err, domain.ErrInvalidInput) (or the correct sentinel exported by your domain package) assertion so the tests enforce the factory's error contract; reference storage.NewFromConfig and domain.ErrInvalidInput when updating the assertions.

backend/cmd/server/main.go (1)
117-122: `log.Fatalf` here skips deferred `pool.Close()`.

`log.Fatalf` calls `os.Exit(1)`, which bypasses the deferred `pool.Close()` at line 40. Compare with the migrations block (lines 44-48), which explicitly calls `pool.Close()` before `return`. For consistency and clean shutdown, prefer the same pattern here.

♻️ Proposed change

```diff
-storageBackend, err := storage.NewFromConfig(c.Background(), cfg)
-if err != nil {
-	log.Fatalf("Storage init failed: %v", err)
-}
+storageBackend, err := storage.NewFromConfig(c.Background(), cfg)
+if err != nil {
+	pool.Close()
+	log.Printf("Storage init failed: %v", err)
+	return
+}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/cmd/server/main.go` around lines 117 - 122, The log.Fatalf call after storage.NewFromConfig bypasses the deferred pool.Close() (declared earlier) and causes an unclean shutdown; change the failure path to explicitly call pool.Close() before exiting or returning. Locate the storage.NewFromConfig call and replace the log.Fatalf usage: on error, call pool.Close(), log the error with log.Printf or log.Println (including the error), and then return from main (or call os.Exit after pool.Close()) to ensure pool.Close() is always invoked. Use the existing pool.Close() symbol and the storage.NewFromConfig error branch to implement this consistent cleanup.

backend/internal/infrastructure/storage/s3_endpoint_test.go (1)
14-71: Insulate test from host AWS environment to avoid flakiness.

`NewS3Storage` calls `awsconfig.LoadDefaultConfig`, which still reads environment variables and shared config files (`AWS_PROFILE`, `AWS_CONFIG_FILE`, `AWS_SHARED_CREDENTIALS_FILE`) and may attempt IMDS even when static credentials are supplied. A developer or CI with a malformed `AWS_PROFILE` will see this test fail spuriously.

♻️ Suggested isolation

```diff
 func TestNewS3Storage_CustomEndpoint(t *testing.T) {
+	t.Setenv("AWS_PROFILE", "")
+	t.Setenv("AWS_CONFIG_FILE", os.DevNull)
+	t.Setenv("AWS_SHARED_CREDENTIALS_FILE", os.DevNull)
+	t.Setenv("AWS_EC2_METADATA_DISABLED", "true")
 	ctx := context.Background()
```

(add `"os"` to imports)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/internal/infrastructure/storage/s3_endpoint_test.go` around lines 14 - 71, The test TestNewS3Storage_CustomEndpoint is flaky because NewS3Storage calls awsconfig.LoadDefaultConfig which reads AWS env/files and may contact IMDS; to fix, make the test isolate AWS loading by setting AWS_EC2_METADATA_DISABLED=1 and clearing AWS_PROFILE, AWS_CONFIG_FILE, and AWS_SHARED_CREDENTIALS_FILE (save and restore originals with defer) before invoking storage.NewS3Storage, and add "os" to imports; target the TestNewS3Storage_CustomEndpoint function and the NewS3Storage call sites when applying this change.

backend/pkg/config/config.go (1)
99-99: Bool env parsing is strict (`"true"` only).

`STORAGE_S3_USE_PATH_STYLE=1`, `True`, or `yes` will all silently evaluate to `false`. This matches the existing `EmailVerificationRequired` pattern in this file, so it's consistent — but `strconv.ParseBool` accepts the broader set users typically expect (`1/0/t/f/true/false/...`) and would be a small quality-of-life improvement applied to both.

♻️ Optional: introduce a `getEnvBool` helper

```diff
+func getEnvBool(key string, defaultValue bool) bool {
+	if v := os.Getenv(key); v != "" {
+		if b, err := strconv.ParseBool(v); err == nil {
+			return b
+		}
+	}
+	return defaultValue
+}
```

```diff
-StorageS3UsePathStyle: getEnv("STORAGE_S3_USE_PATH_STYLE", "false") == "true",
+StorageS3UsePathStyle: getEnvBool("STORAGE_S3_USE_PATH_STYLE", false),
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/pkg/config/config.go` at line 99, The boolean env parsing for StorageS3UsePathStyle currently compares the raw string to "true" which rejects values like "1", "True", or "yes"; update the initialization for StorageS3UsePathStyle to parse the env using strconv.ParseBool (or add a small helper like getEnvBool) so values accepted by ParseBool (1/0/t/f/true/false, case-insensitive) are handled; modify the existing line that sets StorageS3UsePathStyle (and consider doing the same for EmailVerificationRequired to keep consistency) to read the env with getEnv/getEnvBool and convert via strconv.ParseBool, defaulting to false on parse error.

backend/internal/infrastructure/storage/factory.go (1)
20-49: Optional: normalize `backend` for forgiving config inputs.

`StorageBackend` is matched case-sensitively and without trimming, so `STORAGE_BACKEND=S3`, `"local "`, etc. will hit the `default` arm and fail. Cheap to make this forgiving:

♻️ Proposed normalization

```diff
-backend := cfg.StorageBackend
-if backend == "" {
-	backend = "local"
-}
+backend := strings.ToLower(strings.TrimSpace(cfg.StorageBackend))
+if backend == "" {
+	backend = "local"
+}
```

(Add `"strings"` to the imports.)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/internal/infrastructure/storage/factory.go` around lines 20 - 49, The NewFromConfig function currently matches cfg.StorageBackend exactly and is brittle to casing/whitespace; normalize the backend string at the top of NewFromConfig by importing "strings" and replacing backend := cfg.StorageBackend with backend := strings.ToLower(strings.TrimSpace(cfg.StorageBackend)) (keep the default fallback to "local" if empty) so inputs like "S3", " local ", or "LOCAL" map to the correct switch cases ("s3" / "local") and avoid the default error path.

backend/internal/handlers/payment.go (1)
185-188: Upload error is swallowed — log/wrap for observability.

The underlying `err` from `UploadFormFile` is discarded; operators see only `"upload failed"` with no signal whether it's a misconfigured backend, a network/S3 error, a key validation failure, or something else. At minimum, log it before returning the generic message:

♻️ Proposed change

```diff
 url, err := UploadFormFile(c.Context(), h.storage, file, "receipts")
 if err != nil {
+	// TODO: replace with your structured logger
+	fmt.Printf("UploadPixReceipt: order=%s upload failed: %v\n", orderID, err)
 	return fiber.NewError(fiber.StatusInternalServerError, "upload failed")
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/internal/handlers/payment.go` around lines 185 - 188, The UploadFormFile error is currently discarded; update the handler around the UploadFormFile call to log the returned err (and include it when returning) so operators can see root cause; specifically, in the payment handler replace the plain "upload failed" return with a logged error (using the request/handler logger or standard logger) and return a wrapped error (e.g., include err.Error() or fmt.Errorf("upload failed: %w", err) in the fiber.NewError) referencing UploadFormFile and the surrounding handler function so the original error is preserved for observability.

backend/internal/infrastructure/storage/s3.go (2)
192-205: Comment says idempotent, but error path is unguarded.

The comment "S3 DeleteObject is idempotent" is correct on the S3 side — deleting a missing key returns success — so this is fine in the happy case. However, for S3-compatible backends (MinIO/B2/R2) that occasionally diverge, and to mirror `Download`/`Exists`, consider mapping not-found to `nil` for symmetry and resilience:

♻️ Optional defensive change

```diff
 _, err := s.client.DeleteObject(ctx, &s3.DeleteObjectInput{
 	Bucket: aws.String(s.bucket),
 	Key:    aws.String(key),
 })
 if err != nil {
+	if isS3NotFound(err) {
+		return nil
+	}
 	return fmt.Errorf("storage: s3 delete: %w", err)
 }
 return nil
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/internal/infrastructure/storage/s3.go` around lines 192 - 205, The Delete method in S3Storage currently returns any error from s.client.DeleteObject after validateKey, but to mirror Download/Exists and tolerate S3-compatible backends that may return a not-found error, catch and map "not found" style errors to nil: inside S3Storage.Delete (function name Delete, call s.client.DeleteObject) inspect the returned err for SDK-not-found markers (e.g., aws-sdk sentinel error types/messages used elsewhere in Download/Exists) and return nil for those cases; otherwise wrap and return the error as currently done. Ensure validateKey remains unchanged.
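The optional diff above calls `isS3NotFound`, which the suggestion leaves undefined — a minimal sketch, assuming aws-sdk-go-v2's typed S3 errors; helper name and placement are hypothetical:

```go
import (
	"errors"

	"github.com/aws/aws-sdk-go-v2/service/s3/types"
)

// isS3NotFound reports whether err is an S3 "object does not exist" error.
func isS3NotFound(err error) bool {
	var noKey *types.NoSuchKey
	var notFound *types.NotFound
	return errors.As(err, &noKey) || errors.As(err, &notFound)
}
```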
238-271: Replace the hand-rolled `byteReader` with `bytes.NewReader`.

`*bytes.Reader` from the standard library already implements `io.Reader`/`io.Seeker`/`io.ReaderAt` and is well-tested. The custom implementation here adds maintenance surface for no benefit (and contingent on the `io.ReadAll` change above, may go away entirely).

♻️ Suggested change

```diff
-	Body: bytesReadSeeker(body),
+	Body: bytes.NewReader(body),
```

```diff
-func bytesReadSeeker(b []byte) *byteReader { return &byteReader{b: b} }
-
-type byteReader struct {
-	b   []byte
-	pos int64
-}
-
-func (r *byteReader) Read(p []byte) (int, error) { ... }
-func (r *byteReader) Seek(offset int64, whence int) (int64, error) { ... }
```

(Add `"bytes"` to imports.)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/internal/infrastructure/storage/s3.go` around lines 238 - 271, The custom byteReader and bytesReadSeeker are redundant; replace their usage with the standard library bytes.NewReader which already implements io.Reader, io.Seeker and io.ReaderAt. Remove the byteReader type and bytesReadSeeker function, add "bytes" to imports, and change callers that construct a *byteReader to call bytes.NewReader(yourByteSlice) (search for bytesReadSeeker and byteReader to locate call sites like where bytesReadSeeker(b) is used).

backend/internal/infrastructure/storage/local.go (2)
77-98: Context cancellation is not observed during the body copy.

`ctx.Err()` is only checked on entry; if the request is cancelled mid-`io.Copy`, the upload runs to completion and the temp file is renamed anyway. For large multipart uploads this means cancelled clients still consume disk and IO. Cheap fix is to plumb the context through the copy:

♻️ Suggested change (Go 1.21+)

```diff
-if _, err := io.Copy(tmp, r); err != nil {
+rc := r
+if rd, ok := r.(interface {
+	Read([]byte) (int, error)
+}); ok {
+	rc = readerCtx{ctx: ctx, r: rd}
+}
+if _, err := io.Copy(tmp, rc); err != nil {
 	_ = tmp.Close()
 	cleanup()
 	return "", fmt.Errorf("storage: copy body: %w", err)
 }
```

…or simpler: periodically check `ctx.Err()` if you don't want a wrapper. Defer if uploads are bounded by `BodyLimit` middleware.
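The diff references a `readerCtx` type it never defines — a minimal sketch of what it could look like, checking cancellation on every `Read`; this helper is not part of the PR:

```go
// readerCtx aborts an in-flight copy once ctx is cancelled.
type readerCtx struct {
	ctx context.Context
	r   io.Reader
}

func (rc readerCtx) Read(p []byte) (int, error) {
	if err := rc.ctx.Err(); err != nil {
		return 0, err // surfaces as the io.Copy error
	}
	return rc.r.Read(p)
}
```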
88-113: Resulting file mode is `0o600` after rename.

`os.CreateTemp` creates the temp file with mode `0o600`, and `os.Rename` preserves that mode. If the static handler runs in-process this is fine, but if a sidecar (nginx, a different uid/gid worker) ever serves these files it'll get `EACCES`. Easy to make this explicit and consistent with `MkdirAll(..., 0o755)`:

♻️ Suggested fix

```diff
 if err := os.Rename(tmpName, dst); err != nil {
 	cleanup()
 	return "", fmt.Errorf("storage: rename: %w", err)
 }
+// CreateTemp produced the file with 0o600; loosen so other readers
+// (e.g. a sidecar static server running as a different user) can
+// read it. Adjust to taste / honor umask.
+_ = os.Chmod(dst, 0o644)
 return l.publicURL(key), nil
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/internal/infrastructure/storage/local.go` around lines 88 - 113, Temporary files from os.CreateTemp are created with mode 0o600 and that mode is preserved by os.Rename, which can cause EACCES for other readers; after writing and syncing (after tmp.Sync and before os.Rename) change the temp file mode to 0644 (use tmp.Chmod or os.Chmod(tmpName, 0644)), handle and return any chmod error (cleaning up like the other error cases), then proceed to tmp.Close and os.Rename and return l.publicURL(key).

backend/internal/domain/storage.go (1)
18-24: Doc the URL contract.

`Upload` and `SignedURL` return a `url string`, but the meaning differs between backends: `LocalStorage` returns a relative `/static/...` path that requires the server's static handler, while `S3Storage` returns an absolute presigned URL with a TTL. Callers can't pick the right downstream behavior (cache, pass to client, expire) without reading both implementations. A short doc note on the interface would prevent misuse:

📝 Proposed doc tightening

```diff
 // Storage persists and retrieves binary objects (uploaded files, receipts,
 // covers, etc.). Production impls: infrastructure/storage/local and
 // infrastructure/storage/s3.
+//
+// The url returned by Upload and SignedURL is implementation-defined:
+//   - local: a stable relative path served by the app's static handler.
+//   - s3: a presigned, time-limited absolute URL (default TTL 24h).
+//
+// Callers that need a long-lived reference should persist the key, not
+// the URL.
 type Storage interface {
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/internal/domain/storage.go` around lines 18 - 24, Update the Storage interface docs to clearly define the URL contract for Upload and SignedURL: state that Upload(ctx, key, ...) returns a URL string intended for retrieval via the server's static handler (e.g., a relative path like /static/...) and is a stable, non-expiring access path for backends like LocalStorage, whereas SignedURL(ctx, key, ttl) returns a time-limited, absolute presigned URL (used by S3Storage) that may expire after ttl; reference the Storage interface and implementations LocalStorage and S3Storage in the comment so callers know how to treat, cache, or proxy each returned URL and avoid assuming permanence or format consistency.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@backend/cmd/server/main.go`:
- Around line 408-418: The static file mount is always registered via
app.Static(staticMount, staticRoot) even when STORAGE_BACKEND is S3; update the
code to only register the static handler when the backend is local by checking
cfg.StorageBackend (e.g., if cfg.StorageBackend != "s3" or == "local") before
calling app.Static; keep using the existing staticRoot computation
(cfg.StorageLocalPath or cfg.UploadDir) and staticMount (cfg.StorageURLPrefix or
"/static") but wrap the app.Static call in that conditional so S3 deployments do
not expose cfg.UploadDir at /static.
In `@backend/internal/handlers/upload.go`:
- Around line 29-30: The code is using the client-supplied header value
(file.Header.Get("Content-Type")) when calling storage.Upload, which is unsafe;
instead, read/peek the uploaded bytes (e.g., the first 512 bytes from src or
file) and use http.DetectContentType to determine the content type server-side,
then validate that detected type against a per-prefix allowlist (e.g.,
images-only for content keys under "covers", images+pdf for "receipts"); only
after passing the allowlist check call storage.Upload with the detected content
type (and reject/return an error if it fails validation). Ensure you update
handling of src/file so peeking does not consume the stream (reset or use a
TeeReader/buffer) and reference variables/functions from this file: file,
contentType, key, src, and storage.Upload.
In `@backend/internal/infrastructure/storage/s3.go`:
- Around line 114-136: The Upload method currently returns a presigned URL
(SignedURL(ctx, key, s.ttl)) which expires after s.ttl (~24h) and causes
persisted links to break; change the contract so Upload returns the stored
object key (or both key and an immediate short-lived URL) instead of a
long-lived presigned URL, update callers such as UploadPixReceipt ->
UpdateReceiptURL to persist the key (not the presigned URL), and add/ensure a
separate method (e.g. SignedURL or PublicURL on S3Storage) that generates a
short-lived presigned URL on demand when serving the object; update code paths
that display/download receipts to call SignedURL(key, ttl) at read-time rather
than persisting the presigned URL.
- Around line 116-136: The Upload method in S3Storage currently reads the entire
io.Reader via io.ReadAll and uses a custom bytesReadSeeker, causing large heap
allocations and unnecessary buffering; replace this with a streaming upload
using the AWS SDK manager.Uploader (use s.uploader.Upload when available and
fall back to s.client.PutObject for tests) so the source io.Reader is streamed
rather than fully read into memory; stop persisting the presigned URL from
Upload—persist the object key and call SignedURL(key, ttl) on-demand in the
UI/backend when serving links so URLs don't expire in storage; and remove
bytesReadSeeker usage in favor of bytes.NewReader where a ReadSeeker is truly
required.
---
Nitpick comments:
In `@backend/cmd/server/main.go`:
- Around line 117-122: The log.Fatalf call after storage.NewFromConfig bypasses
the deferred pool.Close() (declared earlier) and causes an unclean shutdown;
change the failure path to explicitly call pool.Close() before exiting or
returning. Locate the storage.NewFromConfig call and replace the log.Fatalf
usage: on error, call pool.Close(), log the error with log.Printf or log.Println
(including the error), and then return from main (or call os.Exit after
pool.Close()) to ensure pool.Close() is always invoked. Use the existing
pool.Close() symbol and the storage.NewFromConfig error branch to implement this
consistent cleanup.
In `@backend/internal/domain/storage.go`:
- Around line 18-24: Update the Storage interface docs to clearly define the URL
contract for Upload and SignedURL: state that Upload(ctx, key, ...) returns a
URL string intended for retrieval via the server's static handler (e.g., a
relative path like /static/...) and is a stable, non-expiring access path for
backends like LocalStorage, whereas SignedURL(ctx, key, ttl) returns a
time-limited, absolute presigned URL (used by S3Storage) that may expire after
ttl; reference the Storage interface and implementations LocalStorage and
S3Storage in the comment so callers know how to treat, cache, or proxy each
returned URL and avoid assuming permanence or format consistency.
In `@backend/internal/handlers/payment.go`:
- Around line 185-188: The UploadFormFile error is currently discarded; update
the handler around the UploadFormFile call to log the returned err (and include
it when returning) so operators can see root cause; specifically, in the payment
handler replace the plain "upload failed" return with a logged error (using the
request/handler logger or standard logger) and return a wrapped error (e.g.,
include err.Error() or fmt.Errorf("upload failed: %w", err) in the
fiber.NewError) referencing UploadFormFile and the surrounding handler function
so the original error is preserved for observability.
In `@backend/internal/handlers/upload.go`:
- Around line 16-21: Validate the incoming prefix at the top of UploadFormFile:
ensure it is non-empty, does not start with a slash, and does not contain path
traversal segments like ".." (also reject backslashes if relevant); if any check
fails return a clear client error instead of continuing (so callers get a
4xx-like error), and add "strings" to imports for the checks; refer to function
UploadFormFile and the generated storage key "<prefix>/<uuid><ext>" when making
the guard.
In `@backend/internal/infrastructure/storage/factory_test.go`:
- Around line 22-36: Update the two tests TestFactory_S3MissingBucketFails and
TestFactory_UnknownBackend to assert the returned error matches the expected
sentinel instead of only checking for non-nil; after calling
storage.NewFromConfig, replace the err != nil checks with an errors.Is(err,
domain.ErrInvalidInput) (or the correct sentinel exported by your domain
package) assertion so the tests enforce the factory's error contract; reference
storage.NewFromConfig and domain.ErrInvalidInput when updating the assertions.
In `@backend/internal/infrastructure/storage/factory.go`:
- Around line 20-49: The NewFromConfig function currently matches
cfg.StorageBackend exactly and is brittle to casing/whitespace; normalize the
backend string at the top of NewFromConfig by importing "strings" and replacing
backend := cfg.StorageBackend with backend :=
strings.ToLower(strings.TrimSpace(cfg.StorageBackend)) (keep the default
fallback to "local" if empty) so inputs like "S3", " local ", or "LOCAL" map to
the correct switch cases ("s3" / "local") and avoid the default error path.
In `@backend/internal/infrastructure/storage/local.go`:
- Around line 88-113: Temporary files from os.CreateTemp are created with mode
0o600 and that mode is preserved by os.Rename, which can cause EACCES for other
readers; after writing and syncing (after tmp.Sync and before os.Rename) change
the temp file mode to 0644 (use tmp.Chmod or os.Chmod(tmpName, 0644)), handle
and return any chmod error (cleaning up like the other error cases), then
proceed to tmp.Close and os.Rename and return l.publicURL(key).
In `@backend/internal/infrastructure/storage/s3_endpoint_test.go`:
- Around line 14-71: The test TestNewS3Storage_CustomEndpoint is flaky because
NewS3Storage calls awsconfig.LoadDefaultConfig which reads AWS env/files and may
contact IMDS; to fix, make the test isolate AWS loading by setting
AWS_EC2_METADATA_DISABLED=1 and clearing AWS_PROFILE, AWS_CONFIG_FILE, and
AWS_SHARED_CREDENTIALS_FILE (save and restore originals with defer) before
invoking storage.NewS3Storage, and add "os" to imports; target the
TestNewS3Storage_CustomEndpoint function and the NewS3Storage call sites when
applying this change.
In `@backend/internal/infrastructure/storage/s3.go`:
- Around line 192-205: The Delete method in S3Storage currently returns any
error from s.client.DeleteObject after validateKey, but to mirror
Download/Exists and tolerate S3-compatible backends that may return a not-found
error, catch and map "not found" style errors to nil: inside S3Storage.Delete
(function name Delete, call s.client.DeleteObject) inspect the returned err for
SDK-not-found markers (e.g., aws-sdk sentinel error types/messages used
elsewhere in Download/Exists) and return nil for those cases; otherwise wrap and
return the error as currently done. Ensure validateKey remains unchanged.
- Around line 238-271: The custom byteReader and bytesReadSeeker are redundant;
replace their usage with the standard library bytes.NewReader which already
implements io.Reader, io.Seeker and io.ReaderAt. Remove the byteReader type and
bytesReadSeeker function, add "bytes" to imports, and change callers that
construct a *byteReader to call bytes.NewReader(yourByteSlice) (search for
bytesReadSeeker and byteReader to locate call sites like where
bytesReadSeeker(b) is used).
In `@backend/pkg/config/config.go`:
- Line 99: The boolean env parsing for StorageS3UsePathStyle currently compares
the raw string to "true" which rejects values like "1", "True", or "yes"; update
the initialization for StorageS3UsePathStyle to parse the env using
strconv.ParseBool (or add a small helper like getEnvBool) so values accepted by
ParseBool (1/0/t/f/true/false, case-insensitive) are handled; modify the
existing line that sets StorageS3UsePathStyle (and consider doing the same for
EmailVerificationRequired to keep consistency) to read the env with
getEnv/getEnvBool and convert via strconv.ParseBool, defaulting to false on
parse error.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: ce748765-a2ea-4c99-b8d7-df6e39b0a736
⛔ Files ignored due to path filters (1)
`backend/go.sum` is excluded by `!**/*.sum`
📒 Files selected for processing (14)
- backend/cmd/server/main.go
- backend/go.mod
- backend/internal/domain/storage.go
- backend/internal/handlers/blog.go
- backend/internal/handlers/blog_test.go
- backend/internal/handlers/payment.go
- backend/internal/handlers/upload.go
- backend/internal/infrastructure/storage/factory.go
- backend/internal/infrastructure/storage/factory_test.go
- backend/internal/infrastructure/storage/local.go
- backend/internal/infrastructure/storage/local_test.go
- backend/internal/infrastructure/storage/s3.go
- backend/internal/infrastructure/storage/s3_endpoint_test.go
- backend/pkg/config/config.go
```go
contentType := file.Header.Get("Content-Type")
return storage.Upload(ctx, key, src, contentType)
```
Don't trust client-supplied Content-Type.
file.Header.Get("Content-Type") is the value the client put in the multipart part header — it's user-controlled. For the local backend, the file is later served from /static and a forged Content-Type (e.g. text/html named foo.png) leads to stored XSS via browser content sniffing or direct rendering. For S3, the stored object metadata is similarly attacker-controlled and replayed to browsers via the presigned GET.
Recommend sniffing the content type server-side (peek first 512 bytes via http.DetectContentType) and/or enforcing a per-call-site allowlist (covers → images only; receipts → images + pdf). At minimum, validate against an allowlist before calling storage.Upload.
♻️ Sketch of a safer flow
```diff
-contentType := file.Header.Get("Content-Type")
-return storage.Upload(ctx, key, src, contentType)
+// Sniff first 512 bytes to determine real content type.
+head := make([]byte, 512)
+n, _ := io.ReadFull(src, head)
+detected := http.DetectContentType(head[:n])
+// Re-assemble the reader so storage receives the full file.
+full := io.MultiReader(bytes.NewReader(head[:n]), src)
+return storage.Upload(ctx, key, full, detected)
```

(plus an allowlist check on `detected` per prefix)
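A hedged sketch of that per-prefix allowlist — the accepted types are illustrative, not taken from the PR:

```go
// allowedTypes maps upload prefix -> acceptable sniffed content types.
// Contents are an assumption: covers are image-only, receipts allow PDF.
var allowedTypes = map[string]map[string]bool{
	"covers":   {"image/png": true, "image/jpeg": true, "image/webp": true},
	"receipts": {"image/png": true, "image/jpeg": true, "application/pdf": true},
}

func contentTypeAllowed(prefix, detected string) bool {
	return allowedTypes[prefix][detected]
}
```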
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@backend/internal/handlers/upload.go` around lines 29 - 30, The code is using
the client-supplied header value (file.Header.Get("Content-Type")) when calling
storage.Upload, which is unsafe; instead, read/peek the uploaded bytes (e.g.,
the first 512 bytes from src or file) and use http.DetectContentType to
determine the content type server-side, then validate that detected type against
a per-prefix allowlist (e.g., images-only for content keys under "covers",
images+pdf for "receipts"); only after passing the allowlist check call
storage.Upload with the detected content type (and reject/return an error if it
fails validation). Ensure you update handling of src/file so peeking does not
consume the stream (reset or use a TeeReader/buffer) and reference
variables/functions from this file: file, contentType, key, src, and
storage.Upload.
```go
// Upload PUTs body under key and returns a presigned GET URL valid
// for s.ttl (24h by default).
func (s *S3Storage) Upload(ctx context.Context, key string, r io.Reader, contentType string) (string, error) {
	if err := validateKey(key); err != nil {
		return "", err
	}
	body, err := io.ReadAll(r)
	if err != nil {
		return "", fmt.Errorf("storage: read body: %w", err)
	}
	input := &s3.PutObjectInput{
		Bucket: aws.String(s.bucket),
		Key:    aws.String(key),
		Body:   bytesReadSeeker(body),
	}
	if contentType != "" {
		input.ContentType = aws.String(contentType)
	}
	if _, err := s.client.PutObject(ctx, input); err != nil {
		return "", fmt.Errorf("storage: s3 put: %w", err)
	}
	return s.SignedURL(ctx, key, s.ttl)
}
```
Upload returns a 24h-expiring URL — risky if persisted.
Upload returns SignedURL(ctx, key, s.ttl) with s.ttl = 24h. Looking at how callers use this (e.g. payment.go UpdateReceiptURL(..., url) from UploadPixReceipt), the URL is persisted and later shown to admins — but it stops working after 24h, leaving stored receipts effectively dead links. Two safer designs:
- Persist the key, not the URL. Have `Upload` return an opaque key (or both key and URL), and have read-paths call `SignedURL` lazily.
- Return a non-expiring identifier from `Upload` (e.g. `s3://bucket/key` or just the key) and add a separate `PublicURL`/`SignedURL` method for serving.
This is a cross-cutting interface decision rather than a single-line fix, and worth resolving before this lands since both backends and all callers will need to agree on the contract.
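One hedged shape for that contract — the interface and read-path below are a sketch, not the PR's decided design:

```go
// Sketch: Upload persists the object and returns its durable key;
// time-limited URLs are minted only at read time.
type Storage interface {
	Upload(ctx context.Context, key string, r io.Reader, contentType string) (string, error) // returns the key
	SignedURL(ctx context.Context, key string, ttl time.Duration) (string, error)
}

// Read path (e.g. the admin receipt view), assuming the receipt row
// persists the storage key rather than a presigned URL.
func receiptLink(ctx context.Context, store Storage, storageKey string) (string, error) {
	return store.SignedURL(ctx, storageKey, 15*time.Minute)
}
```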
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@backend/internal/infrastructure/storage/s3.go` around lines 114 - 136, The
Upload method currently returns a presigned URL (SignedURL(ctx, key, s.ttl))
which expires after s.ttl (~24h) and causes persisted links to break; change the
contract so Upload returns the stored object key (or both key and an immediate
short-lived URL) instead of a long-lived presigned URL, update callers such as
UploadPixReceipt -> UpdateReceiptURL to persist the key (not the presigned URL),
and add/ensure a separate method (e.g. SignedURL or PublicURL on S3Storage) that
generates a short-lived presigned URL on demand when serving the object; update
code paths that display/download receipts to call SignedURL(key, ttl) at
read-time rather than persisting the presigned URL.
```go
func (s *S3Storage) Upload(ctx context.Context, key string, r io.Reader, contentType string) (string, error) {
	if err := validateKey(key); err != nil {
		return "", err
	}
	body, err := io.ReadAll(r)
	if err != nil {
		return "", fmt.Errorf("storage: read body: %w", err)
	}
	input := &s3.PutObjectInput{
		Bucket: aws.String(s.bucket),
		Key:    aws.String(key),
		Body:   bytesReadSeeker(body),
	}
	if contentType != "" {
		input.ContentType = aws.String(contentType)
	}
	if _, err := s.client.PutObject(ctx, input); err != nil {
		return "", fmt.Errorf("storage: s3 put: %w", err)
	}
	return s.SignedURL(ctx, key, s.ttl)
}
```
🧩 Analysis chain
🌐 Web query:
aws-sdk-go-v2 feature/s3/manager Uploader streaming Upload usage
💡 Result:
The aws-sdk-go-v2 `feature/s3/manager` Uploader supports streaming uploads using `io.Reader` for the `Body` field in `s3.PutObjectInput`. It handles non-seekable streaming readers by buffering data into parts (default PartSize 5MB, minimum 5MB) sequentially from the stream, enabling multipart uploads without requiring the full content in memory upfront. For optimal performance with seekable readers (`io.ReadSeeker`/`io.ReaderAt`), it avoids full buffering and computes part counts accurately. Key usage:

1. Import required packages:

```go
import (
	"context"
	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/feature/s3/manager"
	"github.com/aws/aws-sdk-go-v2/service/s3"
)
```

2. Create S3 client and Uploader:

```go
cfg, err := config.LoadDefaultConfig(context.TODO())
s3Client := s3.NewFromConfig(cfg)
uploader := manager.NewUploader(s3Client)
// Customize with func(u *manager.Uploader) { u.PartSize = 10 * 1024 * 1024 }
```

3. Streaming upload example (non-seekable `io.Reader`):

```go
stream := generateStream // e.g., your streaming source implementing io.Reader
result, err := uploader.Upload(context.TODO(), &s3.PutObjectInput{
	Bucket: aws.String("my-bucket"),
	Key:    aws.String("my-key"),
	Body:   stream, // streaming io.Reader
})
```

The Uploader reads from `Body` in chunks up to PartSize, buffers each part (using a pool), and uploads parts concurrently in multipart mode if >1 part. For pure streaming without known size, it uses `io.LimitReader` for the first part, then fills buffers sequentially. Note: memory usage is ~PartSize × Concurrency (default Concurrency=5) due to buffering. Direct `s3.PutObject` requires a seekable Body or manual ContentLength + unsigned payload middleware; the Uploader is recommended for streaming. Note: the newer `feature/s3/transfermanager` (GA Jan 2026) replaces `manager` with improved streaming (`UploadObjectInput.Body io.Reader`), but the query specifies `manager`.
Citations:
- 1: https://github.com/aws/aws-sdk-go-v2/blob/main/feature/s3/manager/upload.go
- 2: https://docs.aws.amazon.com/sdk-for-go/v2/developer-guide/sdk-utilities-s3.html
- 3: https://pkg.go.dev/github.com/aws/aws-sdk-go-v2/feature/s3/manager
- 4: Upload manager: When using `io.Reader` without seeking support, the uploader over-allocates memory leading to high usage — aws/aws-sdk-go-v2#2694
🏁 Script executed:
```sh
# Locate the s3.go file
find . -name "s3.go" -path "*/storage/*" -type f
```

Repository: afa7789/blueprint
Length of output: 109
🏁 Script executed:
```sh
# Examine the full s3.go file structure and size
wc -l backend/internal/infrastructure/storage/s3.go
```

Repository: afa7789/blueprint
Length of output: 111
🏁 Script executed:
```sh
# Read the s3.go file to understand the implementation
cat -n backend/internal/infrastructure/storage/s3.go
```

Repository: afa7789/blueprint
Length of output: 9790
🏁 Script executed:
```sh
# Search for BodyLimit references in the codebase
rg "BodyLimit" -A 2 -B 2
```

Repository: afa7789/blueprint
Length of output: 43
🏁 Script executed:
```sh
# Search for bytesReadSeeker implementation
rg "bytesReadSeeker" -A 5 -B 2
```

Repository: afa7789/blueprint
Length of output: 1153
🏁 Script executed:
```sh
# Search for Upload method calls and usage context
rg "\.Upload\(" --type go -A 2 -B 2
```

Repository: afa7789/blueprint
Length of output: 2176
🏁 Script executed:
```sh
# Look for BodyLimit or file size constraints
rg -i "bodyLimit|sizelimit|maxsize|max.*size" --type go -A 2 -B 2
```

Repository: afa7789/blueprint
Length of output: 1322
🏁 Script executed:
```sh
# Check payment.go for receipt_url handling mentioned in the review
find . -name "payment.go" -type f | head -5
```

Repository: afa7789/blueprint
Length of output: 100
🏁 Script executed:
```sh
# Look for any middleware or handlers that might set request size limits
rg -i "maxbodysize|bodylimit|request.*limit" --type go -A 3 -B 3
```

Repository: afa7789/blueprint
Length of output: 1136
🏁 Script executed:
```sh
# Examine payment.go to see if receipt_url is persisted and how it's used
cat -n backend/internal/handlers/payment.go | head -100
```

Repository: afa7789/blueprint
Length of output: 3754
🏁 Script executed:
```sh
# Search for receipt_url usage in the codebase
rg "receipt_url" -A 3 -B 3
```

Repository: afa7789/blueprint
Length of output: 6808
🏁 Script executed:
```sh
# Check database schema or payment entity
find . -name "*.go" -path "*payment*" -o -name "*.go" -path "*entity*" | xargs grep -l "receipt" 2>/dev/null | head -5
```

Repository: afa7789/blueprint
Length of output: 136
🏁 Script executed:
```sh
# Check the full payment handler to see where receipt is uploaded
cat -n backend/internal/handlers/payment.go | tail -100
```

Repository: afa7789/blueprint
Length of output: 3676
🏁 Script executed:
```sh
# Look for where UpdateReceiptURL is called
rg "UpdateReceiptURL" -A 5 -B 5
```

Repository: afa7789/blueprint
Length of output: 3303
🏁 Script executed:
```sh
# One final check: confirm that the 10MB limit is enforced before Upload is called
cat -n backend/cmd/server/main.go | grep -A 5 -B 5 "RequestSizeLimit"
```

Repository: afa7789/blueprint
Length of output: 475
io.ReadAll(r) creates inefficient per-upload heap allocation — use the streaming uploader.
Upload reads the full body into a []byte before PUT (line 120). Although the request size limit (default 10 MB) prevents unbounded OOM, this is still O(N) heap per concurrent upload. With 5 concurrent 10 MB uploads, you'll hold ~50 MB of heap that could be 5–10 MB with streaming. The SDK's feature/s3/manager.Uploader streams the source io.Reader, buffers data into 5 MB parts sequentially, and parallelizes uploads without buffering the entire body. This also fixes a secondary issue: the presigned URL (24 h TTL) returned from Upload is persisted in the database as receipt_url and displayed in the admin UI. If admin review exceeds 24 hours, the URL expires and the UI link breaks. Storing the object key and calling SignedURL on-demand would solve this.
Also replace the custom bytesReadSeeker (lines 238–271) with bytes.NewReader, which already implements io.ReadSeeker.
Suggested pattern for manager.Uploader (if interface allows)
Since S3Client is a narrow interface for testing, the production path could instantiate manager.Uploader from the concrete *s3.Client and route through it for streaming uploads, while keeping PutObject as a fallback in tests.
```go
type S3Storage struct {
	bucket    string
	client    S3Client
	uploader  *manager.Uploader // nil in tests
	presigner S3Presigner
	ttl       time.Duration
}
```

Then in `Upload`:
```go
if s.uploader != nil {
	if _, err := s.uploader.Upload(ctx, input); err != nil {
		return "", fmt.Errorf("storage: s3 put: %w", err)
	}
} else {
	if _, err := s.client.PutObject(ctx, input); err != nil {
		return "", fmt.Errorf("storage: s3 put: %w", err)
	}
}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@backend/internal/infrastructure/storage/s3.go` around lines 116 - 136, The
Upload method in S3Storage currently reads the entire io.Reader via io.ReadAll
and uses a custom bytesReadSeeker, causing large heap allocations and
unnecessary buffering; replace this with a streaming upload using the AWS SDK
manager.Uploader (use s.uploader.Upload when available and fall back to
s.client.PutObject for tests) so the source io.Reader is streamed rather than
fully read into memory; stop persisting the presigned URL from Upload—persist
the object key and call SignedURL(key, ttl) on-demand in the UI/backend when
serving links so URLs don't expire in storage; and remove bytesReadSeeker usage
in favor of bytes.NewReader where a ReadSeeker is truly required.
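For completeness, a hedged sketch of where the production uploader could be constructed — it assumes `NewS3Storage` still has the concrete `*s3.Client` in hand before narrowing it to the `S3Client` interface; the helper name is hypothetical:

```go
import "github.com/aws/aws-sdk-go-v2/feature/s3/manager"

// wireUploader attaches a streaming uploader to the storage value.
// Tests skip this and keep the nil uploader, exercising the
// PutObject fallback path instead.
func wireUploader(s *S3Storage, raw *s3.Client) {
	s.uploader = manager.NewUploader(raw, func(u *manager.Uploader) {
		u.PartSize = 5 * 1024 * 1024 // 5 MB parts — the SDK minimum
	})
}
```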
- gofmt s3.go (alignment of Bucket()/Client() one-liners)
- close pool before log.Fatalf in storage init (gocritic exitAfterDefer)
- log resolved backend (was logging cfg.StorageBackend, which is empty when defaults are used)
- only register app.Static when backend=local — for S3 the upload dir should not be exposed over HTTP
- factory: wrap domain.ErrInvalidInput when STORAGE_BACKEND=s3 but bucket is empty, matching NewS3Storage error classification

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Applied:

- domain/storage.go: document URL contract — Local returns stable static path, S3 returns 24h-expiring presigned URL (callers needing durable links should persist the key and call SignedURL on read)
- factory.go: normalize backend with strings.ToLower(TrimSpace) so "S3", " local " map correctly
- factory_test.go: assert errors.Is(domain.ErrInvalidInput) on s3-empty-bucket; add normalization regression test
- s3.go: drop custom byteReader, use bytes.NewReader (stdlib already satisfies io.ReadSeeker)
- s3.go Delete: tolerate NotFound (R2/MinIO have historically returned NoSuchKey on DELETE; AWS S3 returns success — normalize to idempotent)
- s3_endpoint_test.go: isolate AWS env (disable IMDS, clear AWS_PROFILE/AWS_CONFIG_FILE/AWS_SHARED_CREDENTIALS_FILE) to prevent CI flakiness when running outside AWS
- config.go: parse STORAGE_S3_USE_PATH_STYLE via strconv.ParseBool so "1", "True", "yes" work — added getEnvBool helper
- payment.go: include underlying upload error in HTTP response for observability
- local.go: chmod 0o644 the temp file before rename (CreateTemp uses 0o600 which blocks other users from reading via static handler)

Pushed back (out of scope or astral parity):

- S3 Upload contract (return key vs presigned URL): real concern for durable links, but matches astral by design — documented in interface docstring; tracked as follow-up
- io.ReadAll → s3manager.Uploader: deliberate astral parity
- Content-Type sniffing + allowlist: pre-existing handler behavior, not part of storage parity
- UploadFormFile prefix validation: callers pass hardcoded literals

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
gocritic exitAfterDefer was still flagging the storage init block: log.Fatalf calls os.Exit(1), which skips deferred pool.Close(). The explicit pool.Close() before Fatalf I added in 82e0f5a does close the pool, but the linter still flags the pattern (and Fatalf in main is a smell anyway when there's already defer cleanup).

Match the pattern already used for migration failures (line 46): log.Printf + return — defer pool.Close() handles the cleanup.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Actionable comments posted: 1
♻️ Duplicate comments (1)
backend/internal/infrastructure/storage/s3.go (1)
115-137: ⚠️ Potential issue | 🟠 Major

Two past concerns on `Upload` remain unresolved.

Positive: the custom `bytesReadSeeker` is gone and this now uses `bytes.NewReader` — good cleanup.

Still outstanding from prior review:
1. Full-body buffering (line 121). `io.ReadAll(r)` holds the entire object in the Go heap before `PutObject`. With concurrent uploads near the 10 MB body limit this scales linearly. Switching to `feature/s3/manager.Uploader` streams the `io.Reader` in 5 MB parts and does not require a `ReadSeeker`. Keep `PutObject` as the fallback path in tests via the injectable `S3Client`.

2. Upload returns a 24h-expiring URL (line 136). `SignedURL(ctx, key, s.ttl)` yields a presigned URL that payment.go's `UploadPixReceipt` persists as `receipt_url` and admins view later. Past the TTL (24h), stored receipts become dead links. Prefer returning the object key from `Upload` (or both key and a short-lived URL) and let read-paths call `SignedURL` lazily. This is a cross-cutting contract change that needs coordination between `domain.Storage`, `LocalStorage.Upload` (which currently returns a durable `/static/<key>` path), and all callers.

Since `LocalStorage.Upload` already returns a durable identifier and `S3Storage.Upload` returns an expiring one, the `domain.Storage.Upload` contract is effectively inconsistent across backends — worth pinning down before more callers depend on it.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/internal/infrastructure/storage/s3.go` around lines 115 - 137, The Upload method on S3Storage currently buffers the entire body with io.ReadAll and returns a short-lived presigned URL; change Upload(ctx, key, r, contentType) to stream multipart uploads using AWS S3 manager.Uploader (s3manager.Uploader) instead of io.ReadAll to avoid heap bloat, keeping the existing s.client.PutObject path as an injectable fallback on the S3Client interface for tests; also change the contract so S3Storage.Upload returns the durable object key (or both key and an optional short-lived URL) instead of always returning SignedURL(ctx, key, s.ttl), and update domain.Storage.Upload and LocalStorage.Upload to the same durable-identifier contract and adjust callers (e.g., payment.go UploadPixReceipt) to call S3Storage.SignedURL lazily when a time-limited URL is needed.
🧹 Nitpick comments (3)
backend/internal/infrastructure/storage/local.go (1)
44-69: Thorough key validation; consider an optional post-resolve containment check.

`validateKey` covers the common traversal shapes well: the cleaned path check (lines 48–52), the raw segment scan that normalizes `\` → `/` (lines 53–57), and the absolute-path guard (line 58). This is solid input validation.

Optional defense-in-depth: `resolve` could additionally verify the joined path is under `root` (e.g., via `filepath.Rel(root, abs)` and rejecting results containing `..`). This would protect against any future edge case where a surprising input slips through the string checks, or where `root` itself is a symlink whose target changes. Not a bug as-is — purely belt-and-suspenders.

♻️ Optional hardening

```diff
 func (l *LocalStorage) resolve(key string) (string, error) {
 	if err := validateKey(key); err != nil {
 		return "", err
 	}
-	return filepath.Join(l.root, filepath.FromSlash(key)), nil
+	full := filepath.Join(l.root, filepath.FromSlash(key))
+	absRoot, err := filepath.Abs(l.root)
+	if err != nil {
+		return "", fmt.Errorf("storage: resolve root: %w", err)
+	}
+	absFull, err := filepath.Abs(full)
+	if err != nil {
+		return "", fmt.Errorf("storage: resolve key: %w", err)
+	}
+	rel, err := filepath.Rel(absRoot, absFull)
+	if err != nil || rel == ".." || strings.HasPrefix(rel, ".."+string(filepath.Separator)) {
+		return "", fmt.Errorf("storage: traversal key %q: %w", key, domain.ErrInvalidInput)
+	}
+	return full, nil
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/internal/infrastructure/storage/local.go` around lines 44 - 69, Add a belt-and-suspenders containment check in LocalStorage.resolve: after validateKey succeeds and you compute the joined path (using filepath.Join(l.root, filepath.FromSlash(key))), resolve absolute paths (and optionally eval symlinks) for both l.root and the joined path and use filepath.Rel to compute a relative path; if the relative path begins with ".." or contains ".." then return an ErrInvalidInput-like error to reject keys that escape the root. Reference: functions validateKey and resolve; use filepath.Abs/filepath.EvalSymlinks and filepath.Rel to perform the containment check.

backend/pkg/config/config.go (1)
91-99: Backwards-compatible fallbacks look correct.
The `STORAGE_BACKEND → STORAGE_TYPE → "local"` and `STORAGE_S3_* → AWS_*` chains give a smooth migration path. The factory's `ToLower`/`TrimSpace` normalization pairs well with this raw value capture.

Minor (optional): the default region `"us-east-1"` is correct for AWS S3 but may surprise Cloudflare R2 users (R2 expects `"auto"`). Consider documenting this in the README or `.env.example` alongside `STORAGE_S3_ENDPOINT`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/pkg/config/config.go` around lines 91 - 99, The config uses StorageS3Region (populated via getEnv("STORAGE_S3_REGION", getEnv("AWS_REGION", "us-east-1"))) which defaults to "us-east-1" and can surprise Cloudflare R2 users; update the README and .env.example to document that StorageS3Region defaults to "us-east-1", show the Cloudflare R2 recommended value ("auto"), and add a short note linking StorageS3Endpoint and StorageS3Region (StorageS3Endpoint for R2 + StorageS3Region="auto") so users know to override STORAGE_S3_REGION and/or set STORAGE_S3_ENDPOINT when using R2.
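A hedged sketch of what that `.env.example` note could look like; `STORAGE_S3_BUCKET` and the comments are assumptions (the review only names the `STORAGE_S3_*` prefix, `STORAGE_S3_REGION`, and `STORAGE_S3_ENDPOINT`):

```dotenv
# S3-compatible storage (Cloudflare R2 shown; plain AWS S3 can omit the endpoint)
STORAGE_BACKEND=s3
STORAGE_S3_BUCKET=my-bucket    # assumed variable name
STORAGE_S3_REGION=auto         # R2 expects "auto"; the AWS default is us-east-1
STORAGE_S3_ENDPOINT=https://<account-id>.r2.cloudflarestorage.com
```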
backend/internal/infrastructure/storage/s3.go (1)

104-110: Optional: guard `NewS3StorageWithClient` inputs.

This constructor skips the bucket-required check that `NewS3Storage` enforces at line 67, and doesn't nil-check `client`/`presigner`. It's test-only scaffolding today, but a nil presigner will panic inside `Upload`/`SignedURL`. A minimal guard would prevent accidental misuse.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/internal/infrastructure/storage/s3.go` around lines 104 - 110, NewS3StorageWithClient currently omits the input guards present in NewS3Storage and can produce nil-pointer panics in Upload/SignedURL; add checks at the start of NewS3StorageWithClient to validate bucket != "" and that client and presigner are non-nil, and keep the ttl defaulting to defaultPresignTTL when ttl <= 0; if any check fails, panic with a clear message referencing NewS3StorageWithClient, S3Storage, client, presigner, and bucket so tests/consumers fail fast rather than causing panics later in Upload/SignedURL.
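A minimal sketch of those guards, reusing the PR's own names (`S3Client`, the presigner, `defaultPresignTTL`) whose exact shapes are assumed here:

```go
// Guarded test constructor: fail loudly at construction instead of
// panicking later inside Upload/SignedURL. Sketch only; parameter and
// field shapes follow the review's description, not the PR verbatim.
func NewS3StorageWithClient(client S3Client, presigner Presigner, bucket string, ttl time.Duration) *S3Storage {
	if bucket == "" {
		panic("storage: NewS3StorageWithClient: bucket must be non-empty")
	}
	if client == nil || presigner == nil {
		panic("storage: NewS3StorageWithClient: client and presigner must be non-nil")
	}
	if ttl <= 0 {
		ttl = defaultPresignTTL
	}
	return &S3Storage{client: client, presigner: presigner, bucket: bucket, ttl: ttl}
}
```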
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@backend/internal/handlers/payment.go`:
- Around line 185-188: In the UploadFormFile error branch (the call to
UploadFormFile with h.storage and file), stop returning fmt.Sprintf("upload
failed: %v", err) to the client; instead log the full err server-side (use your
handler logger, e.g., h.logger or equivalent) including context (file, receipts,
etc.), and return a generic fiber.NewError(fiber.StatusInternalServerError,
"upload failed") to the caller so internal/infrastructure details from
UploadFormFile/h.storage are not leaked.
---
Duplicate comments:
In `@backend/internal/infrastructure/storage/s3.go`:
- Around line 115-137: The Upload method on S3Storage currently buffers the
entire body with io.ReadAll and returns a short-lived presigned URL; change
Upload(ctx, key, r, contentType) to stream multipart uploads using AWS S3
manager.Uploader (s3manager.Uploader) instead of io.ReadAll to avoid heap bloat,
keeping the existing s.client.PutObject path as an injectable fallback on the
S3Client interface for tests; also change the contract so S3Storage.Upload
returns the durable object key (or both key and an optional short-lived URL)
instead of always returning SignedURL(ctx, key, s.ttl), and update
domain.Storage.Upload and LocalStorage.Upload to the same durable-identifier
contract and adjust callers (e.g., payment.go UploadPixReceipt) to call
S3Storage.SignedURL lazily when a time-limited URL is needed.
---
Nitpick comments:
In `@backend/internal/infrastructure/storage/local.go`:
- Around line 44-69: Add a belt-and-suspenders containment check in
LocalStorage.resolve: after validateKey succeeds and you compute the joined path
(using filepath.Join(l.root, filepath.FromSlash(key))), resolve absolute paths
(and optionally eval symlinks) for both l.root and the joined path and use
filepath.Rel to compute a relative path; if the relative path begins with ".."
or contains ".." then return an ErrInvalidInput-like error to reject keys that
escape the root. Reference: functions validateKey and resolve; use
filepath.Abs/filepath.EvalSymlinks and filepath.Rel to perform the containment
check.
In `@backend/internal/infrastructure/storage/s3.go`:
- Around line 104-110: NewS3StorageWithClient currently omits the input guards
present in NewS3Storage and can produce nil-pointer panics in Upload/SignedURL;
add checks at the start of NewS3StorageWithClient to validate bucket != "" and
that client and presigner are non-nil, and keep the ttl defaulting to
defaultPresignTTL when ttl <= 0; if any check fails, panic with a clear message
referencing NewS3StorageWithClient, S3Storage, client, presigner, and bucket so
tests/consumers fail fast rather than causing panics later in Upload/SignedURL.
In `@backend/pkg/config/config.go`:
- Around line 91-99: The config uses StorageS3Region (populated via
getEnv("STORAGE_S3_REGION", getEnv("AWS_REGION", "us-east-1"))) which defaults
to "us-east-1" and can surprise Cloudflare R2 users; update the README and
.env.example to document that StorageS3Region defaults to "us-east-1", show the
Cloudflare R2 recommended value ("auto"), and add a short note linking
StorageS3Endpoint and StorageS3Region (StorageS3Endpoint for R2 +
StorageS3Region="auto") so users know to override STORAGE_S3_REGION and/or set
STORAGE_S3_ENDPOINT when using R2.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 65bf4962-9dbb-401a-bdf2-3921ea0703e5
📒 Files selected for processing (9)
- backend/cmd/server/main.go
- backend/internal/domain/storage.go
- backend/internal/handlers/payment.go
- backend/internal/infrastructure/storage/factory.go
- backend/internal/infrastructure/storage/factory_test.go
- backend/internal/infrastructure/storage/local.go
- backend/internal/infrastructure/storage/s3.go
- backend/internal/infrastructure/storage/s3_endpoint_test.go
- backend/pkg/config/config.go
✅ Files skipped from review due to trivial changes (1)
- backend/internal/domain/storage.go
🚧 Files skipped from review as they are similar to previous changes (4)
- backend/internal/infrastructure/storage/factory_test.go
- backend/internal/infrastructure/storage/s3_endpoint_test.go
- backend/internal/infrastructure/storage/factory.go
- backend/cmd/server/main.go
Applied:

- payment.go + blog.go: log full upload error server-side, return generic
  "upload failed" to client (avoid leaking internal/storage details from
  UploadFormFile)
- s3.go NewS3StorageWithClient: panic on nil client/presigner/empty bucket
  so misuse fails at construction, not later in Upload/SignedURL
- local.go resolve: defense-in-depth containment via filepath.Abs +
  filepath.Rel — rejects escapes that survive textual key validation
  (e.g., symlinks under root pointing outside)
- README.md: document new STORAGE_* env vars and Cloudflare R2 region
  caveat (R2 wants STORAGE_S3_REGION=auto, AWS default is us-east-1)

Pushed back (already declined in earlier rounds):

- S3 Upload contract change (return key vs presigned URL) +
  s3manager.Uploader streaming: deliberate asymmetry, documented in
  domain.Storage docstring; tracked as follow-up PR

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
…test coverage
T1 — local.go symlink-aware containment
Previous Abs/Rel check was purely textual (Abs doesn't resolve
symlinks). Replaced with a checkContains helper that uses
filepath.EvalSymlinks on the first existing ancestor of dst and on
root, then verifies via filepath.Rel that the resolved path is
inside the resolved root. Critical ordering: Upload runs MkdirAll
before checkContains so the parent dir exists and is resolved
(otherwise a planted symlink at the parent slips past). Wired
checkContains into Download / Exists / Delete as well.
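For reference, a hedged sketch of that helper (the `domain.ErrInvalidInput` sentinel is the PR's, its import elided; the exact walk-up logic here is illustrative, not the commit verbatim):

```go
import (
	"fmt"
	"path/filepath"
	"strings"
)

// checkContains resolves symlinks on root and on the nearest existing
// ancestor of dst, then requires the resolved dst to live under the
// resolved root.
func checkContains(root, dst string) error {
	resolvedRoot, err := filepath.EvalSymlinks(root)
	if err != nil {
		return err
	}
	// dst itself may not exist yet (Upload creates it after this check),
	// so walk up to the nearest existing ancestor, resolve it, and
	// re-append the not-yet-created suffix.
	probe, suffix := dst, ""
	for {
		if resolved, err := filepath.EvalSymlinks(probe); err == nil {
			probe = filepath.Join(resolved, suffix)
			break
		}
		probe, suffix = filepath.Dir(probe), filepath.Join(filepath.Base(probe), suffix)
	}
	rel, err := filepath.Rel(resolvedRoot, probe)
	if err != nil || rel == ".." || strings.HasPrefix(rel, ".."+string(filepath.Separator)) {
		return fmt.Errorf("storage: %q escapes root: %w", dst, domain.ErrInvalidInput)
	}
	return nil
}
```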
T2 — drop AWS_SDK_LOAD_CONFIG env from test helper
That env is aws-sdk-go-v1 only; no-op under v2. Removed the line
rather than keeping a misleading mitigation.
T3 — panic-guard test coverage on NewS3StorageWithClient
New s3_test.go asserts panic on empty bucket / nil client / nil
presigner with a clear message. Plus a regression test that
S3.Upload rejects an invalid key BEFORE invoking PutObject, so a
traversal attempt never causes us to buffer attacker-supplied bytes.
T4 — local_test.go symlink + chmod coverage
TestLocalStorage_SymlinkEscapeRejected plants <root>/escape -> /tmp
and verifies all four ops reject ErrInvalidInput; sanity-checks the
outside dir stays empty. TestLocalStorage_FileMode0644 verifies the
chmod step actually produces 0o644 (CreateTemp gives 0o600).
Both skip on Windows.
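The planted-symlink setup is roughly this (a sketch; the `NewLocalStorage(root, urlPrefix)` constructor shape is an assumption):

```go
func TestLocalStorage_SymlinkEscapeRejected(t *testing.T) {
	if runtime.GOOS == "windows" {
		t.Skip("symlink semantics differ on Windows")
	}
	root, outside := t.TempDir(), t.TempDir()
	// Plant <root>/escape -> <outside>; any key under "escape/" must be rejected.
	if err := os.Symlink(outside, filepath.Join(root, "escape")); err != nil {
		t.Fatal(err)
	}
	s := NewLocalStorage(root, "/static") // assumed constructor shape
	_, err := s.Upload(context.Background(), "escape/receipt.png",
		strings.NewReader("x"), "image/png")
	if !errors.Is(err, domain.ErrInvalidInput) {
		t.Fatalf("want ErrInvalidInput, got %v", err)
	}
}
```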
T5 — handler error class mapping
payment.go + blog.go now distinguish ErrInvalidInput (400) from
internal errors (logged + 500). Closes the 400-vs-500 gap on
crafted multipart filenames.
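The mapping is roughly this shape (a fragment, not the PR verbatim; `h.logger` and the handler wiring are assumptions consistent with the review comments):

```go
url, err := UploadFormFile(c.Context(), h.storage, file, "receipts")
if err != nil {
	// Client-caused: invalid key (e.g. traversal in a crafted multipart filename) → 400.
	if errors.Is(err, domain.ErrInvalidInput) {
		return fiber.NewError(fiber.StatusBadRequest, "invalid file")
	}
	// Everything else: log the details server-side, return a generic 500.
	h.logger.Error("upload failed", "error", err)
	return fiber.NewError(fiber.StatusInternalServerError, "upload failed")
}
```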
Out of scope (deferred to follow-up PR):
- S3 Upload contract (presigned vs durable key) — known limitation
documented in domain.Storage docstring
- io.ReadAll size cap on S3 Upload
- Parent dir fsync after rename
50 tests pass (was 44).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
leftover scaffolding from when I was going to track read state on the body — the recordingClient assertion already proves PutObject is never called, so the unused touchedReader was just dead code.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>