Conversation
Pull request overview
This PR adds a Lambda function to generate S3 pre-signed URLs for file uploads larger than 10MB, which is a common workaround for Lambda's payload size limit.
Changes:
- Adds S3UploadUrlGenerator component with configurable bucket, file validation, and key prefix strategies
- Implements Lambda handler with security features (filename sanitization, MIME type validation, size checks)
- Includes comprehensive test suite with Vitest
Reviewed changes
Copilot reviewed 8 out of 10 changed files in this pull request and generated 14 comments.
Summary per file
| File | Description |
|---|---|
| vitest.config.ts | Configures Vitest test runner with coverage reporting |
| tsup.config.ts | Adds Lambda handler as build entry point and marks AWS SDK as external |
| tsconfig.json | Adds Node.js types for Lambda development |
| src/sst/aws/s3-upload-url-generator/index.ts | Component definition with bucket and Lambda configuration |
| src/sst/aws/s3-upload-url-generator/handler.ts | Lambda handler implementing presigned URL generation with validation |
| src/sst/aws/s3-upload-url-generator/handler.test.ts | Comprehensive test suite covering validation and sanitization |
| src/index.ts | Exports new component and related types |
| package.json | Adds AWS SDK, Lambda types, Vitest, and test scripts |
| .github/workflows/test.yml | GitHub Actions workflow for running tests on PRs |
Files not reviewed (1)
- pnpm-lock.yaml: Language not supported
| "@aws-sdk/client-s3", // AWS SDK provided by Lambda runtime | ||
| "@aws-sdk/s3-request-presigner", | ||
| "aws-lambda", // AWS Lambda types |
The comment "AWS SDK provided by Lambda runtime" needs qualification. The Lambda runtime ships AWS SDK v2 with nodejs16.x and earlier; AWS SDK v3 (which this code uses via @aws-sdk/client-s3) ships with nodejs18.x and later. Marking these packages as external therefore only works on nodejs18.x+ runtimes, ties the function to whatever SDK version the runtime happens to provide, and fails at runtime with module-not-found errors on older runtimes. AWS recommends bundling the SDK to pin its version, so these packages should not be marked as external.
| "@aws-sdk/client-s3", // AWS SDK provided by Lambda runtime | |
| "@aws-sdk/s3-request-presigner", | |
| "aws-lambda", // AWS Lambda types | |
| "aws-lambda", // AWS Lambda types only (dev-time, not needed at runtime) |
```typescript
    // Validate required fields
    if (!body.fileName || !body.fileSize || !body.mimeType) {
      return {
        statusCode: 400,
        headers,
        body: JSON.stringify({
          error: "fileName, fileSize, and mimeType are required",
          code: "INVALID_REQUEST",
        }),
      };
    }
```
The validation only checks that fileName, fileSize, and mimeType are truthy. A fileSize of 0 is caught incidentally by the truthiness check (but with a misleading "required" error), while negative or non-numeric values pass through entirely. Consider adding explicit validation, e.g. `if (typeof body.fileSize !== "number" || body.fileSize <= 0)`, to ensure the file size is reasonable.
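A stricter guard could look like the following sketch. The `UploadRequest` shape mirrors the request-body fields named in this review, and `validateFileSize` is a hypothetical helper, not the handler's actual code:

```typescript
// Illustrative sketch only: UploadRequest mirrors the request body fields
// named in this review; the handler's real types may differ.
interface UploadRequest {
  fileName?: string;
  fileSize?: number;
  mimeType?: string;
}

// Returns an error message for invalid sizes, or null when the size is acceptable.
function validateFileSize(body: UploadRequest): string | null {
  if (typeof body.fileSize !== "number" || !Number.isFinite(body.fileSize)) {
    return "fileSize must be a number";
  }
  if (!Number.isInteger(body.fileSize) || body.fileSize <= 0) {
    return "fileSize must be a positive integer";
  }
  return null;
}
```

The handler could map a non-null return value to the existing 400/`INVALID_REQUEST` response shape.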
```typescript
    mimeType: "application/pdf" as const,
  },
  PATH_TRAVERSAL: {
    fileName: "../../../etc/passwd",
    fileSize: 1024,
    mimeType: "image/png" as const,
  },
  LEADING_SLASHES: {
    fileName: "///test.png",
    fileSize: 1024,
    mimeType: "image/png" as const,
```
Test coverage is missing for edge cases with invalid fileSize values (zero, negative, extremely large numbers). These should be tested to ensure the handler properly validates fileSize input.
```typescript
  ALGORITHM: "X-Amz-Algorithm",
  SIGNATURE: "X-Amz-Signature",
} as const;

// Stub environment variables before importing handler
vi.stubEnv("AWS_REGION", "us-east-1");
vi.stubEnv("BUCKET_NAME", TEST_CONFIG.BUCKET_NAME);
vi.stubEnv(
  "ALLOWED_MIME_TYPES",
  JSON.stringify(TEST_CONFIG.ALLOWED_MIME_TYPES),
);
vi.stubEnv("MAX_FILE_SIZE_BYTES", String(TEST_CONFIG.MAX_FILE_SIZE_BYTES));
vi.stubEnv("URL_EXPIRY_SECONDS", String(TEST_CONFIG.URL_EXPIRY_SECONDS));
vi.stubEnv("KEY_PREFIX_STATIC", TEST_CONFIG.KEY_PREFIX_STATIC);
vi.stubEnv(
  "KEY_PREFIX_INCLUDE_TIMESTAMP",
```
Test coverage is missing for edge cases with invalid fileName values (empty string, whitespace only, extremely long filenames). These should be tested to ensure proper validation and sanitization behavior.
```typescript
    // Add custom metadata
    if (body.metadata) {
      putObjectParams.Metadata = body.metadata;
    }
```
The metadata from the request body is passed directly to S3 without validation. S3 metadata has restrictions (keys must be valid HTTP headers, values have size limits). Additionally, there's no limit on the number of metadata keys, which could be exploited. Consider adding validation for metadata keys and values, and limiting the number of allowed metadata entries.
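One possible shape for such validation. The entry-count limit and key pattern below are illustrative choices; the 2KB total is S3's documented cap on user-defined metadata per object, and `validateMetadata` is a hypothetical helper name:

```typescript
// Illustrative metadata validation sketch; limits other than the 2KB total
// are arbitrary choices, and the function name is hypothetical.
const MAX_METADATA_ENTRIES = 10;
const MAX_METADATA_TOTAL_BYTES = 2048; // S3 caps user-defined metadata at 2KB per object
const METADATA_KEY_PATTERN = /^[a-z0-9-]{1,64}$/; // keys become x-amz-meta-* HTTP headers

function validateMetadata(metadata: Record<string, string>): string | null {
  const entries = Object.entries(metadata);
  if (entries.length > MAX_METADATA_ENTRIES) {
    return `at most ${MAX_METADATA_ENTRIES} metadata entries are allowed`;
  }
  let totalBytes = 0;
  for (const [key, value] of entries) {
    if (!METADATA_KEY_PATTERN.test(key)) {
      return `invalid metadata key: ${key}`;
    }
    totalBytes += Buffer.byteLength(key, "utf8") + Buffer.byteLength(value, "utf8");
  }
  if (totalBytes > MAX_METADATA_TOTAL_BYTES) {
    return "metadata exceeds the 2KB total size limit";
  }
  return null;
}
```

The handler could call this before copying `body.metadata` into `putObjectParams.Metadata` and return a 400 on a non-null result.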
```typescript
  /**
   * Authentication configuration (for future implementation)
   * @experimental Not implemented yet
   */
  authentication?: {
    type: "cognito" | "api-key" | "iam";
    config: Record<string, any>;
  };
```
The authentication property is marked as experimental and not implemented, but it's included in the public interface without any runtime handling. This could be confusing for users who might try to use it. Consider either removing it until implemented, or adding a runtime check in the component that throws an error if someone tries to use it, clearly stating it's not yet implemented.
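A minimal runtime guard along the second option might look like this sketch. The props shape is taken from the interface excerpt above; the function name and error text are illustrative:

```typescript
// Sketch: fail fast if the unimplemented `authentication` option is set.
interface S3UploadUrlGeneratorProps {
  authentication?: {
    type: "cognito" | "api-key" | "iam";
    config: Record<string, unknown>;
  };
}

function assertSupportedProps(props: S3UploadUrlGeneratorProps): void {
  if (props.authentication !== undefined) {
    throw new Error(
      "`authentication` is experimental and not implemented yet; " +
        "remove it from the component props.",
    );
  }
}
```

Calling this at the top of the component constructor surfaces the limitation at deploy time rather than letting the option silently do nothing.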
```typescript
    let bucket: sst.aws.Bucket;

    if (typeof props.bucket === "string") {
      // Reference existing bucket
```
When using an existing bucket by providing a string, the code uses Bucket.get() which will reference an existing bucket. However, there's no validation that the bucket exists or that the Lambda will have permissions to access it. Consider documenting that users must ensure the bucket exists and appropriate IAM permissions are configured when using bucket reference mode.
Suggested change:
```diff
-      // Reference existing bucket
+      // Reference existing bucket by name. This will not create or validate the bucket.
+      // Ensure that:
+      // - The bucket already exists in the target AWS account/region.
+      // - The Lambda function has IAM permissions to access it (for example via `link`
+      //   or explicit IAM policies).
```
```typescript
  const headers = {
    "Content-Type": "application/json",
    "Access-Control-Allow-Origin": "*", // Configured via Lambda URL CORS
    "Access-Control-Allow-Methods": "POST, OPTIONS",
```
The CORS headers are hardcoded in the Lambda handler response, but Lambda Function URLs have their own CORS configuration. This creates duplicate CORS handling and potential conflicts. The hardcoded headers in the response will override the Lambda Function URL CORS configuration. Either rely on Lambda Function URL CORS configuration (by removing manual headers) or document this behavior clearly.
Suggested change:
```diff
+  // Note: CORS is configured via the Lambda Function URL.
+  // Do not set CORS headers here to avoid overriding that configuration.
   const headers = {
     "Content-Type": "application/json",
-    "Access-Control-Allow-Origin": "*", // Configured via Lambda URL CORS
-    "Access-Control-Allow-Methods": "POST, OPTIONS",
```
```typescript
      expect(result.statusCode).toBe(HTTP_STATUS.OK);
      // Key should not include date format or UUID since both are disabled
      // Note: There's a double slash due to the key generation logic
      expect(body.key).toBe(
        `${TEST_CONFIG.KEY_PREFIX_STATIC}/${TEST_FILE.VALID_PNG.fileName}`,
```
The test acknowledges the double slash issue with a comment "Note: There's a double slash due to the key generation logic" but treats it as expected behavior. This is actually a bug in the handler that should be fixed rather than documented in tests. The expected key should be "uploads/test.png" not "uploads//test.png".
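A fix in the handler could join only non-empty key segments, for example along these lines (a sketch, not the handler's actual code; the function name is hypothetical):

```typescript
// Sketch of a key builder that cannot produce double slashes: stray
// slashes are trimmed per segment and empty segments are dropped.
function buildObjectKey(...segments: Array<string | undefined>): string {
  return segments
    .map((s) => (s ?? "").replace(/^\/+|\/+$/g, "")) // trim leading/trailing slashes
    .filter((s) => s.length > 0) // drop empty segments (e.g. disabled date/UUID parts)
    .join("/");
}
```

With this, disabling the date and UUID parts yields `uploads/test.png` and the test comment about the double slash can be deleted.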
```typescript
  const sanitizedFileName = body.fileName
    .replace(/\.\./g, "") // Remove parent directory references
    .replace(/^\/+/, "") // Remove leading slashes
    .replace(/[^a-zA-Z0-9._-]/g, "_"); // Replace special chars
```
The filename sanitization replaces special characters with underscores, which could result in non-unique filenames when different original filenames map to the same sanitized version (e.g., "file@#.png" and "file$%.png" both become "file__.png"). This could lead to unintentional file overwrites. Consider either:
- Using the UUID prefix by default to ensure uniqueness
- Generating a hash of the original filename to preserve uniqueness
- Rejecting filenames with invalid characters instead of sanitizing them
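A sketch of the hash option. The sanitization rules are copied from the excerpt above; the 8-character suffix length is an arbitrary choice and `sanitizeFileName` is an illustrative name:

```typescript
import { createHash } from "node:crypto";

// One possible fix: append a short hash of the original filename so
// distinct inputs stay distinct after sanitization collapses them.
function sanitizeFileName(fileName: string): string {
  const sanitized = fileName
    .replace(/\.\./g, "") // remove parent directory references
    .replace(/^\/+/, "") // remove leading slashes
    .replace(/[^a-zA-Z0-9._-]/g, "_"); // replace special chars
  const hash = createHash("sha256").update(fileName).digest("hex").slice(0, 8);
  const dot = sanitized.lastIndexOf(".");
  // Insert the hash before the extension when there is one.
  return dot > 0
    ? `${sanitized.slice(0, dot)}-${hash}${sanitized.slice(dot)}`
    : `${sanitized}-${hash}`;
}
```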
Lambda can't handle uploads > 10MB, so adding an intermediate step that generates a pre-signed URL and lets clients upload larger files directly to S3 is a common pattern/workaround I use.