Add direct AWS Lambda deployment without Chalice dependency#5
Co-authored-by: sgbaird <45469701+sgbaird@users.noreply.github.com>
@seth-leavitt, if you're still running into issues with the BYU AWS, you could test it out on a personal AWS account while we're sorting things out. I haven't tested these instructions yet, but the hope is that they will provide some additional clarity once we refine them.

@copilot create and commit a deployment.zip for me so that we can test it out.
Built and committed.
Pull request overview
This pull request enables direct AWS Lambda deployment without requiring Chalice, making it easier for external users to deploy their own YouTube streaming Lambda functions. The PR maintains backward compatibility by keeping the existing Chalice deployment path (app.py) unchanged while adding a new direct deployment option via lambda_function.py.
Changes:
- Adds standalone `lambda_function.py` with improved error handling (400 for client errors, 500 for server errors) compared to the Chalice version
- Implements automated deployment package creation via GitHub Actions workflow and local build script
- Provides comprehensive deployment documentation with step-by-step AWS setup instructions
Reviewed changes
Copilot reviewed 5 out of 7 changed files in this pull request and generated 5 comments.
| File | Description |
|---|---|
| lambda_function.py | New standalone Lambda handler that duplicates app.py logic but with improved error handling |
| build-deployment-zip.sh | Local build script for creating deployment.zip with all dependencies except Chalice |
| .github/workflows/build-deployment-zip.yaml | Automated workflow to build and attach deployment.zip to releases |
| DEPLOYMENT.md | Comprehensive 361-line deployment guide covering IAM, S3, Lambda setup, and troubleshooting |
| README.md | Restructured to present both deployment options with clear guidance on which to use |
| .gitignore | Adds deployment.zip and dependencies/ to ignored files |
```shell
-d '{
  "body": {
    "action": "create",
    "cam_name": "Camera1",
    "workflow_name": "MyWorkflow",
    "privacy_status": "private"
  }
}'
```
The curl example for testing via Function URL appears to have incorrect nesting. When using Lambda Function URLs, the HTTP request body is automatically placed in `event['body']` as a string. The curl command should send the payload directly without wrapping it in another `"body"` key. The correct format is:

```shell
curl -X POST https://YOUR-FUNCTION-URL/ \
  -H "Content-Type: application/json" \
  -d '{"action": "create", "cam_name": "Camera1", "workflow_name": "MyWorkflow", "privacy_status": "private"}'
```

The current example would result in `event['body']` containing `'{"body": {...}}'`, which would then be parsed to `{"body": {...}}`, causing the code to look for `payload.get("action")` in the wrong nested structure.
Suggested change:

```diff
--d '{
-  "body": {
-    "action": "create",
-    "cam_name": "Camera1",
-    "workflow_name": "MyWorkflow",
-    "privacy_status": "private"
-  }
-}'
+-d '{"action": "create", "cam_name": "Camera1", "workflow_name": "MyWorkflow", "privacy_status": "private"}'
```
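To make the failure mode concrete, here is a minimal sketch; the `parse_body` helper is hypothetical, mirroring the parsing behavior the comment describes rather than quoting the code under review:

```python
import json

def parse_body(event):
    # Function URLs deliver the HTTP body as a JSON string in event["body"];
    # decode it, then the handler reads payload.get("action").
    body = event.get("body", "{}")
    return json.loads(body) if isinstance(body, str) else body

flat = {"body": '{"action": "create", "cam_name": "Camera1"}'}
nested = {"body": '{"body": {"action": "create", "cam_name": "Camera1"}}'}

print(parse_body(flat).get("action"))    # create
print(parse_body(nested).get("action"))  # None: "action" is one level too deep
```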
```python
"""
AWS Lambda handler function for YouTube streaming management.

This function can be directly deployed to AWS Lambda without Chalice.

Expected event payload:
{
    "body": {
        "action": "create" or "end",
        "cam_name": "camera name",
        "workflow_name": "workflow identifier",
        "privacy_status": "public", "private", or "unlisted" (optional, default: "private")
    }
}
"""
```
The docstring shows the event payload with `body` as a dict, but the code also handles `body` as a JSON string (lines 31-32), which is what Lambda Function URLs provide. Consider updating the docstring to mention both formats:
- For Lambda Function URLs: `body` is a JSON string that gets parsed
- For direct invocation/testing: `body` can be a dict

This would help users understand the different invocation methods.
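A short sketch of the two invocation styles, using a hypothetical `normalize_body` helper that mirrors the dual handling described above (not the code under review):

```python
import json

def normalize_body(event):
    body = event.get("body", {})
    if isinstance(body, str):   # Function URL: body arrives as a JSON string
        body = json.loads(body)
    return body                 # direct invocation/testing: already a dict

url_event = {"body": '{"action": "end", "workflow_name": "MyWorkflow"}'}
direct_event = {"body": {"action": "end", "workflow_name": "MyWorkflow"}}

# Both invocation styles normalize to the same payload dict.
assert normalize_body(url_event) == normalize_body(direct_event)
```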
1. Download and extract `deployment.zip`
2. Edit `chalicelib/ytb_api_utils.py`:

```python
# Lines 9-13
```
The line reference "Lines 9-13" is slightly inaccurate. The constants are on lines 9 (CHANNEL_ID), 12 (S3_BUCKET), and 13 (S3_KEY) in ytb_api_utils.py, not consecutively on lines 9-13. Consider updating to "Lines 9, 12-13" for accuracy.
Suggested change:

```diff
-# Lines 9-13
+# Lines 9, 12-13
```
```python
cam_name = payload.get("cam_name", "UnknownCam")
workflow_name = payload.get("workflow_name", "UnknownWorkflow")
```
Documentation states that `workflow_name` is "required", but the code provides a default value of `"UnknownWorkflow"` when it's missing. This is inconsistent. Either the parameter should be truly required (raising an error when missing), or the documentation should indicate it's optional with a default. Given that `workflow_name` is used to identify and manage specific broadcasts, it should likely be required to avoid confusion from multiple workflows sharing the default name.
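One way to enforce the requirement is to raise on a missing field instead of defaulting, so the handler can map the error to a 400 response. A hypothetical sketch (the `require` helper is illustrative, not part of the PR):

```python
def require(payload, key):
    # Raise a client error when a required field is missing or empty,
    # rather than silently substituting a default value.
    value = payload.get(key)
    if not value:
        raise ValueError(f"Missing required field: {key}")
    return value

try:
    require({"cam_name": "Camera1"}, "workflow_name")
except ValueError as exc:
    print(exc)  # Missing required field: workflow_name
```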
```shell
google-auth-httplib2

# Copy lambda function and chalicelib
echo "Copying lambda function and chalicelib..."
```
The script should verify that required files (lambda_function.py and chalicelib/) exist before attempting to copy them. If these files are missing, the script will fail with unclear error messages. Consider adding validation checks before the copy operations to provide clearer error messages to users.
Suggested change:

```diff
 echo "Copying lambda function and chalicelib..."
+if [ ! -f lambda_function.py ]; then
+  echo "Error: Required file 'lambda_function.py' not found in the current directory."
+  echo "Please ensure you run this script from the project root where lambda_function.py exists."
+  exit 1
+fi
+if [ ! -d chalicelib ]; then
+  echo "Error: Required directory 'chalicelib/' not found in the current directory."
+  echo "Please ensure you run this script from the project root where chalicelib/ exists."
+  exit 1
+fi
```
@seth-leavitt, you can try with this file (deployment.zip): https://github.com/AccelerationConsortium/streamingLambda/blob/a264ebff6e5eb6f4317158c0ec7a5b90c0d2b7e1/deployment.zip |
Enables deployment to AWS Lambda via zip file upload, removing the Chalice requirement for external users. The existing Chalice deployment path remains unchanged for internal use.
Changes
- `lambda_function.py`: Standalone handler with proper error codes (400 for client errors, 500 for server errors)
- `.github/workflows/build-deployment-zip.yaml`: Auto-generates deployment.zip on releases, uploadable as artifact or release asset
- `build-deployment-zip.sh`: Local build script for customization scenarios
- `DEPLOYMENT.md`: Complete deployment guide covering IAM setup, S3 configuration, and troubleshooting
- `README.md`: Restructured to show both deployment options upfront
- `deployment.zip`: Pre-built deployment package (~40MB) committed to the repo for immediate testing

Deployment Package Structure
Dependencies installed: `boto3`, `google-api-python-client`, `google-auth`, `google-auth-oauthlib`, `google-auth-httplib2`

Notably excluded: `chalice` (only needed for the Chalice deployment path)

Usage
Download `deployment.zip` directly from the repo or from the Releases page → upload to Lambda console → configure IAM/S3 → done. See DEPLOYMENT.md for details on IAM policies and S3 token storage.