This repository contains Terraform configurations to provision AWS infrastructure consisting of an Elastic Beanstalk environment, a Lambda function for automatic deployments, Route 53 for DNS management, and an S3 bucket for object storage.
- Terraform installed locally.
- AWS credentials configured with the necessary permissions.
- Create a new working directory and change into it:
  mkdir <working_dir> && cd <working_dir>
- Clone this repository:
  git clone https://github.com/michaelkedey/aws_beanstalk.git
- Change into the project directory:
  cd aws_beanstalk/src/infrastracture
- Run the format script to format and validate all Terraform files:
  ./format_validate_all.sh
- Review and customize the variables in the .terraform.tfvars file inside the src/infrastracture/env/**/ directories with your specific configuration details.
- Review and customize the backend configuration in the backend.tfvars file inside the src/infrastracture/env/**/ directories with your specific configuration details (a sample backend.tfvars sketch follows this list).
- Initialize Terraform:
  terraform init -var-file="./env/**/.terraform.tfvars" -backend-config="./env/**/backend.tfvars"
- Plan the Terraform changes:
  terraform plan -var-file="./env/**/.terraform.tfvars"
- Apply the Terraform configuration:
  terraform apply -var-file="./env/**/.terraform.tfvars" --auto-approve
  Omit --auto-approve if you want to review and confirm the changes at the prompt.
  Note: you may need to run the apply command a second time if the first run fails.
- Destroy the resources after testing:
  terraform destroy -var-file="./env/**/.terraform.tfvars" --auto-approve
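For reference, a backend.tfvars for the S3 remote-state backend typically looks like the sketch below. The bucket, key, and table names are placeholders, not this project's actual values; adjust them to your own state storage.

```hcl
# Hypothetical S3 backend settings -- replace with your own values
bucket         = "my-terraform-state-bucket"           # state bucket name
key            = "aws_beanstalk/dev/terraform.tfstate" # state object key per environment
region         = "us-east-1"                           # region of the state bucket
dynamodb_table = "my-terraform-locks"                  # state locking table (optional)
encrypt        = true
```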
The VPC serves as the underlying network infrastructure on which all other resources are provisioned. The networking for this infrastructure includes:
- 1 VPC
- 2 Private Subnets
- 2 Public Subnets
- 1 NAT Gateway
- Private and Public Route Tables with Route rules
- 1 Security Group
- Configuration details can be found in the vpc.tf file under src/infrastracture/modules/vpc/vpc.tf.
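As a rough sketch of how those pieces fit together (CIDR ranges, names, and the var.* inputs below are illustrative, not the module's actual values):

```hcl
resource "aws_vpc" "this" {
  cidr_block           = "10.0.0.0/16"
  enable_dns_support   = true
  enable_dns_hostnames = true
}

# Two public and two private subnets spread across two availability zones
resource "aws_subnet" "public" {
  count                   = 2
  vpc_id                  = aws_vpc.this.id
  cidr_block              = cidrsubnet(aws_vpc.this.cidr_block, 8, count.index)
  availability_zone       = var.azs[count.index]
  map_public_ip_on_launch = true
}

resource "aws_subnet" "private" {
  count             = 2
  vpc_id            = aws_vpc.this.id
  cidr_block        = cidrsubnet(aws_vpc.this.cidr_block, 8, count.index + 2)
  availability_zone = var.azs[count.index]
}

# A NAT gateway in a public subnet gives the private subnets outbound internet access
resource "aws_eip" "nat" {
  domain = "vpc"
}

resource "aws_nat_gateway" "this" {
  allocation_id = aws_eip.nat.id
  subnet_id     = aws_subnet.public[0].id
}
```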
- An S3 bucket, which serves as the storage for the application code, is created as part of the infrastructure.
- Configuration details can be found in the s3.tf file under src/infrastracture/modules/s3/s3.tf.
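A minimal sketch of such a bucket, assuming a hypothetical var.app_bucket_name input (versioning is shown only as an example and may differ from the module's actual configuration):

```hcl
resource "aws_s3_bucket" "app_code" {
  bucket        = var.app_bucket_name # hypothetical input variable
  force_destroy = true                # allow terraform destroy to empty the bucket
}

resource "aws_s3_bucket_versioning" "app_code" {
  bucket = aws_s3_bucket.app_code.id
  versioning_configuration {
    status = "Enabled"
  }
}
```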
- The Elastic Beanstalk environment is deployed into the new VPC, in the private subnets.
- Configuration details can be found in the prod_env.tf file under src/infrastracture/modules/beanstalk/prod/prod_env.tf.
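Placing the environment inside the VPC's private subnets is done through Elastic Beanstalk option settings; a hedged sketch, with var.* standing in for the module's inputs:

```hcl
resource "aws_elastic_beanstalk_environment" "prod" {
  name                = var.environment_name
  application         = var.application_name
  solution_stack_name = var.solution_stack_name

  # Attach the environment to the VPC created by the vpc module
  setting {
    namespace = "aws:ec2:vpc"
    name      = "VPCId"
    value     = var.vpc_id
  }

  # Launch the instances into the private subnets
  setting {
    namespace = "aws:ec2:vpc"
    name      = "Subnets"
    value     = join(",", var.private_subnet_ids)
  }

  # Instance profile granting the EC2 instances their permissions
  setting {
    namespace = "aws:autoscaling:launchconfiguration"
    name      = "IamInstanceProfile"
    value     = var.instance_profile_name
  }
}
```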
- The Lambda function has an S3 event trigger that runs when an object is uploaded to the created bucket, automatically deploying application updates to Elastic Beanstalk from the S3 bucket. The Lambda is configured to create application versions only from uploaded S3 objects that meet defined criteria, ensuring that only relevant application code gets deployed. The criteria are a prefix of code_ and a suffix of .zip.
- Configuration details can be found in the lambda_function.tf file under src/infrastracture/modules/lambda/lambda.tf.
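The prefix and suffix criteria are typically enforced on the bucket notification itself, so the function is only invoked for matching objects. A sketch under that assumption (the var.* references are placeholders for the bucket and function created elsewhere in the configuration):

```hcl
# Allow S3 to invoke the deployment Lambda
resource "aws_lambda_permission" "allow_s3" {
  statement_id  = "AllowExecutionFromS3"
  action        = "lambda:InvokeFunction"
  function_name = var.deploy_lambda_name
  principal     = "s3.amazonaws.com"
  source_arn    = var.app_bucket_arn
}

# Invoke the Lambda only for uploads named code_*.zip
resource "aws_s3_bucket_notification" "code_uploads" {
  bucket = var.app_bucket_id

  lambda_function {
    lambda_function_arn = var.deploy_lambda_arn
    events              = ["s3:ObjectCreated:*"]
    filter_prefix       = "code_"
    filter_suffix       = ".zip"
  }

  depends_on = [aws_lambda_permission.allow_s3]
}
```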
An elastic load balancer, security group, target groups, and listeners for the various services and ports have been provided for. The elastic load balancer distributes incoming traffic across multiple instances of your web server, such as the EC2 instances defined in the Beanstalk environment. Depending on your use case, you can associate this load balancer with the Beanstalk environment, or let Beanstalk create its own load balancer, as currently configured.
- Configuration details can be found in the lb.tf file under src/infrastracture/modules/loadbalancer/lb.tf.
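In outline, the standalone load balancer resources look roughly like this (names, ports, and the var.* inputs are assumptions for illustration):

```hcl
resource "aws_lb" "app" {
  name               = "app-alb"
  load_balancer_type = "application"
  security_groups    = [var.lb_security_group_id]
  subnets            = var.public_subnet_ids
}

resource "aws_lb_target_group" "web" {
  name     = "web-tg"
  port     = 80
  protocol = "HTTP"
  vpc_id   = var.vpc_id
}

# Forward HTTP traffic on port 80 to the web target group
resource "aws_lb_listener" "http" {
  load_balancer_arn = aws_lb.app.arn
  port              = 80
  protocol          = "HTTP"

  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.web.arn
  }
}
```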
- Route 53 is used for DNS management. With this configuration, Route 53 creates an A record for a registered domain name linked to the CNAME of the Beanstalk environment.
- Configuration details can be found in the route.tf file under src/infrastracture/modules/route53/route.tf.
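A minimal sketch of such a record, assuming hypothetical var.* inputs for the hosted zone, domain name, and the environment's CNAME:

```hcl
# Hosted zone ID used by Elastic Beanstalk load balancers in the current region
data "aws_elastic_beanstalk_hosted_zone" "current" {}

resource "aws_route53_record" "app" {
  zone_id = var.hosted_zone_id # hosted zone of the registered domain
  name    = var.domain_name    # e.g. app.example.com
  type    = "A"

  # Alias the domain to the Beanstalk environment's CNAME
  alias {
    name                   = var.beanstalk_cname
    zone_id                = data.aws_elastic_beanstalk_hosted_zone.current.id
    evaluate_target_health = true
  }
}
```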
In all, applying the configuration provisions the following resources:
- 1 VPC
- 4 subnets
- 11 SSM parameters
- 1 security group
- 2 route tables with 4 associations
- 1 NAT gateway
- 1 internet gateway
- 1 Elastic IP
- 2 S3 objects
- 1 S3 bucket notification
- 1 Lambda permission
- 1 Lambda function
- 8 policy attachments
- 4 roles
- 3 policies
- 1 Beanstalk application
- 1 S3 bucket
- 1 instance profile
- 1 Beanstalk environment
The infrastructure CI/CD pipeline uses GitHub Actions workflows defined in YAML files in the .github/workflows directory. There are three YAML files for the infrastructure CI/CD, one for each stage of deployment: development (dev_actions.yaml), staging (staging_actions.yaml), and production (prod_actions.yaml). The dev stage is triggered by a commit to the main branch with paths 'src/infrastructure/**'. The staging deployment is triggered by a successful completion of the dev_actions.yaml workflow. The production stage is triggered manually and requires manual approval, ensuring that production is deployed only after development and staging meet expectations.
The application CI/CD pipeline also uses GitHub Actions workflows defined in YAML files in the .github/workflows directory. There are three YAML files for the application CI/CD, one for each stage of deployment: development (dev_s3.yaml), staging (staging_s3.yaml), and production (prod_s3.yaml). The dev stage is triggered by a commit to the s3_auto branch with paths 'src/s3_auto_uploads/**'. The staging deployment is triggered by a successful completion of the dev_s3.yaml workflow. The production stage is triggered manually and requires manual approval, ensuring that production is deployed only after development and staging meet expectations.
The YAML files contain commands that sync the s3_auto_uploads directory with the respective buckets of the various deployment stages.
The sync uploads objects into the S3 buckets, which triggers the Lambda if the uploaded objects satisfy the prefix and suffix criteria. The triggered Lambda copies the application update, then creates and deploys an application version on Elastic Beanstalk.
- Access your Elastic Beanstalk application using the provided environment URL or custom domain (if configured).
- Monitor Lambda function execution and deployment logs in AWS CloudWatch.
Three destroy YAML files are contained in the .github/workflows directory: dev_destroy.yaml, staging_destroy.yaml, and prod_destroy.yaml. They require manual approval to trigger the destroy workflow for the respective deployment stages.
$ tree
.
|-- Manulife_Auto_IaC_Design_Architecture.png
|-- README.md
`-- src
|-- app_versions
| `-- LambdaWebApp2.zip
`-- infrastracture
|-- env
| |-- dev
| | `-- backend.tfvars
| |-- prod
| | `-- backend.tfvars
| `-- staging
| `-- backend.tfvars
|-- format_validate_all.sh
|-- locals.tf
|-- main.tf
|-- modules
| |-- app
| | |-- app.tf
| | |-- outputs.tf
| | `-- variables.tf
| |-- app_version
| | |-- app_version.tf
| | |-- outputs.tf
| | |-- store.tf
| | `-- variables.tf
| |-- beanstalk
| | `-- prod
| | |-- beanstalk-ec2-policy.json
| | |-- beanstalk-service-policy.json
| | |-- beanstalk.tf
| | |-- outputs.tf
| | `-- variables.tf
| |-- dir_upload
| | |-- uploads.tf
| | `-- variables.tf
| |-- file_upload
| | |-- outputs.tf
| | |-- uploads.tf
| | `-- variables.tf
| |-- route53
| | |-- output.tf
| | |-- route.tf
| | `-- variables.tf
| `-- vpc
| |-- outputs.tf
| |-- providers.tf
| |-- store.tf
| |-- variables.tf
| `-- vpc.tf
|-- outputs.tf
|-- providers.tf
|-- s3_uploads
| |-- dotnet-linux.zip
| `-- dotnet-linux2.zip
`-- variables.tf
17 directories, 39 files
- Make sure to review and update AWS credentials, region, and other sensitive information before committing to version control.
Feel free to contribute to this project by opening issues or creating pull requests.
