Infrastructure as Code for deploying an AWS EKS cluster with a VPC using Terraform and GitHub Actions. This project provides a production-ready, secure, and cost-optimized EKS deployment with an automated CI/CD pipeline.
```
User-data-IaC/
├── .github/
│   └── workflows/
│       └── eks-setup.yml   # GitHub Actions CI/CD pipeline
├── modules/
│   ├── eks/
│   │   ├── main.tf         # EKS cluster, node groups, addons
│   │   ├── variable.tf     # EKS module variables
│   │   └── output.tf       # EKS module outputs
│   └── vpc/
│       ├── main.tf         # VPC, subnets, gateways, routes
│       ├── variable.tf     # VPC module variables
│       └── output.tf       # VPC module outputs
├── main.tf                 # Root module configuration
├── provider.tf             # Terraform and AWS provider config
├── variable.tf             # Root variables with defaults
├── output.tf               # Root outputs
├── .gitignore              # Git ignore patterns
├── .terraform.lock.hcl     # Terraform dependency lock
├── LICENSE                 # MIT License
└── README.md               # Project overview (this file)
```
- **VPC Module**: Creates an isolated network with public/private subnets across two AZs for high availability
- **EKS Module**: Deploys a managed Kubernetes cluster with worker nodes and essential addons
- **GitHub Actions**: Automated deployment pipeline with proper error handling
- **Security**: IAM roles, access entries, and encrypted state management
```
┌───────────────────────────────────────────────────────────────┐
│                       VPC (10.0.0.0/16)                       │
├───────────────────────────┬───────────────────────────────────┤
│        us-east-1a         │            us-east-1b             │
├───────────────────────────┼───────────────────────────────────┤
│  Public Subnet            │  Public Subnet                    │
│  10.0.3.0/24              │  10.0.4.0/24                      │
│  ┌─────────────┐          │  ┌─────────────┐                  │
│  │ NAT Gateway │          │  │ NAT Gateway │                  │
│  └─────────────┘          │  └─────────────┘                  │
├───────────────────────────┼───────────────────────────────────┤
│  Private Subnet           │  Private Subnet                   │
│  10.0.1.0/24              │  10.0.2.0/24                      │
│  ┌──────────────────┐     │  ┌──────────────────┐             │
│  │ EKS Worker Nodes │     │  │ EKS Worker Nodes │             │
│  └──────────────────┘     │  └──────────────────┘             │
└───────────────────────────┴───────────────────────────────────┘
```
- AWS CLI configured with appropriate permissions
- Terraform >= 1.5.7
- GitHub repository with Actions enabled
- S3 bucket for Terraform state storage
- DynamoDB table for state locking
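The state bucket and lock table from the prerequisites are wired up through an S3 backend block; a minimal sketch, where the bucket name, state key, and table name (`my-terraform-state-bucket`, `eks/terraform.tfstate`, `terraform-lock`) are placeholders you would replace with your own:

```hcl
terraform {
  backend "s3" {
    bucket         = "my-terraform-state-bucket" # your state bucket (placeholder)
    key            = "eks/terraform.tfstate"     # path of the state object in the bucket
    region         = "us-east-1"
    encrypt        = true                        # encrypt the state file at rest
    dynamodb_table = "terraform-lock"            # your lock table (placeholder)
  }
}
```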
```bash
git clone https://github.com/your-username/User-data-IaC.git
cd User-data-IaC
```

Navigate to your repository → Settings → Secrets and variables → Actions, and add:
| Secret Name | Description | Example |
|---|---|---|
| `AWS_ACCESS_KEY_ID` | AWS access key | `AKIAIOSFODNN7EXAMPLE` |
| `AWS_SECRET_ACCESS_KEY` | AWS secret key | `wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY` |
| `BUCKET_TF_STATE` | S3 bucket for state | `my-terraform-state-bucket` |
- Go to the Actions tab in your repository
- Select the `eks_setup` workflow
- Click Run workflow
- Choose the `create-cluster` action and the required branch
```bash
# Initialize Terraform
terraform init -backend-config="bucket=your-terraform-state-bucket"

# Plan deployment
terraform plan

# Apply changes
terraform apply

# Update kubeconfig
aws eks update-kubeconfig --region us-east-1 --name custom-eks

# Verify connection
kubectl get nodes
```

| Component | Default Value | Description |
|---|---|---|
| Region | `us-east-1` | AWS region |
| VPC CIDR | `10.0.0.0/16` | VPC IP range |
| EKS Version | `1.33` | Kubernetes version |
| Node Instance | `t3.small` | EC2 instance type |
| Node Count | 2 (min: 2, max: 3) | Worker nodes |
| Disk Size | 20 GB | EBS volume size |
Create a `terraform.tfvars` file:

```hcl
# Network Configuration
vpc_cidr             = "10.0.0.0/16"
private_subnet_cidrs = ["10.0.1.0/24", "10.0.2.0/24"]
public_subnet_cidrs  = ["10.0.3.0/24", "10.0.4.0/24"]
availability_zones   = ["us-east-1a", "us-east-1b"]

# EKS Configuration
cluster_version = "1.33"
node_groups = {
  general = {
    instance_types = ["t3.small"]
    scaling_config = {
      desired_capacity = 3
      min_size         = 2
      max_size         = 5
    }
  }
}
```

- ✅ 1 VPC with DNS support
- ✅ 2 Public subnets (multi-AZ)
- ✅ 2 Private subnets (multi-AZ)
- ✅ 1 Internet Gateway
- ✅ 2 NAT Gateways (high availability)
- ✅ Route tables and associations
- ✅ Elastic IPs for NAT Gateways
- ✅ EKS Cluster with API endpoint
- ✅ Managed node group with auto-scaling
- ✅ Essential addons (VPC CNI, kube-proxy, CoreDNS, EBS CSI)
- ✅ IAM roles and policies
- ✅ Access entries for cluster management
- ✅ Security groups (managed by EKS)
- **Private Worker Nodes**: All worker nodes run in private subnets
- **IAM Access Control**: Proper IAM roles and policies
- **Access Entries**: Modern EKS access management
- **Encrypted State**: S3 backend with encryption
- **State Locking**: DynamoDB prevents concurrent modifications
- **Least Privilege**: Minimal required permissions
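As an illustration of the access-entry approach, a hedged sketch using the AWS provider's access-entry resources (the principal ARN and the choice of the cluster-admin policy are placeholder assumptions; the cluster resource name `aws_eks_cluster.custom` follows the repo's `custom-eks` naming but may differ in `modules/eks/main.tf`):

```hcl
# Grant an IAM principal access to the cluster via an access entry
resource "aws_eks_access_entry" "admin" {
  cluster_name  = aws_eks_cluster.custom.name
  principal_arn = "arn:aws:iam::123456789012:user/admin" # placeholder principal
}

# Associate a cluster-wide admin access policy with that principal
resource "aws_eks_access_policy_association" "admin" {
  cluster_name  = aws_eks_cluster.custom.name
  principal_arn = aws_eks_access_entry.admin.principal_arn
  policy_arn    = "arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy"

  access_scope {
    type = "cluster"
  }
}
```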
- **Right-sized Instances**: t3.small for development workloads
- **Auto Scaling**: Automatic node scaling based on demand
- **Managed Services**: Reduce operational overhead
- **Spot Instances**: Can be configured for non-production workloads
- EKS Cluster: ~$73/month
- 2x t3.small nodes: ~$30/month
- 2x NAT Gateways: ~$90/month
- Total: ~$193/month
> 💡 **Cost Tip**: Use a single NAT Gateway for development to save ~$45/month
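One way to model that tip is a hypothetical `single_nat_gateway` flag added to the VPC module; this is a sketch, not the module's actual code, and it assumes the public subnets are declared as `aws_subnet.public`:

```hcl
# Hypothetical toggle -- not currently exposed by the VPC module
variable "single_nat_gateway" {
  description = "Create one shared NAT Gateway instead of one per AZ (dev cost saving)"
  type        = bool
  default     = false
}

# One EIP/NAT Gateway per public subnet, or a single shared one when the flag is set
resource "aws_eip" "nat" {
  count  = var.single_nat_gateway ? 1 : length(var.public_subnet_cidrs)
  domain = "vpc"
}

resource "aws_nat_gateway" "nat" {
  count         = var.single_nat_gateway ? 1 : length(var.public_subnet_cidrs)
  allocation_id = aws_eip.nat[count.index].id
  subnet_id     = aws_subnet.public[count.index].id
}
```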
```hcl
node_groups = {
  spot = {
    instance_types = ["t3.medium", "t3.large"]
    capacity_type  = "SPOT"
    scaling_config = {
      desired_capacity = 1
      min_size         = 1
      max_size         = 10
    }
  }
}
```

Add the AWS Load Balancer Controller:
```hcl
resource "aws_eks_addon" "aws_load_balancer_controller" {
  cluster_name = aws_eks_cluster.custom.name
  addon_name   = "aws-load-balancer-controller"
}
```

Enable CloudWatch control-plane logging:

```bash
aws eks update-cluster-config \
  --name custom-eks \
  --logging '{"clusterLogging":[{"types":["api","audit","authenticator","controllerManager","scheduler"],"enabled":true}]}'
```
```bash
# Check cluster status
kubectl get nodes -o wide

# View system pods
kubectl get pods -n kube-system

# Check addon status
aws eks describe-addon --cluster-name custom-eks --addon-name vpc-cni
```

After deploying an application to EKS, use the steps below to reach it when it is exposed as a NodePort service rather than a LoadBalancer. (If the service is of type LoadBalancer, you can access the application directly through the DNS name the load balancer provides.)
Start a temporary debug pod with curl installed (this example uses the `curlimages/curl` image):

```bash
kubectl run -i --tty curlpod --image=curlimages/curl --restart=Never -- sh
```

This gives you a shell prompt inside the pod. Curl your service from inside the cluster using the service name and port, for example:

```bash
curl http://backend-service.myapp.svc.cluster.local:8008/docs
```

The HTTP response printed in the CLI confirms the application is running.
| Issue | Solution |
|---|---|
| Access Denied | Verify IAM permissions and access entries |
| Timeout Errors | Check VPC configuration and security groups |
| PVC stuck in Pending (EBS) | Grant the EBS CSI driver the IAM permissions it needs; PVCs stay Pending when those permissions are missing |
| State Lock | Verify DynamoDB table exists and is accessible |
| Node Join Issues | Check subnet routing and security groups |
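For the PVC-stuck-in-Pending case, the usual fix is attaching the AWS-managed EBS CSI policy to the role the driver uses. A sketch, assuming the node group role is declared as `aws_iam_role.node` (the actual resource name in `modules/eks/main.tf` may differ, and an IRSA role for the addon is the alternative approach):

```hcl
# Allow the EBS CSI driver (running on the nodes) to manage EBS volumes
resource "aws_iam_role_policy_attachment" "ebs_csi" {
  role       = aws_iam_role.node.name
  policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy"
}
```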
```bash
# Check AWS credentials
aws sts get-caller-identity

# Verify EKS cluster
aws eks describe-cluster --name custom-eks

# Check node group status
aws eks describe-nodegroup --cluster-name custom-eks --nodegroup-name general
```

- Go to the Actions tab
- Run the `eks_setup` workflow
- Select the `delete-cluster` action
```bash
terraform destroy
```

- kubectl - Kubernetes CLI
- eksctl - EKS CLI tool
- k9s - Terminal UI for Kubernetes
- Lens - Kubernetes IDE
- minikube - Running k8s locally
- Fork the repository
- Create a feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -m 'Add amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
This project is licensed under the MIT License - see the LICENSE file for details.
- Email: sudarshanrpgowda7@gmail.com
- Issues: GitHub Issues
- Discussions: GitHub Discussions

⭐ Star this repository if it helped you!