GitOps repository for KubeStock Kubernetes deployments using ArgoCD.
```
gitops/
├── README.md
├── BOOTSTRAP.md                 # Cluster bootstrap instructions
├── OBSERVABILITY_SETUP.md       # Observability stack documentation
├── argocd/                      # ArgoCD configuration
│   ├── projects/                # AppProject definitions
│   │   ├── infrastructure.yaml
│   │   ├── staging.yaml
│   │   └── production.yaml
│   └── config/                  # ArgoCD ConfigMaps
│       └── argocd-cm.yaml
├── apps/                        # ArgoCD Application definitions
│   ├── ebs-csi-driver.yaml      # EBS CSI Driver
│   ├── external-secrets.yaml    # External Secrets Operator config
│   ├── metrics-server.yaml      # Kubernetes Metrics Server
│   ├── shared-rbac.yaml         # Shared RBAC resources
│   ├── staging/                 # Staging environment apps
│   │   ├── kong-staging.yaml
│   │   ├── kubestock-staging.yaml
│   │   └── observability-staging.yaml
│   └── production/              # Production environment apps
│       ├── kong-production.yaml
│       ├── kubestock-production.yaml
│       └── observability-production.yaml
├── base/                        # Base Kustomize manifests
│   ├── ebs-csi-driver/          # AWS EBS CSI Driver
│   ├── external-secrets/        # ClusterSecretStore
│   ├── kong/                    # Kong API Gateway
│   ├── metrics-server/          # Metrics Server
│   ├── observability-stack/     # Prometheus, Grafana, Loki, Promtail
│   ├── services/                # Microservice deployments
│   │   ├── frontend/
│   │   ├── ms-identity/
│   │   ├── ms-inventory/
│   │   ├── ms-order-management/
│   │   ├── ms-product/
│   │   └── ms-supplier/
│   └── shared-rbac/             # Shared ClusterRoles
└── overlays/                    # Environment-specific overlays
    ├── staging/                 # Staging overlay
    │   ├── kong/                # Kong config for staging
    │   └── observability-stack/ # Observability for staging
    └── production/              # Production overlay
        ├── kong/                # Kong config for production
        └── observability-stack/ # Observability for production
```
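As a reference for the layout above, each file under `apps/` is an ArgoCD Application that points at a base or overlay path. The following is an illustrative sketch only, not the repository's actual manifest; `repoURL` and `path` are placeholders:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: kubestock-staging
  namespace: argocd
spec:
  project: staging
  source:
    repoURL: https://github.com/<ORG>/<GITOPS_REPO>.git   # placeholder
    targetRevision: main
    path: gitops/overlays/staging                          # placeholder path
  destination:
    server: https://kubernetes.default.svc
    namespace: kubestock-staging
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
```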
- `ms-product` - Product Catalog Service
- `ms-inventory` - Inventory Management Service
- `ms-supplier` - Supplier Management Service
- `ms-order-management` - Order Management Service
- `ms-identity` - Identity/User Management Service
- `kubestock-frontend` - React Frontend Application
- Push changes to `gitops/overlays/staging/`
- ArgoCD syncs changes to the `kubestock-staging` namespace
- Run smoke tests via CI/CD pipeline
- Staging tests pass
- Manual approval in GitHub Actions / ArgoCD UI
- Promote to production
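After approval, the production sync can be triggered from the ArgoCD UI or CLI. A sketch, assuming the Application name matches `apps/production/kubestock-production.yaml`:

```shell
# Trigger the manual sync, then block until the app reports healthy
argocd app sync kubestock-production
argocd app wait kubestock-production --health --timeout 300
```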
- Deploy to inactive color (e.g., green)
- Run smoke tests on green
- Switch traffic from blue to green via Service selector
- Keep blue as rollback target
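The traffic switch in the last two steps works by changing which pod label the Service selects. A minimal sketch, assuming the blue and green Deployments label their pods with a `color` key (the label names here are hypothetical, not taken from the repo's manifests):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: kubestock-frontend
  namespace: kubestock-production
spec:
  selector:
    app: kubestock-frontend
    color: green        # was "blue"; flipping this value re-points traffic
  ports:
    - port: 80
      targetPort: 80
```

Because only the selector changes, the blue pods keep running and remain an instant rollback target: reverting the selector to `blue` restores the previous version.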
From your local machine, SSH tunnel through bastion to the NLB:
```shell
# Get NLB DNS from Terraform output
cd infrastructure/terraform/prod
terraform output -raw nlb_dns_name

# SSH tunnel to ArgoCD UI (port 8443 on NLB routes to NodePort 30443)
ssh -L 8443:<NLB_DNS>:8443 -i ~/.ssh/kubestock-key ubuntu@<BASTION_IP>

# Access UI at https://localhost:8443 in your browser
# Username: admin
# Password: 1qK4StYU6Fs0W2l3
```

Or use the tunnel script:

```shell
# Edit infrastructure/scripts/ssh-tunnels/tunnel-argocd.sh with your NLB DNS and bastion IP
./infrastructure/scripts/ssh-tunnels/tunnel-argocd.sh
```

To retrieve the admin password:

```shell
kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d
```

From your local machine, SSH tunnel through bastion to the NLB:
```shell
# Get NLB DNS from Terraform output
cd infrastructure/terraform/prod
terraform output -raw nlb_dns_name

# SSH tunnel for HTTP (port 80 on NLB routes to NodePort 30080)
ssh -L 5173:<NLB_DNS>:80 -i ~/.ssh/kubestock-key ubuntu@<BASTION_IP>

# OR for HTTPS (port 443 on NLB routes to NodePort 30444)
ssh -L 5173:<NLB_DNS>:443 -i ~/.ssh/kubestock-key ubuntu@<BASTION_IP>

# Access staging frontend at http://localhost:5173 or https://localhost:5173
# All API requests will route through Kong Gateway to backend services
```

Or use the tunnel scripts:

```shell
# Edit infrastructure/scripts/ssh-tunnels/tunnel-staging-frontend.sh with your NLB DNS and bastion IP
./infrastructure/scripts/ssh-tunnels/tunnel-staging-frontend.sh        # For HTTP
./infrastructure/scripts/ssh-tunnels/tunnel-staging-frontend-https.sh  # For HTTPS
```

Forward frontend to localhost:
```shell
kubectl port-forward -n kubestock-staging svc/kubestock-frontend 8080:80
# Access at http://localhost:8080
```

Forward individual microservices:

```shell
# Product service
kubectl port-forward -n kubestock-staging svc/ms-product 3002:3002

# Inventory service
kubectl port-forward -n kubestock-staging svc/ms-inventory 3003:3003

# Supplier service
kubectl port-forward -n kubestock-staging svc/ms-supplier 3004:3004

# Order management service
kubectl port-forward -n kubestock-staging svc/ms-order-management 3005:3005

# Identity service
kubectl port-forward -n kubestock-staging svc/ms-identity 3006:3006
```

Test services from within the cluster:
```shell
# Test frontend
kubectl exec -n kubestock-staging deploy/kubestock-frontend -- curl -s http://localhost:80

# Test backend services
kubectl exec -n kubestock-staging deploy/ms-product -- curl -s http://localhost:3002/health
kubectl exec -n kubestock-staging deploy/ms-inventory -- curl -s http://localhost:3003/health
kubectl exec -n kubestock-staging deploy/ms-supplier -- curl -s http://localhost:3004/health
kubectl exec -n kubestock-staging deploy/ms-order-management -- curl -s http://localhost:3005/health
kubectl exec -n kubestock-staging deploy/ms-identity -- curl -s http://localhost:3006/health
```

| Environment | Auto-Sync | Prune | Self-Heal | Approval |
|---|---|---|---|---|
| Staging | ✅ On | ✅ On | ✅ On | Not required |
| Production | ❌ Off | ❌ Off | ❌ Off | Required |
Note: Staging has auto-sync enabled for rapid iteration. Production requires manual approval for safety.
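In the Application manifests, the table above comes down to the presence or absence of an automated `syncPolicy`. A sketch of the two variants (illustrative, not the repo's literal manifests):

```yaml
# Staging: auto-sync with prune and self-heal
syncPolicy:
  automated:
    prune: true
    selfHeal: true
---
# Production: no "automated" block, so every sync is a deliberate,
# approved action in the ArgoCD UI or CLI
syncPolicy: {}
```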
Before deploying applications, certain secrets must be created manually as they cannot be stored in Git:
The External Secrets Operator requires AWS credentials to fetch secrets from AWS Secrets Manager and generate ECR tokens. These credentials must be created in each application namespace:
```shell
# Get the access key from Terraform outputs or AWS IAM
# The IAM user is: kubestock-external-secrets

# Create secret in kubestock-staging namespace
kubectl create secret generic aws-external-secrets-creds \
  --namespace=kubestock-staging \
  --from-literal=access-key-id=<AWS_ACCESS_KEY_ID> \
  --from-literal=secret-access-key=<AWS_SECRET_ACCESS_KEY>

# Create secret in kubestock-production namespace
kubectl create secret generic aws-external-secrets-creds \
  --namespace=kubestock-production \
  --from-literal=access-key-id=<AWS_ACCESS_KEY_ID> \
  --from-literal=secret-access-key=<AWS_SECRET_ACCESS_KEY>

# Create secret in external-secrets namespace (for ClusterSecretStore)
kubectl create secret generic aws-external-secrets-creds \
  --namespace=external-secrets \
  --from-literal=access-key-id=<AWS_ACCESS_KEY_ID> \
  --from-literal=secret-access-key=<AWS_SECRET_ACCESS_KEY>
```

Required IAM permissions for the user:
- `secretsmanager:GetSecretValue`
- `secretsmanager:DescribeSecret`
- `ecr:GetAuthorizationToken`
- `ecr:BatchGetImage`
- `ecr:GetDownloadUrlForLayer`
- `ecr:BatchCheckLayerAvailability`
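The permissions above can be captured in a single IAM policy document. A minimal sketch; `"Resource": "*"` is a placeholder that should be scoped down to your secret and repository ARNs, and the attach command at the end is an example, not part of the repo's tooling:

```shell
# Write a policy document covering the required actions
cat > /tmp/external-secrets-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "secretsmanager:GetSecretValue",
        "secretsmanager:DescribeSecret",
        "ecr:GetAuthorizationToken",
        "ecr:BatchGetImage",
        "ecr:GetDownloadUrlForLayer",
        "ecr:BatchCheckLayerAvailability"
      ],
      "Resource": "*"
    }
  ]
}
EOF

# Attach to the IAM user, e.g.:
# aws iam put-user-policy --user-name kubestock-external-secrets \
#   --policy-name external-secrets-access \
#   --policy-document file:///tmp/external-secrets-policy.json
```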
When setting up a new cluster:
- Deploy External Secrets Operator to the `external-secrets` namespace
- Create `aws-external-secrets-creds` secrets in all three namespaces
- Deploy ArgoCD and configure projects
- Deploy applications - they will automatically:
  - Fetch secrets from AWS Secrets Manager
  - Generate ECR authentication tokens
  - Pull container images from ECR
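The sequence above can be sketched as shell commands. This is a rough outline under assumptions, not the repo's BOOTSTRAP.md: it uses the operator's upstream Helm chart and ArgoCD's stable install manifest, with no version pinning; substitute your pinned versions and real credentials:

```shell
# 1. External Secrets Operator (upstream chart; pin a version in practice)
helm repo add external-secrets https://charts.external-secrets.io
helm install external-secrets external-secrets/external-secrets \
  --namespace external-secrets --create-namespace

# 2. AWS credentials secret (repeat for kubestock-staging and kubestock-production)
kubectl create secret generic aws-external-secrets-creds \
  --namespace=external-secrets \
  --from-literal=access-key-id=<AWS_ACCESS_KEY_ID> \
  --from-literal=secret-access-key=<AWS_SECRET_ACCESS_KEY>

# 3. ArgoCD, then projects and applications from this repo
kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
kubectl apply -f argocd/projects/
kubectl apply -f apps/
```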
This repository follows GitOps principles:
- All Kubernetes manifests are version controlled
- ArgoCD syncs cluster state with this repository
- Changes are deployed via Pull Requests
- Automatic sync for staging, manual approval for production