This project implements a high-performance API Management (APIM) platform built on Kong Gateway, designed for massive scale with strong performance and security. The system is designed to handle approximately 3.3 billion API calls per day (an average of roughly 38,000 requests per second) while maintaining security and performance standards.
The platform consists of the following key components:
- Kong Gateway: Open-source API Gateway providing security, rate limiting, and caching
- Rust Backend: High-performance backend service (capable of ~200,000 RPS)
- Kubernetes: Container orchestration for scalable deployment
- Monitoring Stack: Prometheus and Grafana for observability
- Redis: In-memory data store backing rate-limit counters and response caching
Key features:
- Security: API key authentication and secure API access
- Performance: Optimized for high throughput (3.3B calls/day)
- Scalability: Kubernetes-based horizontal scaling
- Monitoring: Comprehensive observability with Prometheus and Grafana
- Rate Limiting: Configurable rate limiting for API protection
- Caching: Performance optimization through caching
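For example, Redis-backed rate limiting can be declared with a KongPlugin resource when running Kong with its Kubernetes Ingress Controller. This is a minimal sketch, not this repository's actual configuration: the resource name, limit, and Redis address are hypothetical, and the field names follow Kong's classic rate-limiting plugin schema, which may differ across Kong versions:

```yaml
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: api-rate-limit           # hypothetical name
plugin: rate-limiting
config:
  minute: 6000                   # requests allowed per consumer per minute (illustrative)
  policy: redis                  # share counters across gateway replicas via Redis
  redis_host: redis.default.svc.cluster.local   # assumed in-cluster Redis service
  redis_port: 6379
```

The plugin would then be attached to a Service or Ingress with a `konghq.com/plugins: api-rate-limit` annotation.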
The project supports two deployment scenarios:
Local development, located in kind-deployment/, uses:
- KinD (Kubernetes in Docker)
- OrbStack (Docker alternative)
- MetalLB for load balancing
- Local monitoring stack
AWS cloud deployment, located in eks-deployment/, uses:
- AWS EKS for Kubernetes management
- AWS ELB for load balancing
- EBS for persistent storage
- Cloud-based monitoring stack
The platform has been tested using the wrk tool, demonstrating:
- Rust backend: ~200,000 RPS (without Kong)
- Python backend: ~5,000 RPS (comparison)
- Full system: 3.3 billion API calls per day (design target; see the note below)
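As a back-of-envelope check on these numbers (the peak-to-average ratio and utilization target below are illustrative assumptions, not measurements from this project):

```python
import math

CALLS_PER_DAY = 3_300_000_000
SECONDS_PER_DAY = 86_400

avg_rps = CALLS_PER_DAY / SECONDS_PER_DAY
print(f"average load: {avg_rps:,.0f} RPS")        # ~38,194 RPS

# Assume a 3x peak-to-average traffic ratio and a 50% utilization target,
# with each Rust backend pod sustaining ~200,000 RPS in isolation.
peak_rps = avg_rps * 3
pods = math.ceil(peak_rps / (200_000 * 0.5))
print(f"peak load: {peak_rps:,.0f} RPS -> {pods} backend pod(s)")
```

Under these assumptions a couple of backend replicas already cover the raw throughput, which is why the note below focuses the tuning effort on the gateway layer rather than the backend.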
Note: To achieve the benchmark of 3.3 billion API calls per day, the system requires specific optimizations including:
- Custom node groups with dedicated taints for workload isolation
- Optimized Kong Gateway configuration with tuned worker processes
- Strategic pod placement using node selectors and tolerations
- Horizontal scaling of both Kong Gateway and backend services
- Keep-alive connection optimizations
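The pod-placement and tuning points above can be sketched as a fragment of a Kong Deployment spec. The label keys, taint values, and worker counts here are assumptions for illustration, not this repository's actual values:

```yaml
# Hypothetical fragment of a Kong Gateway Deployment spec
spec:
  template:
    spec:
      nodeSelector:
        workload: kong-gateway            # assumed label on the dedicated node group
      tolerations:
        - key: dedicated                  # assumed taint applied to the custom node group
          operator: Equal
          value: kong-gateway
          effect: NoSchedule
      containers:
        - name: proxy
          env:
            - name: KONG_NGINX_WORKER_PROCESSES        # tune toward the node's CPU count
              value: "8"
            - name: KONG_UPSTREAM_KEEPALIVE_POOL_SIZE  # reuse upstream connections
              value: "1024"
```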
Project structure:

```
.
├── kind-deployment/          # Local development setup
│   ├── backend/              # Rust backend service
│   ├── monitoring/           # Prometheus and Grafana setup
│   ├── k8s-config/           # Kubernetes configurations
│   ├── scripts/              # Deployment scripts
│   └── docs/                 # Documentation
│
└── eks-deployment/           # AWS cloud deployment
    ├── monitoring/           # Cloud monitoring setup
    ├── k8s-config/           # EKS configurations
    ├── scripts/              # Cloud deployment scripts
    └── eks-cluster-config.yaml
```
Prerequisites for local deployment:
- KinD
- OrbStack (or Docker)
- MetalLB
- kubectl
- Rust (for backend development)
Prerequisites for AWS deployment:
- AWS account
- eksctl
- AWS CLI
- kubectl
To get started with the local deployment:
1. Navigate to `kind-deployment/`
2. Run `./run_all_setup.sh` to set up the complete local environment
3. Access the Kong Gateway dashboard
4. Deploy and test your APIs
To get started with the AWS deployment:
1. Navigate to `eks-deployment/`
2. Configure AWS credentials
3. Deploy the EKS cluster using the provided configuration
4. Deploy the Kong Gateway and monitoring stack
5. Configure your APIs
The platform includes comprehensive monitoring through:
- Prometheus for metrics collection
- Grafana for visualization
- Custom dashboards for API performance monitoring
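Kong can expose gateway metrics to Prometheus through its bundled prometheus plugin. A minimal sketch using the Kong Ingress Controller's KongClusterPlugin CRD (the resource name is a convention, and the exact CRD fields may differ by controller release):

```yaml
apiVersion: configuration.konghq.com/v1
kind: KongClusterPlugin
metadata:
  name: prometheus
  annotations:
    kubernetes.io/ingress.class: kong
  labels:
    global: "true"               # apply to all services routed through Kong
plugin: prometheus
```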
- API Key Authentication
- Rate Limiting
- Secure API Routing
- TLS/SSL Support
- Access Control
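With the Kong Ingress Controller, API key authentication can be wired up declaratively. This is a hypothetical sketch: the consumer name, secret name, and key value are placeholders, and the credential-secret format varies across controller versions:

```yaml
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: api-key-auth
plugin: key-auth
---
apiVersion: configuration.konghq.com/v1
kind: KongConsumer
metadata:
  name: demo-consumer            # placeholder consumer
  annotations:
    kubernetes.io/ingress.class: kong
username: demo-consumer
credentials:
  - demo-consumer-key
---
apiVersion: v1
kind: Secret
metadata:
  name: demo-consumer-key
stringData:
  kongCredType: key-auth         # classic credential-secret format
  key: change-me-api-key         # placeholder; never commit real keys
```

The plugin is then attached to a route or service with a `konghq.com/plugins: api-key-auth` annotation.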
The platform is optimized for:
- High throughput (3.3B calls/day)
- Low latency
- Horizontal scaling
- Efficient caching
- Resource optimization
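One way to realize the caching point above is Kong's proxy-cache plugin, which serves repeated responses from the gateway without hitting the backend. A minimal sketch (resource name, TTL, and content types are illustrative, and the schema may vary by Kong version):

```yaml
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: response-cache           # hypothetical name
plugin: proxy-cache
config:
  strategy: memory               # cache in the gateway's memory
  cache_ttl: 30                  # seconds before a cached response expires
  content_type:
    - application/json
```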
For support, please open an issue in the GitHub repository or contact the maintainers.