A Cloud-Native Architecture for High-Volume API Management Using Kubernetes and Kong Gateway

This project implements a high-performance API Management (APIM) platform built on Kong Gateway and designed for very high traffic volumes. The system is sized to handle approximately 3.3 billion API calls per day (an average of roughly 38,000 requests per second) while maintaining security and performance standards.
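That daily total corresponds to a fairly modest steady-state rate; a quick back-of-envelope check:

```shell
# 3.3 billion calls spread evenly over the 86,400 seconds in a day
awk 'BEGIN { printf "average load: %d requests/second\n", 3300000000 / 86400 }'
```

Real traffic peaks well above the average, which is why the backend is benchmarked far beyond this steady-state figure.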

Architecture Overview

The platform consists of the following key components:

  • Kong Gateway: Open-source API Gateway providing security, rate limiting, and caching
  • Rust Backend: High-performance backend service (capable of ~200,000 RPS)
  • Kubernetes: Container orchestration for scalable deployment
  • Monitoring Stack: Prometheus and Grafana for observability
  • Redis: In-memory data store backing rate-limit counters and cached responses

Key Features

  • Security: API key authentication and access control
  • Performance: Optimized for high throughput (3.3B calls/day)
  • Scalability: Kubernetes-based horizontal scaling
  • Monitoring: Comprehensive observability with Prometheus and Grafana
  • Rate Limiting: Configurable rate limiting for API protection
  • Caching: Performance optimization through caching
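Rate limiting and caching are enabled as Kong plugins. A minimal sketch via the Kong Admin API, assuming the Admin API is reachable on localhost:8001 and a service named `backend` has already been registered (both names are illustrative, and the Redis-related config keys vary between Kong versions):

```shell
# Enable rate limiting on the hypothetical "backend" service
# (100 requests/minute, counters kept in Redis)
curl -s -X POST http://localhost:8001/services/backend/plugins \
  --data name=rate-limiting \
  --data config.minute=100 \
  --data config.policy=redis \
  --data config.redis_host=redis

# Enable response caching for the same service
curl -s -X POST http://localhost:8001/services/backend/plugins \
  --data name=proxy-cache \
  --data config.strategy=memory
```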

Deployment Options

The project supports two deployment scenarios:

1. Local Development (Kind Deployment)

Located in kind-deployment/, this setup uses:

  • KinD (Kubernetes in Docker)
  • OrbStack (Docker alternative)
  • MetalLB for load balancing
  • Local monitoring stack
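The scripted setup under kind-deployment/ automates all of this, but the core bootstrap looks roughly like the following (the cluster name is illustrative):

```shell
# Create a local Kubernetes cluster running inside Docker/OrbStack
kind create cluster --name kong-apim

# kind registers a kubectl context named kind-<cluster-name>;
# verify the cluster is up before installing MetalLB and Kong
kubectl cluster-info --context kind-kong-apim
kubectl get nodes
```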

2. Cloud Deployment (AWS EKS)

Located in eks-deployment/, this setup uses:

  • AWS EKS for Kubernetes management
  • AWS ELB for load balancing
  • EBS for persistent storage
  • Cloud-based monitoring stack

Performance Testing

The platform has been tested using the wrk tool, demonstrating:

  • Rust backend: ~200,000 RPS (without Kong)
  • Python backend: ~5,000 RPS (comparison)
  • Full system: 3.3 billion API calls per day
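The exact wrk invocations behind these numbers are not recorded in this README; a typical run against the gateway looks like this (thread count, connection count, duration, and target URL are all illustrative):

```shell
# 8 threads, 256 open keep-alive connections, 60-second run,
# with latency percentiles reported at the end
wrk -t8 -c256 -d60s --latency http://<gateway-lb-ip>/api
```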

Note: To achieve the benchmark of 3.3 billion API calls per day, the system requires specific optimizations including:

  • Custom node groups with dedicated taints for workload isolation
  • Optimized Kong Gateway configuration with tuned worker processes
  • Strategic pod placement using node selectors and tolerations
  • Horizontal scaling of both Kong Gateway and backend services
  • Keep-alive connection optimizations
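The workload-isolation step can be sketched as a taint plus a matching label (node, label, and taint names are illustrative):

```shell
# Reserve nodes for the gateway: taint them so ordinary pods are repelled,
# and label them so Kong can target them with a nodeSelector.
kubectl taint nodes <node-name> dedicated=kong:NoSchedule
kubectl label nodes <node-name> workload=kong

# The Kong Deployment then opts in, under spec.template.spec:
#   nodeSelector: { workload: kong }
#   tolerations:
#     - key: dedicated
#       operator: Equal
#       value: kong
#       effect: NoSchedule
```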

Directory Structure

.
├── kind-deployment/          # Local development setup
│   ├── backend/             # Rust backend service
│   ├── monitoring/          # Prometheus and Grafana setup
│   ├── k8s-config/          # Kubernetes configurations
│   ├── scripts/             # Deployment scripts
│   └── docs/                # Documentation
│
└── eks-deployment/          # AWS cloud deployment
    ├── monitoring/          # Cloud monitoring setup
    ├── k8s-config/          # EKS configurations
    ├── scripts/             # Cloud deployment scripts
    └── eks-cluster-config.yaml

Prerequisites

Local Development

  • KinD
  • OrbStack (or Docker)
  • MetalLB
  • kubectl
  • Rust (for backend development)

Cloud Deployment

  • AWS account
  • eksctl
  • AWS CLI
  • kubectl

Getting Started

Local Development

  1. Navigate to kind-deployment/
  2. Run ./run_all_setup.sh to set up the complete local environment
  3. Access the Kong Gateway dashboard
  4. Deploy and test your APIs
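In shell form (the script name is taken from this repository; what it installs is described in its docs):

```shell
cd kind-deployment/
./run_all_setup.sh                  # sets up the complete local environment

# Sanity checks once the script completes
kubectl get pods -A                 # all pods should reach Running/Completed
kubectl get svc -A | grep -i kong   # note the LoadBalancer IP assigned by MetalLB
```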

Cloud Deployment

  1. Navigate to eks-deployment/
  2. Configure AWS credentials
  3. Deploy the EKS cluster using the provided configuration
  4. Deploy the Kong Gateway and monitoring stack
  5. Configure your APIs
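The same steps in shell form; the repository ships scripts/ for deployment, but as a sketch the manifests can also be applied directly (whether they apply cleanly without the scripts is an assumption):

```shell
cd eks-deployment/

aws configure                                      # supply AWS credentials
eksctl create cluster -f eks-cluster-config.yaml   # provision the EKS cluster

# Deploy the Kong Gateway and monitoring stack
kubectl apply -f k8s-config/
kubectl apply -f monitoring/
```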

Monitoring

The platform includes comprehensive monitoring through:

  • Prometheus for metrics collection
  • Grafana for visualization
  • Custom dashboards for API performance monitoring
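Both UIs are typically reached by port-forwarding; the service names and namespace below are assumptions, so check the manifests under monitoring/ for the actual values:

```shell
# Forward Grafana (default port 3000) and Prometheus (default port 9090)
kubectl port-forward -n monitoring svc/grafana 3000:3000 &
kubectl port-forward -n monitoring svc/prometheus 9090:9090 &

# Grafana:    http://localhost:3000
# Prometheus: http://localhost:9090
```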

Security Features

  • API Key Authentication
  • Rate Limiting
  • Secure API Routing
  • TLS/SSL Support
  • Access Control
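API key authentication can be wired up through the Kong Admin API. A sketch, assuming the Admin API on localhost:8001 and an already-registered service named `backend` (service, consumer, and key names are illustrative):

```shell
# Require an API key on the "backend" service
curl -s -X POST http://localhost:8001/services/backend/plugins \
  --data name=key-auth

# Create a consumer and issue it a key
curl -s -X POST http://localhost:8001/consumers --data username=demo-user
curl -s -X POST http://localhost:8001/consumers/demo-user/key-auth \
  --data key=demo-secret-key

# Requests through the proxy must now present the key
curl -s http://<gateway-lb-ip>/api -H 'apikey: demo-secret-key'
```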

Performance Optimization

The platform is optimized for:

  • High throughput (3.3B calls/day)
  • Low latency
  • Horizontal scaling
  • Efficient caching
  • Resource optimization
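The Kong-side tuning mentioned above usually comes down to a few environment variables on the Kong container; KONG_* variables map to kong.conf properties, and the values below are illustrative:

```shell
# One nginx worker per CPU core on the node
export KONG_NGINX_WORKER_PROCESSES=auto

# Reuse upstream (Kong -> backend) keep-alive connections
export KONG_UPSTREAM_KEEPALIVE_POOL_SIZE=512
```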

Support

For support, please open an issue in the GitHub repository or contact the maintainers.

