# Real-Time Distributed Microservices Platform

A high-performance, event-driven microservices system designed for real-time bidding and automated inventory management. Built with Java 17 and Spring Boot 3, the platform leverages reactive programming (WebFlux) to handle high-concurrency bidding traffic with low latency. It uses Apache Kafka for asynchronous, decoupled communication between services, ensuring eventual consistency across distributed PostgreSQL and MongoDB databases. The architecture is fully containerized with Docker and orchestrated with Spring Cloud components (Eureka, Gateway) for scalability and fault tolerance.
Key Technologies: Java 17, Spring Boot 3, Spring Cloud, WebFlux, Apache Kafka, MongoDB, PostgreSQL, Redis, Docker, Microservices.
- Designed scalable event-driven microservices architecture using Spring Boot and Spring Cloud.
- Implemented high-throughput, low-latency bidding system with Reactive Spring WebFlux.
- Orchestrated asynchronous inter-service communication using Apache Kafka messaging.
- Ensured data consistency across distributed PostgreSQL and MongoDB databases.
- Built automated inventory management system with real-time stock synchronization.
- Developed centralized API Gateway for secure routing and load balancing.
- Integrated Service Discovery (Eureka) for dynamic scaling and high availability.
- Deployed containerized microservices ecosystem using Docker and Docker Compose.
- Utilized Redis caching to optimize read performance and reduce latency.
- Engineered fault-tolerant system handling distributed transactions and failure recovery.
The project follows a Distributed Microservices Architecture where each domain (Bidding, Inventory, Orders) is isolated in its own service, possessing its own database to ensure loose coupling.
- API Gateway: The single entry point for all client requests, handling routing and, optionally, authentication and rate limiting.
- Discovery Server (Eureka): Acts as a service registry where all microservices register themselves, allowing dynamic discovery and load balancing.
- Bidding Service (Reactive): The core high-traffic service. It uses Spring WebFlux (non-blocking I/O) and MongoDB (high write throughput) to handle a massive influx of bids in real-time.
- Message Broker (Kafka): Acts as the central nervous system. When a critical action occurs (like a successful bid), an event is published here, decoupling the producer from consumers.
- Inventory Service: Manages product stock in PostgreSQL. It consumes events to atomically update inventory levels.
- Order Service: Manages the lifecycle of orders in PostgreSQL. It creates immutable order records based on successful transactions.
- Notification Service: Listens for system events to trigger real-time alerts to users (via WebSockets/Email).
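The Discovery Server's role can be reduced to its essence: services register instances under a logical name, and clients resolve that name to a concrete address. A JDK-only sketch (not Eureka's actual API) with simple round-robin client-side load balancing:

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.atomic.AtomicInteger;

// Minimal service registry: register(name, address) on startup, resolve(name) per call.
// Eureka adds heartbeats, instance eviction, and replication on top of this idea.
public class ServiceRegistry {
    private final Map<String, List<String>> instances = new ConcurrentHashMap<>();
    private final AtomicInteger counter = new AtomicInteger();

    public void register(String serviceName, String address) {
        instances.computeIfAbsent(serviceName, k -> new CopyOnWriteArrayList<>()).add(address);
    }

    /** Round-robin resolution: the simplest client-side load-balancing strategy. */
    public String resolve(String serviceName) {
        List<String> addrs = instances.get(serviceName);
        if (addrs == null || addrs.isEmpty()) {
            throw new IllegalStateException("no instances registered for: " + serviceName);
        }
        return addrs.get(Math.floorMod(counter.getAndIncrement(), addrs.size()));
    }
}
```

Registering two `bidding-service` instances and resolving three times alternates between them, which is why new replicas become reachable without any gateway reconfiguration.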
The core workflow demonstrates the Saga Pattern (choreography approach) to handle distributed transactions without a central coordinator.
**1. Bid Placement (User -> Bidding Service)**
- The user submits a bid via the API Gateway.
- The Bidding Service receives the request. Since it uses WebFlux, it handles the request in a non-blocking manner, allowing it to serve thousands of concurrent users.
- The bid is persisted to MongoDB, optimized for high-speed writes.
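The non-blocking handling in this step can be sketched without Spring: in the real service WebFlux returns a `Mono<Bid>`, but here the JDK's `CompletableFuture` stands in for `Mono` so the example needs no dependencies. The `placeBid` signature and bid-id format are illustrative, not the project's actual API:

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CopyOnWriteArrayList;

// Sketch of the bid-placement path: the caller's thread submits the persistence
// work and returns immediately; the future completes when the write finishes.
public class BiddingHandler {
    private final List<String> store = new CopyOnWriteArrayList<>(); // stand-in for MongoDB

    public CompletableFuture<String> placeBid(String productId, double amount) {
        return CompletableFuture.supplyAsync(() -> {
            String bidId = productId + ":" + amount; // hypothetical bid id
            store.add(bidId);                        // persist (MongoDB in the real system)
            return bidId;                            // completes the "response" to the client
        });
    }

    public int savedBids() { return store.size(); }
}
```

The point is the shape: `placeBid` returns immediately and never blocks the request thread, which is what lets one small thread pool serve thousands of concurrent bidders.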
**2. Event Publication (Bidding Service -> Kafka)**
- Upon successfully saving the bid, the Bidding Service acts as a Producer.
- It constructs a `SaleConfirmedEvent` containing the `productId`, `userId`, `price`, and `orderNumber`.
- This event is published to the `sale-confirmed` Kafka topic. This happens asynchronously, so the user gets an immediate response without waiting for inventory/order processing.
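The event payload named above can be modeled as a plain Java record (a sketch with the field names from the text; the project's actual class may differ):

```java
// Immutable payload published to the `sale-confirmed` Kafka topic after a bid is
// persisted. A record generates the constructor, accessors, equals/hashCode, and
// toString, which is exactly what an event carrier needs and nothing more.
public record SaleConfirmedEvent(String productId, String userId, double price, String orderNumber) {}
```

Immutability matters here: once published, the same event is consumed by three independent services, so none of them can observe a half-modified payload.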
**3. Event Consumption & Processing (Kafka -> Consumers)**
- Inventory Service (Consumer): Listens to the `sale-confirmed` topic. It finds the product in its PostgreSQL database and decrements the stock. If stock is zero, it can trigger a compensation event (e.g., `BidFailedEvent`) to roll back.
- Order Service (Consumer): Listens to the same topic. It creates a new Order record in its own PostgreSQL database, serving as the immutable proof of sale.
- Notification Service (Consumer): Listens to the topic and triggers a notification to the user confirming their purchase.
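The Inventory Service's check-and-decrement with compensation can be sketched JDK-only (a `ConcurrentHashMap` stands in for the PostgreSQL table, and the compensation event is represented as a string; the real consumer would publish a `BidFailedEvent` back to Kafka):

```java
import java.util.Map;
import java.util.Optional;
import java.util.concurrent.ConcurrentHashMap;

// Consumer-side logic for a sale-confirmed event: atomically check stock and
// decrement it, or signal a compensation event when the product is sold out.
public class InventoryConsumer {
    private final Map<String, Integer> stock = new ConcurrentHashMap<>(); // stand-in for PostgreSQL

    public InventoryConsumer(Map<String, Integer> initialStock) { stock.putAll(initialStock); }

    /** Returns the compensation event to publish, or empty on success. */
    public Optional<String> onSaleConfirmed(String productId) {
        boolean[] decremented = {false};
        // compute() gives an atomic check-and-decrement on the map entry
        stock.compute(productId, (id, qty) -> {
            if (qty != null && qty > 0) { decremented[0] = true; return qty - 1; }
            return qty; // unknown product or out of stock: leave unchanged
        });
        return decremented[0] ? Optional.empty() : Optional.of("BidFailedEvent");
    }

    public int remaining(String productId) { return stock.getOrDefault(productId, 0); }
}
```

In the real service the atomicity comes from a PostgreSQL transaction (e.g., `UPDATE ... SET stock = stock - 1 WHERE stock > 0`) rather than a map operation, but the success/compensate branching is the same.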
```mermaid
graph TD
%% Nodes
User([User])
Gateway[API Gateway]
Eureka[Discovery Server]
subgraph "Reactive Zone"
Bidding[Bidding Service]
Mongo[(MongoDB)]
end
Kafka{Apache Kafka}
subgraph "Transactional Zone"
Inventory[Inventory Service]
PostgresInv[(Postgres Inventory)]
Order[Order Service]
PostgresOrd[(Postgres Order)]
end
Notif[Notification Service]
%% Flows
User -->|1. POST /bids| Gateway
Gateway -->|2. Route| Bidding
Bidding -.->|Register| Eureka
Bidding -->|3. Save Bid| Mongo
Bidding -->|4. Publish SaleConfirmedEvent| Kafka
Kafka ==>|5. Consume Event| Inventory
Inventory -->|6. Update Stock| PostgresInv
Kafka ==>|5. Consume Event| Order
Order -->|7. Create Order| PostgresOrd
Kafka ==>|5. Consume Event| Notif
Notif -.->|8. Notify User| User
classDef database fill:#e1f5fe,stroke:#01579b,stroke-width:2px;
classDef service fill:#fff3e0,stroke:#e65100,stroke-width:2px;
classDef broker fill:#f3e5f5,stroke:#4a148c,stroke-width:2px;
class Mongo,PostgresInv,PostgresOrd database;
class Bidding,Inventory,Order,Notif,Gateway,Eureka service;
class Kafka broker;
```
- **Why Kafka?** To decouple the high-speed bidding path (MongoDB/WebFlux) from the transactional inventory/order processing (Postgres/JPA). If the Inventory Service goes down, bids can still be accepted and processed later.
- **Why WebFlux?** Traditional blocking servlet stacks (Tomcat) struggle with high concurrency (e.g., 10k concurrent bids). WebFlux (Netty) uses an event-loop model to handle this efficiently with far fewer threads.
- **Why Database per Service?** To keep services independent: scaling the Bidding service/DB doesn't impact the Inventory DB.
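The decoupling argument can be illustrated with an in-memory stand-in for a Kafka topic: each subscriber reacts independently, and one failing consumer does not abort the others. (Real Kafka goes further by buffering events durably, so a down consumer can catch up later; this sketch only shows the fan-out and isolation.)

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.function.Consumer;

// Choreography in miniature: a "topic" is a list of subscribers; publish() fans an
// event out to all of them, isolating each consumer's failures from the rest.
public class Topic<E> {
    private final List<Consumer<E>> subscribers = new CopyOnWriteArrayList<>();

    public void subscribe(Consumer<E> consumer) { subscribers.add(consumer); }

    public void publish(E event) {
        for (Consumer<E> consumer : subscribers) {
            try {
                consumer.accept(event);   // each consumer reacts independently
            } catch (RuntimeException ex) {
                // a failing consumer does not block the others
                // (Kafka would retry or let the consumer resume from its offset)
            }
        }
    }
}
```

Subscribing the inventory, order, and notification handlers to one `Topic<SaleConfirmedEvent>` and publishing a single event drives all three downstream updates, which is the choreography Saga in its simplest form.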