AI-Driven Security Mesh for Modern Microservices
SentientGate is a distributed security platform that sits in front of microservices, observes traffic in real time, detects suspicious behavior, and takes temporary enforcement actions before attacks spread.
Most API security setups fail in one of two ways:
- They are static and rule-only, so they miss evolving attacks.
- They are powerful but expensive, slow, and hard to run privately.
SentientGate is built to close that gap. It combines fast gateway enforcement with event-driven analysis, historical context, and local AI inference to make security decisions that are both fast and adaptive.
Instead of blocking permanently, it applies TTL-based temporary blocks, learns from observed behavior, and keeps services available under load.
SentientGate is a microservice security fabric with these core capabilities:
- Real-time request filtering at the gateway edge
- Event-driven threat analysis with Kafka
- Behavioral history analysis via gRPC
- Layered detection with strategy-based scoring
- Dynamic temporary blocking via Redis TTL
- Optional local LLM anomaly checks using Ollama
- Operational visibility through a React dashboard
Request journey:
- A client request enters `ApiGateway`.
- Gateway filters validate request context and check Redis blacklist state.
- Security events are published to Kafka for asynchronous analysis.
- `MCPService` consumes events and fetches recent user/IP behavior from `LogingService` through gRPC.
- MCP applies layered strategies:
  - `PatternMatchStrategy` for signature-like payload threats
  - `BurstTrafficStrategy` for abusive traffic patterns
  - `AiAnomalyStrategy` for behavioral anomalies
- If the risk score crosses the threshold, MCP writes a TTL block record to Redis.
- Subsequent malicious requests are denied quickly at the gateway.
- Logs and decision outcomes are visible to operators in the UI.
This keeps the hot request path fast while moving deeper intelligence to async services.
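The TTL block mechanism can be sketched as follows. This is a minimal in-memory stand-in for illustration only: the real gateway checks keys in Redis, which provides the expiry natively via `SETEX`/`EXPIRE`, and the class and method names here (`TtlBlockStore`, `block`, `isBlocked`) are hypothetical, not taken from the project.

```java
import java.time.Duration;
import java.time.Instant;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// In-memory stand-in for the Redis TTL block store (hypothetical names;
// the real services use Redis keys with native expiry).
class TtlBlockStore {
    private final Map<String, Instant> blockedUntil = new ConcurrentHashMap<>();

    // MCP-side: record a temporary block for a client key (e.g. an IP).
    void block(String clientKey, Duration ttl) {
        blockedUntil.put(clientKey, Instant.now().plus(ttl));
    }

    // Gateway-side hot path: O(1) lookup; expired entries are lazily evicted,
    // so a block "heals" on its own without an unban workflow.
    boolean isBlocked(String clientKey) {
        Instant until = blockedUntil.get(clientKey);
        if (until == null) return false;
        if (Instant.now().isAfter(until)) {
            blockedUntil.remove(clientKey); // TTL elapsed: block auto-expires
            return false;
        }
        return true;
    }
}

public class TtlBlockDemo {
    public static void main(String[] args) throws InterruptedException {
        TtlBlockStore store = new TtlBlockStore();
        store.block("10.0.0.7", Duration.ofMillis(200));
        System.out.println(store.isBlocked("10.0.0.7")); // true while TTL active
        Thread.sleep(300);
        System.out.println(store.isBlocked("10.0.0.7")); // false after expiry
    }
}
```

The same check works whether the verdict came from a simple rate heuristic or the AI path; the gateway only ever sees "blocked until T".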
SentientGate targets practical impact in production environments:
- Financial services: reduces fraud and bot abuse blast radius with fast temporary bans
- E-commerce: protects checkout and login surfaces during traffic spikes and bot storms
- SaaS platforms: provides centralized protection for many internal services behind one gateway
- Regulated industries: enables privacy-first AI analysis using local models (no external LLM dependency)
- Platform engineering teams: improves resilience by decoupling detection, storage, and enforcement
Business-level outcomes:
- Lower incident response time
- Fewer successful automated attacks
- Better uptime during abusive traffic windows
- Stronger auditability of security decisions
High-level and sequence diagrams are available in the `Architectures/` directory of the repository.
| Service | Purpose | Default Port |
|---|---|---|
| `ApiGateway` | Entry point, filtering, rate limiting, Redis enforcement | 8079 |
| `MCPService` | Security brain, strategy analysis, enforcement decisions | 9991 |
| `AIService` | Local LLM-based anomaly analysis via Ollama | 8082 |
| `LogingService` | Log persistence, gRPC behavior history, dashboard data | 8010 |
| `EurekaServer` | Service discovery registry | 8761 |
| `DummyService` | Protected downstream test service | 8090 |
| `sentinel-ui` | Monitoring dashboard | 5173 |
| Layer | Tech |
|---|---|
| Language/Runtime | Java 21 |
| Frameworks | Spring Boot, Spring Cloud Gateway, Spring WebFlux |
| Messaging | Apache Kafka |
| Caching/Enforcement | Redis |
| Persistence | PostgreSQL |
| Service Discovery | Netflix Eureka |
| Inter-service RPC | gRPC |
| AI Inference | Ollama (gemma3:latest configured in AIService) |
| Frontend | React, Vite, Tailwind CSS |
| Containerization | Docker, Docker Compose |
| Build Tools | Maven and Gradle |
- Detection is decoupled from enforcement, so analysis can evolve without slowing the gateway.
- AI is local and optional, so teams keep data control and reduce vendor/API dependency.
- TTL blocks reduce false-positive damage compared with permanent bans.
- Strategy pattern allows easy extension for new threat heuristics.
- Event-driven architecture supports high throughput and horizontal scaling.
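The strategy extension point can be sketched like this. The interface, record, and class names below are simplified hypotheticals; the project's actual `PatternMatchStrategy`, `BurstTrafficStrategy`, and `AiAnomalyStrategy` are richer, but the composition pattern (independent scorers combined against one threshold) is the same idea.

```java
import java.util.List;

// Each detection strategy maps a request event to a risk score in [0, 1].
interface ThreatStrategy {
    double score(RequestEvent event); // 0.0 = benign, 1.0 = certain threat
}

// Simplified event shape (hypothetical; the real event carries more context).
record RequestEvent(String payload, int requestsLastMinute) {}

class PatternMatch implements ThreatStrategy {
    public double score(RequestEvent e) {
        // Signature-like check: flag an obvious injection marker in the payload.
        return e.payload().toLowerCase().contains("' or 1=1") ? 0.9 : 0.0;
    }
}

class BurstTraffic implements ThreatStrategy {
    public double score(RequestEvent e) {
        // Abusive-traffic heuristic: risk grows with request rate above a floor.
        return Math.min(1.0, Math.max(0, e.requestsLastMinute() - 60) / 240.0);
    }
}

class ThreatScorer {
    private static final double BLOCK_THRESHOLD = 0.7;
    private final List<ThreatStrategy> strategies;

    ThreatScorer(List<ThreatStrategy> strategies) {
        this.strategies = strategies;
    }

    // Layered scoring: any single strategy crossing the threshold triggers a block.
    boolean shouldBlock(RequestEvent e) {
        return strategies.stream()
                .mapToDouble(s -> s.score(e))
                .max().orElse(0.0) >= BLOCK_THRESHOLD;
    }
}

public class ScoringDemo {
    public static void main(String[] args) {
        ThreatScorer scorer = new ThreatScorer(
                List.of(new PatternMatch(), new BurstTraffic()));
        System.out.println(scorer.shouldBlock(new RequestEvent("id=1' OR 1=1 --", 5))); // true
        System.out.println(scorer.shouldBlock(new RequestEvent("hello", 5)));           // false
    }
}
```

Adding a new heuristic means implementing one interface and registering it in the list; the gateway and the threshold logic are untouched, which is the extensibility claim above.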
Prerequisites:
- Docker and Docker Compose
- Ollama running locally (for `AIService`), default endpoint: `http://localhost:11434`
- Optional model pull: `ollama pull gemma3:latest`
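For context, `AIService` presumably talks to Ollama over its standard HTTP API. A non-streaming generation request against the default endpoint above has this shape (the prompt text is illustrative, not taken from the project):

```json
{
  "model": "gemma3:latest",
  "prompt": "Classify this request log line as NORMAL or ANOMALOUS: ...",
  "stream": false
}
```

This body is POSTed to `http://localhost:11434/api/generate`; Ollama replies with a JSON object whose `response` field contains the model output.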
Start all services:
```bash
docker compose up -d
```

Stop all services:

```bash
docker compose down
```

Each service is independently buildable:

- Maven services: `ApiGateway`, `AIService`
- Gradle services: `MCPService`, `LogingService`, `EurekaServer`, `DummyService`
Typical local order:
- Start infrastructure: PostgreSQL, Redis, Kafka, Eureka
- Start `LogingService` and `MCPService`
- Start `AIService`
- Start `DummyService`
- Start `ApiGateway`
- Start the UI from `UI/sentinel-gateway-ui`
Run multi-service tests:
```bash
./run_tests.sh
```

Run gateway tests separately:

```bash
cd ApiGateway
./mvnw test
```

Project layout:

```
SentientGate/
├── ApiGateway/
├── MCPService/
├── AIService/
├── LogingService/
├── EurekaServer/
├── DummyService/
├── UI/sentinel-gateway-ui/
├── Architectures/
├── docker-compose.yml
├── run_tests.sh
└── README.md
```
Additional documentation:

- `CURRENT_FLAWS_AND_VULNERABILITIES.md`
- `ARCHITECTURAL_DESIGN_FLAWS.md`
- `ARCHITECTURAL_SOLUTIONS.md`
- `IMPROVEMENT_AND_HARDENING_GUIDE.md`
- `FUTURE_README.md`
- `SECURITY.md`
Apache 2.0. See LICENSE.

