minerva-hdn

Honeypot detection from TCP timing samples.

When you scan a network, some hosts are honeypots — systems designed to look like real services but actually log and report your activity. minerva-hdn helps identify them by analysing the statistical fingerprint of their TCP response times.

Real services have natural timing variation from CPU scheduling, network jitter, and application processing. Honeypot emulators (Cowrie, Dionaea, etc.) tend to produce timing patterns that betray their synthetic origin: unnaturally uniform responses, a slow initial handshake followed by fast replies, or a bimodal distribution that suggests scripted behaviour.

minerva-hdn takes a list of round-trip time measurements (in ms) for a single host:port and returns a verdict, a probability score, and the specific indicators ("tells") that drove the decision.

{
  "verdict": "likely_honeypot",
  "probability": 0.8,
  "tells": ["High timing variance (CV=1.29)", "Bimodal timing distribution"],
  "target": "10.0.0.1:443"
}

Verdicts: likely_real (p < 0.3), inconclusive (0.3–0.7), likely_honeypot (p > 0.7).
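The threshold mapping can be sketched in a few lines of Python. This is an illustration of the documented cut-offs, not minerva-hdn's actual code; boundary values (exactly 0.3 or 0.7) are treated as inconclusive here:

```python
def verdict_from_probability(p: float) -> str:
    """Map a honeypot probability to one of the three verdict labels.

    Thresholds follow the README: p < 0.3 -> likely_real,
    0.3-0.7 -> inconclusive, p > 0.7 -> likely_honeypot.
    """
    if p < 0.3:
        return "likely_real"
    if p > 0.7:
        return "likely_honeypot"
    return "inconclusive"
```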

Detection signals

| Signal | What it means |
| --- | --- |
| High coefficient of variation (CV > 0.4) | Timing variance is disproportionate to the mean — inconsistent with real OS network stack behaviour |
| Slow first response | First packet significantly slower than subsequent ones — suggests lazy initialisation in emulation code |
| Bimodal distribution | Two distinct timing clusters with a large gap — suggests two different code paths responding, typical of scripted emulators |

Scores from each signal are summed and capped at 1.0. Five or more samples are needed for the bimodal check to run.
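The scoring described above can be sketched as follows. This is an illustrative reimplementation, not minerva-hdn's actual code: the per-signal weight of 0.4, the "2x slower" first-response rule, and the gap-versus-range bimodality heuristic are all assumptions made for this sketch.

```python
import statistics

def honeypot_score(samples_ms: list[float]) -> tuple[float, list[str]]:
    """Score the three timing signals and collect the triggered tells.

    Returns (probability, tells). Weights of 0.4 per signal, summed
    and capped at 1.0, are assumptions of this sketch.
    """
    tells: list[str] = []
    score = 0.0
    mean = statistics.mean(samples_ms)
    # Signal 1: coefficient of variation disproportionate to the mean.
    cv = statistics.pstdev(samples_ms) / mean if mean else 0.0
    if cv > 0.4:
        tells.append(f"High timing variance (CV={cv:.2f})")
        score += 0.4
    # Signal 2: first response markedly slower than the rest.
    rest = samples_ms[1:]
    if rest and samples_ms[0] > 2 * statistics.mean(rest):
        tells.append("Slow first response")
        score += 0.4
    # Signal 3: bimodal check only runs with five or more samples.
    if len(samples_ms) >= 5:
        ordered = sorted(samples_ms)
        gaps = [b - a for a, b in zip(ordered, ordered[1:])]
        # One gap dominating the overall spread suggests two clusters.
        if max(gaps) > 0.5 * (ordered[-1] - ordered[0]) > 0:
            tells.append("Bimodal timing distribution")
            score += 0.4
    return min(score, 1.0), tells
```

With these assumed weights, the sample set from the example output above (`[12.3, 45.1, 11.9, 200.4, 13.2]`) trips the variance and bimodality signals.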


Install

pip install -e ".[dev]"          # dev (tests included)
pip install -e ".[server]"       # HTTP API
pip install -e ".[mcp]"          # MCP server
pip install -e ".[server,mcp]"   # everything

Usage

CLI

# Pipe JSON array
echo '[12.3, 45.1, 11.9, 200.4, 13.2]' | minerva-hdn analyze

# Explicit flags
minerva-hdn analyze --samples 12.3 45.1 11.9 200.4 13.2
minerva-hdn analyze --samples 12.3 45.1 11.9 --target 10.0.0.1:443

Output is always JSON. Exit codes: 0 = success, 1 = error. If fewer than 5 samples are supplied, a warning is printed to stderr and the bimodal check is skipped.

HTTP API

minerva-serve --host 0.0.0.0 --port 8080
POST /analyze
{"samples": [...], "host": "10.0.0.1", "port": 443}
→ {"verdict": "...", "probability": 0.82, "tells": [...]}

POST /analyze/batch
{"targets": [{"host": "...", "port": 443, "samples": [...]}, ...]}
→ {"assessments": [{"target_host": "...", "target_port": 443, "verdict": "...", ...}]}

GET /health
→ {"status": "ok"}
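A minimal Python client for the single-target endpoint could look like this. The request shape follows the /analyze contract documented above; a minerva-serve instance listening on localhost:8080 is assumed:

```python
import json
import urllib.request

def build_request(samples_ms, host, port, api="http://localhost:8080"):
    """Build the POST /analyze request for a running minerva-serve."""
    body = json.dumps({"samples": samples_ms, "host": host, "port": port})
    return urllib.request.Request(
        f"{api}/analyze",
        data=body.encode(),
        headers={"Content-Type": "application/json"},
    )

def analyze(samples_ms, host, port):
    """Send the request and decode the verdict JSON."""
    with urllib.request.urlopen(build_request(samples_ms, host, port)) as resp:
        return json.load(resp)
```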

MCP server

minerva-mcp                                  # stdio — spawned by MCP client
minerva-mcp --transport sse --port 8080      # HTTP/SSE — Docker service

Exposes one tool: analyze_honeypot(samples_ms, target_host, target_port).
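To spawn the stdio server from an MCP client, a configuration along these lines is typical. The `mcpServers` key follows the convention used by clients such as Claude Desktop; consult your client's documentation for the exact schema:

```json
{
  "mcpServers": {
    "minerva-hdn": {
      "command": "minerva-mcp"
    }
  }
}
```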


Manual testing

1. minerva-hdn in isolation

# CLI
minerva-hdn analyze --samples 12.3 45.1 11.9 200.4 13.2

# HTTP server
minerva-serve --host 0.0.0.0 --port 8080 &

# Single target
curl -s -X POST localhost:8080/analyze \
  -H 'Content-Type: application/json' \
  -d '{"samples": [12.3, 45.1, 200.4, 13.2, 11.9], "host": "10.0.0.1", "port": 443}' | jq .

# Batch (what the orchestrator calls)
curl -s -X POST localhost:8080/analyze/batch \
  -H 'Content-Type: application/json' \
  -d '{"targets": [{"host":"10.0.0.1","port":443,"samples":[12.3,45.1,200.4]},{"host":"10.0.0.2","port":22,"samples":[1.1,1.2,1.1]}]}' | jq .

# Health check
curl -s localhost:8080/health

kill %1

2. Unit tests (no infra needed)

pytest -v

Design

minerva-hdn is stateless — like nmap, it takes inputs and returns results. No database, no message queue. Persistence is entirely the caller's responsibility.

This makes it easy to embed as a library, call over HTTP, or drive from an AI agent via MCP — without pulling in infrastructure dependencies.

Development

pytest --cov=src       # tests with coverage

Context

Minerva-HDN is the open-source honeypot detection component of the NullRabbit platform — autonomous defence for validator infrastructure and decentralised networks.

Minerva consumes timing samples produced by Limpet, NullRabbit's eBPF/XDP network scanner. Together they form the scanning layer: Limpet discovers open ports with nanosecond RTT precision, Minerva determines whether those services are real or synthetic.

The analysis results feed into NullRabbit's proprietary threat analysis and behavioural baseline systems. For the governance framework behind autonomous defensive action, see:

"On Earned Autonomy: Delegating Network-Lethal Authority to Machines", Simon Morley, NullRabbit Labs, January 2026. DOI: 10.5281/zenodo.18406828
