HUSH-MESH is an AWS-first maritime threat detection and convoy protection system leveraging machine learning, edge computing, and real-time path planning. The system deploys autonomous drone networks for defensive surveillance and threat assessment in maritime environments.
Key Capabilities: Real-time threat detection • Multi-sensor fusion • Edge ML inference • Self-healing mesh networks • Dynamic route optimization
Ensure you have Python 3.9+ installed:

```bash
python3 --version
```

Install required dependencies:

```bash
pip install flask flask-cors fastapi uvicorn paho-mqtt torch numpy
```

Stop any previously running servers:

```bash
pkill -f "python.*server" && pkill -f "python.*app" && pkill -f "http.server"
```

Start the ML API server:

```bash
cd ml
export PYTHONPATH=/home/participant/.local/lib/python3.11/site-packages:$PYTHONPATH
python3 ml_api_server.py > /tmp/ml_server.log 2>&1 &
```

Start the demo server:

```bash
cd demo
python3 -m http.server 8081 > /tmp/demo_server.log 2>&1 &
```

Check that all servers are running:
```bash
# Check ML API Server
curl http://localhost:9000/health

# Check Demo Server
curl http://localhost:8081/

# View server logs
tail -f /tmp/ml_server.log
tail -f /tmp/demo_server.log
```

- Enhanced Multi-Route Demo: http://localhost:8081/enhanced_multi_route.html
- ML Test Interface: http://localhost:8081/test_ml.html
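When scripting against these endpoints (for example, right after restarting the servers), it helps to wait until they actually answer rather than assuming they are up. A minimal stdlib-only readiness poll might look like this; the function name `wait_for` is just illustrative:

```python
import time
import urllib.request

def wait_for(url, attempts=10, delay=1.0):
    """Poll an endpoint until it responds, e.g. after restarting servers.

    Returns True once the URL answers with HTTP 200, False if it never
    comes up within the given number of attempts.
    """
    for _ in range(attempts):
        try:
            with urllib.request.urlopen(url, timeout=2) as resp:
                return resp.status == 200
        except OSError:
            # Connection refused / timeout: server not ready yet.
            time.sleep(delay)
    return False

# Example: wait_for("http://localhost:9000/health")
```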
- `GET /health` – Health check and server status
- `POST /predict` – Threat prediction inference
- `POST /reset` – Reset model state
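A small stdlib-only client sketch for `POST /predict` is below. The request/response schema shown (a `readings` field wrapping sensor samples) is an assumption for illustration; check `ml_api_server.py` for the fields the server actually expects:

```python
import json
import urllib.request

ML_API = "http://localhost:9000"

def build_predict_payload(sensor_readings):
    """Package multi-sensor readings as a JSON body for POST /predict.

    The "readings" field name is hypothetical; adjust to match the
    schema defined in ml_api_server.py.
    """
    return json.dumps({"readings": sensor_readings}).encode()

def predict(sensor_readings, timeout=5):
    """Send readings to the ML API server and return the parsed response."""
    req = urllib.request.Request(
        f"{ML_API}/predict",
        data=build_predict_payload(sensor_readings),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        return json.load(resp)
```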
- Serves static HTML demos and visualizations
```
┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐
│ Edge Layer │ │ Cloud Layer │ │ Dashboard UI │
│ │ │ │ │ │
│ • Greengrass │◄──►│ • IoT Core │◄──►│ • React App │
│ • ML Models │ │ • Kinesis │ │ • Real-time Map │
│ • Sensors │ │ • Lambda │ │ • WebSocket │
│ • MQTT Comms │ │ • DynamoDB │ │ • Cognito │
└─────────────────┘ └─────────────────┘ └─────────────────┘
```
Core Technologies:
- Edge Computing: AWS IoT Greengrass + SageMaker Neo optimized models
- Cloud Infrastructure: Kinesis Streams + Lambda + DynamoDB + ECS Fargate
- ML Pipeline: SageMaker training with maritime threat detection models
- Communication: MQTT mesh networking for resilient edge coordination
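The self-healing mesh idea above boils down to each drone publishing periodic heartbeats over MQTT, with peers dropping any node that goes silent and redistributing its coverage. A minimal sketch of that liveness-tracking logic, independent of any broker (class and field names are illustrative, not from the codebase):

```python
import time

HEARTBEAT_TIMEOUT = 10.0  # seconds; tune to the mesh's publish rate

class MeshMonitor:
    """Track last-seen heartbeat times and flag silent nodes.

    In the real mesh, heartbeat() would be called from an MQTT
    on_message callback, and dead_nodes() would trigger coverage
    reassignment to a live neighbor.
    """

    def __init__(self, timeout=HEARTBEAT_TIMEOUT):
        self.timeout = timeout
        self.last_seen = {}

    def heartbeat(self, drone_id, now=None):
        """Record a heartbeat from a drone (now overridable for testing)."""
        self.last_seen[drone_id] = time.monotonic() if now is None else now

    def dead_nodes(self, now=None):
        """Return drones whose last heartbeat is older than the timeout."""
        now = time.monotonic() if now is None else now
        return [d for d, t in self.last_seen.items() if now - t > self.timeout]
```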
```bash
# Test ML inference
python3 test_ml_integration.py

# Test convoy simulation
python3 simple_demo.py
```

```bash
# Monitor ML server
tail -f /tmp/ml_server.log

# Monitor demo server
tail -f /tmp/demo_server.log

# Check for errors
grep -i error /tmp/ml_server.log
```

```bash
# Check port availability
netstat -tlnp | grep -E ":(8081|9000)"

# Force kill and restart
pkill -9 -f "python.*server"
pkill -9 -f "http.server"

# Restart servers (run commands from steps 2-3 above)
```

```bash
# Verify model file exists
ls -lh models/model.pth

# Check Python path
echo $PYTHONPATH

# Reinstall dependencies
pip install --force-reinstall torch numpy
```

Ensure all servers are running and listening on the correct ports:

```bash
ps aux | grep python
netstat -tlnp | grep LISTEN
```

For production deployment with full AWS integration:
- Configure AWS CLI with appropriate credentials
- Deploy infrastructure using CDK/CloudFormation
- Set up IoT Greengrass on edge devices
- Deploy ML models to SageMaker endpoints
- Configure Kinesis streams for telemetry ingestion
See docs/deploy_instructions.md for comprehensive deployment guide.
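Once Kinesis is configured, edge telemetry is pushed into the stream as individual records. A hedged sketch of the record-shaping step using `boto3` is below; the stream name `hushmesh-telemetry` and the field names are placeholders, and partitioning by drone ID (so each drone's records stay ordered within a shard) is one reasonable choice, not the project's confirmed scheme:

```python
import json

def kinesis_record(drone_id, telemetry):
    """Shape one telemetry sample as a Kinesis put_record payload.

    Partitioning by drone_id keeps each drone's stream ordered
    within a shard. Field names here are illustrative.
    """
    return {
        "Data": json.dumps({"drone_id": drone_id, **telemetry}).encode(),
        "PartitionKey": drone_id,
    }

# With AWS credentials configured (first step above), records are sent via boto3:
# import boto3
# kinesis = boto3.client("kinesis")
# kinesis.put_record(StreamName="hushmesh-telemetry",
#                    **kinesis_record("d1", {"lat": 12.9, "lon": 44.1}))
```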
```
hush-mesh/
├── ml/ # ML inference server
│ ├── ml_api_server.py # Flask API for threat detection
│ └── models/ # Trained PyTorch models
├── demo/ # HTML demos and visualizations
│ ├── enhanced_multi_route.html
│ └── test_ml.html
├── backend/ # Backend API server (optional)
├── docs/ # Documentation
│ ├── architecture.md
│ └── deploy_instructions.md
└── README.md
```
Human-in-the-Loop Design: All engagement decisions require explicit human authorization. The system provides threat assessment and recommendations only.
Defensive Posture: System is designed exclusively for defensive maritime convoy protection. No offensive capabilities.
Data Privacy: Synthetic training data only. No PII collection. All telemetry encrypted in transit and at rest.
Developed for: AWS Mission Autonomy Hackathon Fall 2025
Team: Clankers
Status: Active Development – MVP Complete
