This repository was archived by the owner on Mar 4, 2026. It is now read-only.
41 changes: 41 additions & 0 deletions .gitignore
# Python
__pycache__/
*.py[cod]
*$py.class
*.so
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
*.egg-info/
.installed.cfg
*.egg

# Virtual environments
venv/
ENV/
env/

# IDE
.vscode/
.idea/
*.swp
*.swo
*~

# Logs
*.log
/tmp/copilot-detached-*.log

# OS
.DS_Store
Thumbs.db
316 changes: 316 additions & 0 deletions HeadySystems_v13/apps/heady_admin_ui/IMPLEMENTATION_GUIDE.md
# Trust & Security Dashboard - Implementation Guide

## Overview

The Heady Trust & Security Dashboard provides a comprehensive visualization layer for understanding the system's trust, security, ethics, and operational metrics. This guide explains how to extend and customize the dashboard for production use.

## Architecture

```
┌──────────────────────────────────────────────┐
│           Browser (127.0.0.1 only)           │
│  ┌────────────────────────────────────────┐  │
│  │         dashboard_simple.html          │  │
│  │  - Pure CSS/JS visualization           │  │
│  │  - No external dependencies            │  │
│  │  - Real-time metric updates            │  │
│  └───────────────────┬────────────────────┘  │
└──────────────────────┼───────────────────────┘
                       │ HTTP/JSON
┌──────────────────────┴───────────────────────┐
│             dashboard_server.py              │
│  - HTTP server on 127.0.0.1                  │
│  - RESTful API endpoints                     │
│  - Tunnel-only gateway compliance            │
└──────────────────────┬───────────────────────┘
┌──────────────────────┴───────────────────────┐
│            trust_metrics_api.py              │
│  - Data models & business logic              │
│  - Metrics aggregation                       │
│  - Registry integration                      │
└──────────────────────┬───────────────────────┘
┌──────────────────────┴───────────────────────┐
│              System Components               │
│  - REGISTRY.json                             │
│  - MCP Gateway                               │
│  - AI modules (Tempo, Docs Guardian, etc.)   │
└──────────────────────────────────────────────┘
```

## Data Models

### TrustRating
Represents trust scores for system components.

```python
@dataclass
class TrustRating:
    component_name: str
    trust_score: float        # 0.0 to 100.0
    verification_method: str
    last_verified: str
    status: str               # "verified", "pending", "failed"
```

### SecurityProtocol
Represents active security protocols.

```python
@dataclass
class SecurityProtocol:
    protocol_name: str
    protocol_type: str      # "encryption", "authentication", "authorization", "audit"
    status: str             # "active", "inactive", "degraded"
    compliance_level: str   # "PPA-001", "PPA-002", etc.
    last_audit: str
    config_hash: str
```

### EthicalPriority
Represents ethical priorities and enforcement.

```python
@dataclass
class EthicalPriority:
    priority_name: str
    priority_level: int     # 1 (highest) to 5 (lowest)
    description: str
    enforced: bool
    enforcement_method: str
```

### RuntimeErrorMetric
Tracks runtime errors across modules.

```python
@dataclass
class RuntimeErrorMetric:
    module_name: str
    error_count: int
    error_type: str
    severity: str           # "critical", "high", "medium", "low"
    timestamp: str
    resolved: bool
```

### ReliabilityMetric
Reliability metrics for the physics-handling modules.

```python
@dataclass
class ReliabilityMetric:
    module_name: str
    uptime_percentage: float
    mean_time_between_failures: float   # in hours
    success_rate: float                 # 0.0 to 100.0
    total_operations: int
    failed_operations: int
    last_updated: str
```

### BenchmarkVerification
Benchmark certifications and compliance.

```python
@dataclass
class BenchmarkVerification:
    benchmark_name: str
    certification_status: str   # "certified", "pending", "failed"
    score: float
    compliance_standards: List[str]
    verified_by: str
    verification_date: str
    attestation_hash: str
```
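All of these models serialize the same way for the JSON API. A minimal runnable sketch using `TrustRating` (the field values below are illustrative, and `to_dict` is assumed to be a thin wrapper over `dataclasses.asdict`):

```python
from dataclasses import dataclass, asdict
from typing import Any, Dict

@dataclass
class TrustRating:
    component_name: str
    trust_score: float
    verification_method: str
    last_verified: str
    status: str

    def to_dict(self) -> Dict[str, Any]:
        # Serialize the dataclass to a plain dict for JSON responses.
        return asdict(self)

# Illustrative values, not real system data.
rating = TrustRating(
    component_name="mcp_gateway",
    trust_score=98.5,
    verification_method="hash_attestation",
    last_verified="2025-01-01T00:00:00",
    status="verified",
)
payload = rating.to_dict()
```

Because every model exposes the same `to_dict` shape, endpoint handlers can serialize any metric list with a single `json.dumps` call.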

## Production Integration

### Connecting to Real Metrics

Currently, the dashboard uses sample data for demonstration. To integrate with real system metrics:

1. **Trust Ratings Integration**:
```python
def get_trust_ratings(self) -> List[Dict[str, Any]]:
    # Replace sample data with actual component health checks
    from heady_monitoring import get_component_health

    ratings = []
    for component in get_component_health():
        rating = TrustRating(
            component_name=component.name,
            trust_score=component.trust_score,
            verification_method=component.verification_method,
            last_verified=component.last_check.isoformat(),
            status=component.status,
        )
        ratings.append(rating.to_dict())
    return ratings
```

2. **Security Protocols Integration**:
```python
def get_security_protocols(self) -> List[Dict[str, Any]]:
    # Read from governance.lock and system config
    from heady_governance import get_active_protocols

    protocols = []
    for proto in get_active_protocols():
        protocol = SecurityProtocol(
            protocol_name=proto.name,
            protocol_type=proto.type,
            status=proto.status,
            compliance_level=proto.compliance_level,
            last_audit=proto.last_audit.isoformat(),
            config_hash=self._compute_full_hash(proto.config),
        )
        protocols.append(protocol.to_dict())
    return protocols
```

3. **Runtime Errors Integration**:
```python
def get_runtime_errors(self) -> List[Dict[str, Any]]:
    # Connect to the logging/monitoring system
    from datetime import datetime, timedelta
    from heady_logging import get_error_metrics

    return get_error_metrics(since=datetime.now() - timedelta(hours=24))
```

### Adding New Metrics

To add a new metric type:

1. **Define the data model** in `trust_metrics_api.py`:
```python
@dataclass
class NewMetric:
    field1: str
    field2: float

    def to_dict(self) -> Dict[str, Any]:
        return asdict(self)
```

2. **Add a getter method** to `TrustMetricsAPI`:
```python
def get_new_metrics(self) -> List[Dict[str, Any]]:
    """Retrieve new metrics."""
    # Implementation here
    pass
```

3. **Add an API endpoint** in `dashboard_server.py`:
```python
elif parsed_path.path == '/api/metrics/new':
    data = api.get_new_metrics()
```

4. **Update the dashboard** in `dashboard_simple.html`:
```javascript
function displayNewMetrics(metrics) {
    // Visualization code here
}

// In loadData():
displayNewMetrics(data.new_metrics);
```
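Putting steps 2 and 3 together, the server-side dispatch can be sketched end to end. The handler below is illustrative only — the real routing lives in `dashboard_server.py`, and the hard-coded list stands in for `api.get_new_metrics()`:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse

class MetricsHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        parsed_path = urlparse(self.path)
        if parsed_path.path == '/api/metrics/new':
            # Stand-in for api.get_new_metrics(); values are illustrative.
            data = [{"field1": "example", "field2": 1.0}]
        else:
            self.send_error(404)
            return
        body = json.dumps(data).encode()
        self.send_response(200)
        self.send_header('Content-Type', 'application/json')
        self.end_headers()
        self.wfile.write(body)

# Tunnel-only: bind to 127.0.0.1, never 0.0.0.0.
# HTTPServer(('127.0.0.1', 8080), MetricsHandler).serve_forever()
```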

## Security Considerations

### Tunnel-Only Gateway
The server enforces binding to `127.0.0.1` only:

```python
if bind_address != '127.0.0.1':
    print("ERROR: Security violation - server must bind to 127.0.0.1 only")
    sys.exit(1)
```

Never bypass this check. For remote access, use SSH tunneling:
```bash
ssh -L 8080:127.0.0.1:8080 user@heady-server
```

### Data Isolation
The dashboard respects vertical isolation boundaries:
- Each metric endpoint serves only non-sensitive metadata
- Cross-vertical data sharing is prohibited
- Routing information only, no database content
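One way to enforce the metadata-only rule at the endpoint layer is to filter every response through a field allowlist before serialization. A sketch — the `SAFE_FIELDS` set below is hypothetical, not taken from the dashboard code:

```python
from typing import Any, Dict, List

# Hypothetical allowlist of non-sensitive metadata fields.
SAFE_FIELDS = {"component_name", "trust_score", "status", "last_verified"}

def strip_sensitive(records: List[Dict[str, Any]]) -> List[Dict[str, Any]]:
    """Drop any field not on the allowlist before serving a response."""
    return [{k: v for k, v in record.items() if k in SAFE_FIELDS}
            for record in records]
```

Applied just before `json.dumps` in each handler, this guarantees that even if a getter accidentally includes database content, it never leaves the process.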

### Audit Trail
All HTTP requests are logged with timestamps:
```python
def log_message(self, format, *args):
    sys.stderr.write("[%s] %s - %s\n" %
                     (self.log_date_time_string(),
                      self.address_string(),
                      format % args))
```

## Testing

Run the comprehensive test suite:

```bash
python3 test_api.py
```

Expected output:
```
================================================================================
Testing Heady Trust Metrics API
================================================================================
✓ Trust Ratings: 4 items
✓ Security Protocols: 4 items
✓ Ethical Priorities: 5 items
✓ Runtime Errors: 3 items
✓ Reliability Metrics: 4 items
✓ Benchmark Verifications: 3 items
✓ Symbolic Signals: 7 items
```

## Troubleshooting

### Server Won't Start
- Check if port 8080 is already in use: `lsof -i :8080`
- Verify Python 3 is installed: `python3 --version`
- Ensure you're in the correct directory

### Dashboard Shows No Data
- Verify the API server is running
- Check browser console for errors (F12)
- Test API endpoints directly: `curl http://127.0.0.1:8080/api/metrics/all`

### REGISTRY.json Not Found
- Ensure REGISTRY.json exists in the repository root
- Check file permissions
- The API automatically searches up the directory tree
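The upward search works like a typical "find the project root" walk. A minimal sketch (the real lookup lives in `trust_metrics_api.py` and may differ in detail):

```python
from pathlib import Path
from typing import Optional

def find_registry(start: Path) -> Optional[Path]:
    """Walk from `start` up to the filesystem root looking for REGISTRY.json."""
    for directory in (start, *start.parents):
        candidate = directory / "REGISTRY.json"
        if candidate.is_file():
            return candidate
    return None
```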

## Performance Optimization

For production deployments:

1. **Enable caching** for metrics that don't change frequently
2. **Use connection pooling** for database queries
3. **Implement rate limiting** on API endpoints
4. **Add compression** for HTTP responses
5. **Monitor memory usage** with long-running processes

## Future Enhancements

Potential improvements:
- WebSocket support for real-time updates
- Historical trend analysis and charts
- Alerting thresholds and notifications
- Export metrics to external monitoring systems
- Multi-language support for international deployments
- Dark mode theme option
- Customizable dashboard layouts
- PDF report generation