Build, run, and monitor intelligent operational workflows using AI agent orchestration and MCP tools β with enterprise-grade security powered by Archestra.
AgentOS is a DevOps-first platform where anyone β engineers, product managers, support teams β can build, run, and monitor AI-powered operational workflows without writing infrastructure code.
No DevOps experience needed. Drag and drop tools, chat with AI in plain English, or let autonomous agents investigate and fix incidents automatically.
| Mode | What You Do | Example |
|---|---|---|
| π§ Runbook Mode | Drag-and-drop workflow builder (like Zapier for containers) | If container unhealthy β restart β send alert |
| π Monitor Mode | Real-time dashboard with one-click actions | Click "Restart Container" or "AI Fix Suggestion" |
| π€ Agent Swarm Mode | Chat with AI in natural language | "Fix all unhealthy containers and notify me" |
- Product Managers β Build workflows to restart services, send alerts, monitor uptime
- Support Teams β Create automated incident response, get AI-powered troubleshooting
- Developers β Manage test environments, analyze logs, deploy containers
- Operations β Monitor production, implement self-healing, manage incidents
What you think happens:
Check if database is healthy β Alert me if there's a problem
What actually happens:
docker_logs β Reads container logs (passwords, API keys, connection strings inside)
ai_analyze β Sends your secrets to an external LLM
slack_notify β Your database password appears in a Slack channel
Result: Sensitive data exposed.
Three innocent-looking tools become dangerous in sequence:
Step 1: READ docker_logs, health_check β Accesses private data
Step 2: PROCESS ai_analyze, execute_command β Handles the content
Step 3: EXFILTRATE slack_notify, send_email β Sends it outside
Each tool is safe alone. Together, they form a complete data exfiltration chain.
- Firewall rules don't inspect AI tool calls
- Access controls are already granted to the AI
- Workflows are built visually, not in code
- Attackers bypass controls with clever prompts
You need security at the AI orchestration level β not the network level.
AgentOS integrates with Archestra β an agentic security engine that enforces policies before every tool executes, making data exfiltration structurally impossible regardless of what the AI decides.
Without Archestra: docker_logs β ai_analyze β slack_notify β Secrets leaked
With Archestra: docker_logs β ai_analyze β [BLOCKED] β β
Exfiltration prevented
Every MCP tool call in AgentOS is routed through the Archestra Security Engine:
AgentOS workflow triggers tool call
β
Archestra intercepts BEFORE execution
β
βββββββββββββββββββββββββββββββββββ
β Check: Tool Invocation Policy β β block_always / block_when_untrusted / allow
β Check: Data Context β β is this data trusted or untrusted?
βββββββββββββββββββββββββββββββββββ
β
Block or Allow
β
Apply Trusted Data Policy β mark_as_untrusted / sanitize_with_dual_llm / allow
β
Log result
No AI prompt can bypass this. The policy engine is deterministic code, not an AI decision.
Controls when a tool is allowed to run.
| Policy | Behavior | Example Use |
|---|---|---|
block_always |
Tool never executes | docker_exec β shell injection risk |
block_when_context_is_untrusted |
Blocked only when upstream data is untrusted | slack_notify β safe manually, dangerous after docker_logs |
allow |
Executes normally | docker_status β read-only metadata |
Controls how a tool's output is handled downstream.
| Policy | Behavior | Example Use |
|---|---|---|
mark_as_untrusted |
Flags output, restricts downstream tools | docker_logs β may contain secrets |
sanitize_with_dual_llm |
Two independent AIs verify before passing forward | ai_analyze β prevent prompt injection |
allow |
Output passes through unchanged | docker_status β safe metadata only |
Problem: docker_logs contains secrets, slack_notify sends externally.
# Trusted Data Policy
tool: docker_logs
action: mark_as_untrusted
reason: Logs may contain secrets
# Tool Invocation Policy
tool: slack_notify
action: block_when_context_is_untrusted
reason: Prevent data leakageWhat happens:
docker_logs runs β output marked UNTRUSTED
slack_notify attempts β Archestra detects UNTRUSTED context
β BLOCKED
Error: "Cannot send untrusted data externally"
Problem: Malicious content hidden inside container logs could hijack the AI.
tool: ai_analyze
action: sanitize_with_dual_llm
reason: Prevent prompt injection via log contentHow dual LLM sanitization works:
docker_logs output β LLM #1: "Check for malicious instructions"
β LLM #2: "Verify independently"
β Both agree: SAFE β passes through
β Either flags: SUSPICIOUS β BLOCKED
Why two LLMs? A single AI can be fooled. Two independent models with different architectures create consensus that's extremely hard to bypass.
tool: docker_exec
action: block_always
reason: Shell injection vulnerabilityAny workflow attempting docker_exec is blocked before it runs β no exceptions, no overrides.
You drag: docker_logs β slack_notify
Execution:
docker_logs runs β
output marked UNTRUSTED
slack_notify attempts β BLOCKED by Archestra
Error shown in UI: "Policy violation β untrusted data cannot be sent externally"
Container goes unhealthy
You click "Send Alert"
Archestra checks:
Source: docker_logs β UNTRUSTED
Destination: slack_notify β blocked for untrusted context
Result: Alert displayed on screen only. No external data leak.
You say: "Fix containers and notify me"
AI generates workflow and attempts execution.
Archestra blocks the notification step.
AI responds: "Containers fixed. Results shown on screen (data contains sensitive content)."
Every blocked attempt is logged with full context:
β 2 min ago
Workflow: "Database Health Monitor"
Tool: slack_notify
Reason: Blocked β untrusted data context
Source: docker_logs
β 15 min ago
Workflow: "Auto-Restart Services"
Tool: docker_exec
Reason: Blocked β always blocked
Attempted: restart service
β οΈ 1 hour ago
Workflow: "Log Analysis"
Tool: ai_analyze
Action: Sanitized β dual LLM applied
Detected: Potential prompt injection attempt
# Block dangerous execution
docker_exec: block_always
docker_run: block_always
# Mark data sources as untrusted
docker_logs: mark_as_untrusted + sanitize_with_dual_llm
health_check: mark_as_untrusted
# Restrict external communication
slack_notify: block_when_context_is_untrusted
email_send: block_when_context_is_untrusted
webhook_post: block_when_context_is_untrusted
# Allow safe read-only operations
docker_status: allow
docker_list: allow
docker_restart: allow # with validationβββββββββββββββββββββββββββββββββββββββββββββββ
β AgentOS β
β ββββββββββββ ββββββββββββ βββββββββββββββ β
β β Runbook β β Monitor β β Agent Swarm β β
β ββββββ¬ββββββ ββββββ¬ββββββ ββββββββ¬βββββββ β
β ββββββββββββββ΄βββββββββββββββ β
β Workflow Execution Engine β
β MCP Tool Registry β
βββββββββββββββββββ¬ββββββββββββββββββββββββββββ
β Every tool call
β
βββββββββββββββββββββββββββββββββββββββββββββββ
β Archestra Security β
β β
β ββββββββββββββββββββββββββββββββββββββββ β
β β Agentic Security Engine β β
β β β’ Intercept every MCP call β β
β β β’ Check invocation policy β β
β β β’ Check data trust context β β
β β β’ Block or allow β β
β β β’ Mark output trust level β β
β β β’ Sanitize with dual LLM if needed β β
β ββββββββββββββββββββββββββββββββββββββββ β
β β
β ββββββββββββββββ ββββββββββββββββββββββββ β
β β Invocation β β Trusted Data β β
β β Policies β β Policies β β
β ββββββββββββββββ ββββββββββββββββββββββββ β
βββββββββββββββββββββββββββββββββββββββββββββββ
β
β
Docker Containers
Total setup time: ~20 minutes
Configure AgentOS to connect to the Archestra platform:
- Set Archestra proxy URL
- Add authentication token
- Enable policy enforcement
Set these essential policies through the Archestra dashboard:
- Block external comms with untrusted data
- Mark container logs as untrusted
- Sanitize AI analysis outputs
- Block shell execution tools
Create a test workflow: docker_logs β slack_notify and run it.
Expected result:
β Policy Violation
Tool: slack_notify
Reason: Cannot send untrusted data externally
Your data stayed safe.
Review Archestra violation logs regularly to spot attack attempts, fix broken workflows, and tune policies.
| Property | Detail |
|---|---|
| Deterministic | Rules enforced by code, not AI decisions |
| Platform-level | Blocks before tool execution, not after |
| No bypass | Even a compromised AI cannot override policies |
| Context-aware | Tracks data trust through the entire workflow |
| Real-time | Every tool call checked, every time |
| <1ms overhead | No meaningful performance impact |
- Defense in Depth β Tool invocation policies + trusted data policies + dual LLM verification work together
- Zero Trust β All container data and external data starts as untrusted; trust must be explicitly granted
- Fail Secure β If policy check fails, if context is unclear, if sanitization fails β block. Default to deny.