A plug-and-play multi-agent platform on Azure AI Foundry. Build, connect, and deploy AI agents that work together using open standards.
- A2A + MCP combined — connect agents across frameworks (A2A) and external tools (MCP) in one platform
- Drop-in agents — add a package to `agents/`, run `uv sync`, the router auto-discovers it
- Declarative HITL — `@tool(approval_mode="always_require")` — one decorator, no custom approval code
- 3-layer middleware — InputGuard, logging, and sensitive data masking with early termination
- Checkpoint + resume — pause and resume conversations across process restarts
- Azure-native — Managed Identity, VNet isolation, Azure AI Foundry integration
Built on Microsoft Agent Framework with HandoffBuilder orchestration.
Note: Microsoft Agent Framework is currently in Release Candidate (RC5) and A2A support is in beta. This platform tracks the latest pre-release versions. Pin your dependencies in production.
```mermaid
graph LR
    User([User]) --> Triage[Triage Agent]

    subgraph "Local Agents"
        Triage -->|handoff| Helpdesk[Helpdesk Agent]
        Triage -->|handoff| Knowledge[Knowledge Agent]
        Triage -->|handoff| DataAnalyst[Data Analyst]
        Triage -->|handoff| Incident[Incident Triage]
        Triage -->|handoff| CodeReview[Code Reviewer]
        Triage -->|handoff| InfraAnalyzer[Infra Analyzer]
    end

    subgraph "Remote Agents - A2A Protocol"
        Triage -.->|A2A| Legal[Legal Agent]
        Triage -.->|A2A| Finance[Finance Agent]
    end

    subgraph "External Tools - MCP"
        Helpdesk -->|MCP| Confluence[Confluence]
        Helpdesk -->|MCP| Jira[Jira]
    end

    Helpdesk & Knowledge & DataAnalyst & Incident & CodeReview & InfraAnalyzer & Legal --> Triage
```
- User sends a question — via CLI, DevUI, or API
- Triage agent analyzes and routes — reads each agent's description to pick the right specialist
- Specialist uses its tools — KB search, SQL queries, file search, MCP tools, or A2A calls
- Response flows back — through the triage agent to the user
New agents are auto-discovered. Drop a package in `agents/`, run `uv sync`, and the router picks it up.
- A2A protocol — connect agents across frameworks and services (demo)
- MCP integration — connect Confluence, Jira, SharePoint without code
- Plugin architecture — `scaffold` → `configure` → `deploy`
- Auto-discovery — no manifest or registry needed
- Human-in-the-loop — one decorator: `@tool(approval_mode="always_require")`
- RAG built-in — drop documents in `knowledge/`, upload to vector store
- Checkpointing — resume conversations after interruptions
- Middleware stack — InputGuard, logging, sensitive data masking
- Terraform IaC — POC (~$10/mo) and enterprise environments
- Observability — OpenTelemetry + Aspire Dashboard
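To make the declarative HITL idea concrete, here is a toy re-implementation of an approval-gated decorator. It is a sketch only: the real `@tool` decorator and approval flow come from Microsoft Agent Framework, and `request_approval` below is a hypothetical stand-in for the runtime's pause-and-ask behavior.

```python
from functools import wraps

def request_approval(tool_name: str, args, kwargs) -> bool:
    # Hypothetical stand-in: the real framework pauses the run and
    # surfaces an approval request to a human. Here we auto-approve.
    print(f"Approval requested for tool: {tool_name}")
    return True

def tool(approval_mode: str = "never"):
    """Toy re-implementation of a declarative HITL decorator."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            if approval_mode == "always_require":
                if not request_approval(fn.__name__, args, kwargs):
                    return {"status": "rejected", "tool": fn.__name__}
            return fn(*args, **kwargs)
        return wrapper
    return decorator

@tool(approval_mode="always_require")
def create_ticket(summary: str) -> dict:
    # A destructive operation gated behind human approval.
    return {"status": "created", "summary": summary}
```

The point of the pattern is that approval policy lives in one declarative argument rather than in per-tool approval code.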
Prerequisites
- Azure subscription — free trial
- Azure AI Foundry resource with a model deployment (e.g., `gpt-4.1`)
- Python 3.13+ — download
- uv package manager — `pip install uv`
- Azure CLI — install, then `az login`
Click Open in GitHub Codespaces above. Everything is pre-configured.
```shell
pip install uv
git clone https://github.com/b-franken/agent-platform.git
cd agent-platform
uv sync
cp .env.example .env
# Edit .env — set AZURE_AI_PROJECT_ENDPOINT and AZURE_OPENAI_RESPONSES_DEPLOYMENT_NAME
az login
uv run python -m agent_core.validate
uv run --package router-agent python -m router.main
```

You should see:

```text
Agent Platform — Interactive Mode
Discovered agents: helpdesk, knowledge-agent, data-analyst
Type your question (or 'quit' to exit):
You >
```
```shell
azd up   # Deploys infrastructure + application in one command
```

```shell
uv run python -m agent_core.scaffold my-agent --description "What it does"
# Edit agents/my-agent/src/my_agent/tools.py and config.py
uv sync
uv run python -m agent_core.validate
# Restart — the router auto-discovers your new agent
```

See docs/adding-agents.md for the full guide with examples.
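The scaffold generates the package skeleton for you. The exact plugin contract is documented in docs/adding-agents.md; as a rough, hypothetical sketch, an agent package exposes metadata the router can route on:

```python
from dataclasses import dataclass

@dataclass
class AgentSpec:
    """Hypothetical shape for illustration, not the platform's actual contract."""
    name: str
    description: str  # the triage agent picks specialists by description text
    tools: list

def create_agent() -> AgentSpec:
    # A scaffolded agent would fill these in via config.py and tools.py.
    return AgentSpec(
        name="my-agent",
        description="Answers questions about hypothetical orders.",
        tools=[],
    )
```

The key design point carries over regardless of the exact shape: the description doubles as routing metadata, so writing it well directly improves triage accuracy.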
| Agent | What it demonstrates | Data source | Framework Features |
|---|---|---|---|
| helpdesk | IT troubleshooting + ticket management | YAML KB, SQLite | @tool, approval_mode HITL |
| knowledge-agent | Company docs search with citations | Markdown files | RAG, file_search, vector store |
| data-analyst | Natural language SQL queries | SQLite sample DB | Schema discovery, input validation |
| expense-approver | Budget checks + expense submission | SQLite | approval_mode, HITL |
| incident-triage | Severity classification + runbook lookup | Keyword matching | Structured Output (response_format) |
| code-reviewer | Code quality + security scanning | Regex patterns | Deterministic analysis, streaming |
| infra-analyzer | Terraform scanning + remediation | Regex HCL matching | Tool Approval HITL |
| router | Triage routing to specialists | — | HandoffBuilder, auto-discovery |
Note: All agents use local demo data (YAML, SQLite, regex) — no external API keys needed. This is intentional: the platform works out of the box. See docs/production.md for connecting real data sources.
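For example, the incident-triage agent's keyword matching can be pictured with this minimal sketch; the repo's real keyword lists and structured-output models are richer, and these keywords are illustrative assumptions.

```python
# Minimal sketch of keyword-based severity classification;
# the actual incident-triage agent's rules are more elaborate.
SEVERITY_KEYWORDS = {
    "sev1": ("outage", "data loss", "down"),
    "sev2": ("degraded", "intermittent", "slow"),
}

def classify_severity(description: str) -> str:
    text = description.lower()
    for severity, keywords in SEVERITY_KEYWORDS.items():
        if any(word in text for word in keywords):
            return severity
    return "sev3"  # default: low severity
```

Pairing a deterministic classifier like this with a Pydantic `response_format` gives the agent structured, machine-checkable output instead of free text.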
- A2A demo — cross-framework agents communicating via the A2A protocol
- MCP server — expose tools via Model Context Protocol
- Start here — study `helpdesk` (YAML KB, SQLite tickets, HITL)
- Build your first agent — Tutorial (~15 min)
- Structured output — study `incident-triage` (Pydantic models as `response_format`)
- Security scanning — study `code-reviewer` and `infra-analyzer`
- Multi-agent orchestration — study `router` (HandoffBuilder, auto-discovery)
- Cross-service agents — run the A2A demo and MCP server
- Deploy — Deployment guide
```text
agents-platform/
├── packages/agent-core/       # Shared library (config, factory, middleware, registry)
├── agents/
│   ├── router/                # Triage + HandoffBuilder orchestration
│   ├── helpdesk/              # IT troubleshooting, KB search, ticket creation
│   ├── knowledge-agent/       # Company docs search with RAG citations
│   ├── data-analyst/          # Natural language SQL queries
│   ├── expense-approver/      # Human-in-the-loop demo
│   ├── incident-triage/       # Structured output + context providers
│   ├── code-reviewer/         # Code quality + security analysis
│   └── infra-analyzer/        # Terraform scanning + tool approval HITL
├── examples/
│   ├── a2a-demo/              # A2A cross-framework demo
│   └── mcp-server/            # MCP server example
├── infra/                     # Terraform with Azure Verified Modules
├── deployment/azurefunctions/ # Azure Functions deployment
├── scripts/                   # Setup and pre-flight checks
├── tests/                     # Unit tests
├── evals/                     # Agent quality evaluations
└── docs/                      # Documentation
```
| Guide | Description |
|---|---|
| Getting Started | Setup and first run |
| Tutorial: Build Your First Agent | Step-by-step from zero |
| Key Concepts | Multi-agent systems, A2A, MCP, RAG explained |
| Adding Agents | The plugin contract and patterns |
| Architecture | How the platform works under the hood |
| Deployment | Local, Docker, Azure deployment options |
| From Demo to Production | Connect real data sources |
| Cost Optimization | Model selection and cost tips |
| Evaluations | Agent quality testing framework |
| Troubleshooting | Common issues and fixes |
| FAQ | Frequently asked questions |
| Method | Use case | Command |
|---|---|---|
| CLI | Local development | uv run --package router-agent python -m router.main |
| DevUI | Browser-based testing | uv run devui --port 8080 |
| Docker | Containerized dev | docker compose up |
| azd | One-command Azure deploy | azd up |
| Terraform | Infrastructure only | terraform apply |
| Azure Functions | Serverless production | func start |
The default deployment uses Azure AI Foundry Agent Service (GA since March 2026). Foundry manages the agent runtime — scaling, observability, enterprise security — you only pay for model tokens.
```shell
# Default: Foundry hosted (recommended)
azd up

# Alternative: self-hosted Container Apps
azd up -- -var="deployment_mode=container_apps"
```

See docs/deployment.md for details.
Cost estimate
- Development: ~$5-10/month (Azure AI Foundry + GPT-4.1-mini for routing)
- Foundry hosted: Token cost only — no hosting fees, Foundry manages runtime
- Container Apps (alternative): ~$10-92/month depending on networking
- Clean up: `azd down` or `terraform destroy`
See docs/cost-optimization.md for model selection tips.
The platform includes an eval framework for testing agent quality against a real Azure endpoint:
```shell
uv run pytest evals/ -m eval -v
```

Three suites: routing accuracy, tool selection, and response quality. See docs/evals.md.
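The routing-accuracy suite reduces to scoring a router against labeled cases. A hedged sketch of that metric (the platform's actual harness lives in `evals/` and runs against a live Azure endpoint):

```python
def routing_accuracy(cases, route_fn) -> float:
    """Fraction of (question, expected_agent) cases the router gets right.
    Illustrative only; the real eval suite exercises a deployed model."""
    hits = sum(1 for question, expected in cases if route_fn(question) == expected)
    return hits / len(cases)
```

Keeping the metric this simple makes regressions easy to spot when agent descriptions or routing prompts change.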
This platform includes built-in guardrails:
- InputGuard middleware — enforces input length and conversation turn limits
- Sensitive data masking — tool arguments and results are masked in logs by default
- Human-in-the-loop — `approval_mode="always_require"` on destructive operations (e.g., ticket creation, infrastructure fixes)
- Grounded responses — RAG with citations, SQL with actual queries shown
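The masking layer's effect can be illustrated with a toy redactor. This is a simplified assumption of the behavior, not the platform's actual middleware, and the credential pattern below is deliberately minimal:

```python
import re

# Toy sketch of sensitive-data masking: redact anything that looks like
# a credential assignment before it reaches the logs.
_SECRET = re.compile(r"(?i)\b(api[_-]?key|password|token)\b(\s*[:=]\s*)(\S+)")

def mask_sensitive(text: str) -> str:
    return _SECRET.sub(lambda m: m.group(1) + m.group(2) + "***", text)
```

Masking by default means a forgotten `print` or verbose log level never leaks a tool argument containing a secret.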
See TRANSPARENCY_FAQ.md for detailed transparency information and Microsoft Responsible AI principles.
- Microsoft Agent Framework — the framework this platform is built on
- Azure AI Foundry — the Azure service powering the agents
- A2A Protocol — the open standard for agent-to-agent communication
- Model Context Protocol (MCP) — the standard for agent-to-tool integration
- Microsoft Solution Accelerators — similar projects from Microsoft
We welcome contributions, whether it's a new agent, a bug fix, or a documentation improvement.
See CONTRIBUTING.md for guidelines. Good first issues are labeled good first issue.