4 changes: 2 additions & 2 deletions documentation/docs/_constants.md
@@ -1,6 +1,6 @@
{/* This file stores constants used across the documentation */}

export const versions = {
latestVersion: 'v0.9.x',
quickStartDockerTag: 'v0.9.0'
latestVersion: 'v0.10.x',
quickStartDockerTag: 'v0.10.0'
};
6 changes: 6 additions & 0 deletions documentation/versioned_docs/version-v0.10.x/_constants.md
@@ -0,0 +1,6 @@
{/* This file stores constants used across the documentation */}

export const versions = {
latestVersion: 'v0.9.x',
quickStartDockerTag: 'v0.9.0'
};
@@ -0,0 +1,52 @@
# WSO2 Agent Manager Instrumentation

Zero-code OpenTelemetry instrumentation for Python agents using the Traceloop SDK, with trace visibility in the WSO2 Agent Manager.

## Overview

`amp-instrumentation` enables zero-code instrumentation for Python agents, automatically capturing traces for LLM calls, MCP requests, and other operations. It seamlessly wraps your agent’s execution with OpenTelemetry tracing powered by the Traceloop SDK.

## Features

- **Zero Code Changes**: Instrument existing applications without modifying code
- **Automatic Tracing**: Traces LLM calls, MCP requests, database queries, and more
- **OpenTelemetry Compatible**: Uses industry-standard OpenTelemetry protocol
- **Flexible Configuration**: Configure via environment variables
- **Framework Agnostic**: Works with any Python application built on any of the wide range of agent frameworks supported by the Traceloop SDK

## Installation

```bash
pip install amp-instrumentation
```

## Quick Start

### 1. Register Your Agent

First, register your agent at the [WSO2 Agent Manager](https://github.com/wso2/agent-manager) to obtain your agent API key and configuration details.

### 2. Set Required Environment Variables

```bash
export AMP_OTEL_ENDPOINT="https://amp-otel-endpoint.com" # AMP OTEL endpoint
export AMP_AGENT_API_KEY="your-agent-api-key" # Agent-specific key generated after registration
```

### 3. Run Your Application

Use the `amp-instrument` command to wrap your application run command:

```bash
# Run a Python script
amp-instrument python my_script.py

# Run with uvicorn
amp-instrument uvicorn app:main --reload

# Run with any package manager
amp-instrument poetry run python script.py
amp-instrument uv run python script.py
```

That's it! Your application is now instrumented and sending traces to the WSO2 Agent Manager.
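Conceptually, a zero-code wrapper like this follows a simple pattern: read exporter settings from the environment, then launch your command unchanged. The sketch below is an illustration of that pattern only, not the actual `amp-instrument` implementation:

```python
"""Conceptual sketch of a zero-code wrapper command.

Illustrative only -- NOT the real `amp-instrument` source. It shows the
pattern: pick up configuration from the environment, then launch the
user's command unchanged so no code edits are needed.
"""
import os
import subprocess
import sys


def run_wrapped(command: list[str]) -> int:
    """Launch `command` with the current environment and return its exit code."""
    env = os.environ.copy()
    if not env.get("AMP_OTEL_ENDPOINT"):
        # A real wrapper would warn or fail here; this sketch just notes it.
        print("warning: AMP_OTEL_ENDPOINT is not set", file=sys.stderr)
    completed = subprocess.run(command, env=env)
    return completed.returncode
```

Because the child process inherits the environment, the instrumentation layer can read `AMP_OTEL_ENDPOINT` and `AMP_AGENT_API_KEY` without any change to the wrapped application.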
383 changes: 383 additions & 0 deletions documentation/versioned_docs/version-v0.10.x/concepts/evaluation.mdx

Large diffs are not rendered by default.

@@ -0,0 +1,83 @@
---
sidebar_position: 1
---

# Observability

WSO2 Agent Manager provides full-stack observability for AI agents — whether they are deployed through the platform or running externally. Traces, metrics, and logs flow into a centralized store that you can query and analyze through the AMP Console.

## Overview

Observability in AMP is built on [OpenTelemetry](https://opentelemetry.io/), the industry-standard framework for distributed tracing and instrumentation. Every agent interaction — LLM calls, tool invocations, MCP requests, retrieval operations, and agent reasoning steps — is captured as a structured trace and stored for analysis.

## Auto-Instrumentation for Deployed Agents

When you deploy an agent through WSO2 Agent Manager, observability is set up **automatically — no code changes required**.

### What Gets Instrumented

The Traceloop SDK (used under the hood) instruments a wide range of AI frameworks automatically:

| Category | Examples |
|----------|---------|
| LLM providers | OpenAI, Anthropic, Azure OpenAI |
| Agent frameworks | LangChain, LlamaIndex, CrewAI, Haystack |
| Vector stores | Pinecone, Weaviate, Chroma, Qdrant |
| MCP clients | Any MCP tool calls made by the agent |

### Trace Attributes Captured

Each span is enriched with metadata that makes it possible to evaluate and debug agent behaviour:

- **LLM spans**: model name, prompt tokens, completion tokens, latency, finish reason
- **Tool spans**: tool name, input arguments, output, execution time
- **Agent spans**: agent name, step number, reasoning output
- **Root span**: agent ID, deployment ID, correlation ID, end-to-end latency
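For illustration, LLM-span metadata like the above could be represented as flat OpenTelemetry-style attributes. The attribute keys and values below are invented for the example, not AMP's actual schema:

```python
# Illustrative only: keys and values are examples, not AMP's actual attribute schema.
llm_span_attributes = {
    "llm.model.name": "gpt-4o",           # model name (example key)
    "llm.usage.prompt_tokens": 412,       # prompt tokens
    "llm.usage.completion_tokens": 128,   # completion tokens
    "llm.latency_ms": 1830,               # latency
    "llm.finish_reason": "stop",          # finish reason
}


def total_tokens(attrs: dict) -> int:
    """Sum the prompt and completion token counts from span attributes."""
    return attrs["llm.usage.prompt_tokens"] + attrs["llm.usage.completion_tokens"]


print(total_tokens(llm_span_attributes))  # 540
```

Flat key-value attributes like these are what make traces queryable: per-model token totals or latency percentiles can be aggregated directly from span metadata.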

## Observability for External Agents

Agents that are **not deployed through AMP** — for example, agents running locally, on-premises, or in a third-party cloud — can still send traces to AMP. These are called **Externally-Hosted Agents**.

### Registration

1. In the AMP Console, open your **Project** and click **+ Add Agent**.
2. Choose **Externally-Hosted Agent**.
3. Provide a **Name** and optional description, then click **Register**.
4. The **Setup Agent** panel opens automatically with a **Zero-code Instrumentation Guide**.

### Install the Package

```bash
pip install amp-instrumentation
```

### Generate an API Key

In the Setup Agent panel, select a **Token Duration** and click **Generate**. Copy the key immediately — it will not be shown again.

### Set Environment Variables

```bash
export AMP_OTEL_ENDPOINT="http://localhost:22893/otel"
export AMP_AGENT_API_KEY="<your-generated-api-key>"
```
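Since the instrumentation reads both variables at startup, a quick check before launching the agent can catch a missing value early. The helper below is hypothetical (not part of `amp-instrumentation`):

```python
import os

# Variables the instrumentation reads, per the export commands above.
REQUIRED_VARS = ("AMP_OTEL_ENDPOINT", "AMP_AGENT_API_KEY")


def missing_amp_vars() -> list[str]:
    """Return the names of required AMP variables that are unset or empty."""
    return [name for name in REQUIRED_VARS if not os.environ.get(name)]


print("missing:", missing_amp_vars())
```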

### Run with Instrumentation

Wrap your agent's start command with `amp-instrument`:

```bash
amp-instrument python my_agent.py
amp-instrument uvicorn app:main --reload
amp-instrument poetry run python agent.py
```

No changes to your agent code are required. The same Traceloop-based auto-instrumentation applies — all supported AI frameworks are traced automatically.

---

## Trace Visibility in AMP Console

Once traces start flowing in, you can explore them in the AMP Console from your agent's sidebar:

- **OBSERVABILITY → Traces** — search and inspect individual traces by time range or correlation ID; expand a trace to see LLM spans, tool spans, and agent reasoning steps
@@ -0,0 +1,87 @@
# Contributing Guidelines

This document establishes guidelines for using GitHub Discussions and Issues for technical conversations about the Agent Manager.

## Getting Started

- Discussions, issues, feature ideas, bug reports, and design proposals from the community are welcome
- For security vulnerabilities, report them privately to security@wso2.com, as described in the [WSO2 Security Reporting Guidelines](https://security.docs.wso2.com/en/latest/security-reporting/report-security-issues/)

## Discussion Categories

| Category | Purpose | Example Topics |
|----------|---------|----------------|
| **Announcements** | Official updates from maintainers | Releases, roadmap updates, breaking changes |
| **General** | Open-ended conversations | Community introductions, general questions |
| **Ideas** | Feature suggestions and brainstorming | New capabilities, integration ideas |
| **Q&A** | Technical questions with answers | Implementation help, troubleshooting |
| **Show and Tell** | Share projects and integrations | Agent implementations, use cases |
| **Design Proposals** | Technical design discussions | Architecture changes, system design, new features requiring review |

## When to Use Discussions vs Issues

| Use Discussions For | Use Issues For |
|---------------------|----------------|
| Open-ended questions | Bug reports with reproduction steps |
| Feature ideas and brainstorming | Concrete feature requests with clear scope |
| Design proposals and RFCs | Actionable tasks and work items |
| Community engagement | Pull request discussions |
| Troubleshooting help | Security vulnerabilities (private) |

## Guidelines

### Starting a Discussion

1. **Search first** - Check existing discussions to avoid duplicates
2. **Choose the right category** - Use the category table above
3. **Use a clear title** - Be specific and descriptive
4. **Provide context** - Include relevant details, code snippets, or diagrams

### Promoting Discussions to Issues

When a discussion results in actionable work:
1. Summarize the outcome in a final comment
2. Create a linked GitHub Issue for implementation
3. Reference the discussion in the issue for context

## Feature Lifecycle

Features progress through distinct stages from initial concept to implementation:

### 1. Idea Stage

High-level discussions about capabilities we want to explore start in the **Ideas** category. These are similar to epics—broad in scope with no imposed structure. Ideas allow open brainstorming before committing to specific solutions.

### 2. Design Proposal Stage

When an idea is refined into a well-scoped feature, create a discussion in the **Design Proposals** category. Proposals must follow the standard template:

| Section | Description |
|---------|-------------|
| **Problem** | Describe the problem, who is affected, and the impact |
| **User Stories** | Define user stories using the format "As a [role], I want [goal] so that [benefit]" |
| **Existing Solutions** | How is this solved elsewhere? Include current workarounds and links to relevant implementations, docs, or design proposals |
| **Proposed Solution** | Technical approach and design details |
| **Alternatives Considered** | What other approaches were evaluated? |
| **Open Questions** | Unresolved technical decisions that need input (if any) |
| **Milestone Plan** | Implementation phases aligned with release milestones |
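A proposal body following this template might be laid out as below (section order as in the table; placeholder text only):

```markdown
## Problem
Who is affected, and what is the impact?

## User Stories
- As a [role], I want [goal] so that [benefit]

## Existing Solutions
Current workarounds and links to relevant implementations or docs.

## Proposed Solution
Technical approach and design details.

## Alternatives Considered
Other approaches that were evaluated.

## Open Questions
Unresolved decisions that need input (if any).

## Milestone Plan
Implementation phases aligned with release milestones.
```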

#### Proposal Labels

Use these labels to track design proposal status:

| Label | Description |
|-------|-------------|
| `Proposal/Draft` | Initial proposal, still being written |
| `Proposal/Review` | Ready for team review and feedback |
| `Proposal/Approved` | Design accepted, ready for implementation |
| `Proposal/Rejected` | Proposal declined |
| `Proposal/Implemented` | Design fully implemented |

### 3. Implementation Tracking

Once a design proposal is approved:
1. Create GitHub Issues for implementation tasks
2. Link issues back to the design proposal discussion
3. Assign issues to appropriate milestones
4. Track progress through milestone completion