diff --git a/documentation/docs/_constants.md b/documentation/docs/_constants.md
index 4fc475efb..c3b057b06 100644
--- a/documentation/docs/_constants.md
+++ b/documentation/docs/_constants.md
@@ -1,6 +1,6 @@
{/* This file stores constants used across the documentation */}
export const versions = {
- latestVersion: 'v0.9.x',
- quickStartDockerTag: 'v0.9.0'
+ latestVersion: 'v0.10.x',
+ quickStartDockerTag: 'v0.10.0'
};
diff --git a/documentation/versioned_docs/version-v0.10.x/_constants.md b/documentation/versioned_docs/version-v0.10.x/_constants.md
new file mode 100644
index 000000000..4fc475efb
--- /dev/null
+++ b/documentation/versioned_docs/version-v0.10.x/_constants.md
@@ -0,0 +1,6 @@
+{/* This file stores constants used across the documentation */}
+
+export const versions = {
+ latestVersion: 'v0.9.x',
+ quickStartDockerTag: 'v0.9.0'
+};
diff --git a/documentation/versioned_docs/version-v0.10.x/components/amp-instrumentation.mdx b/documentation/versioned_docs/version-v0.10.x/components/amp-instrumentation.mdx
new file mode 100644
index 000000000..8998a19dc
--- /dev/null
+++ b/documentation/versioned_docs/version-v0.10.x/components/amp-instrumentation.mdx
@@ -0,0 +1,52 @@
+# WSO2 Agent Manager Instrumentation
+
+Zero-code OpenTelemetry instrumentation for Python agents using the Traceloop SDK, with trace visibility in the WSO2 Agent Manager.
+
+## Overview
+
+`amp-instrumentation` enables zero-code instrumentation for Python agents, automatically capturing traces for LLM calls, MCP requests, and other operations. It seamlessly wraps your agent’s execution with OpenTelemetry tracing powered by the Traceloop SDK.
+
+## Features
+
+- **Zero Code Changes**: Instrument existing applications without modifying code
+- **Automatic Tracing**: Traces LLM calls, MCP requests, database queries, and more
+- **OpenTelemetry Compatible**: Uses industry-standard OpenTelemetry protocol
+- **Flexible Configuration**: Configure via environment variables
+- **Framework Agnostic**: Works with Python applications built on any of the agent frameworks supported by the Traceloop SDK
+
+## Installation
+
+```bash
+pip install amp-instrumentation
+```
+
+## Quick Start
+
+### 1. Register Your Agent
+
+First, register your agent at the [WSO2 Agent Manager](https://github.com/wso2/agent-manager) to obtain your agent API key and configuration details.
+
+### 2. Set Required Environment Variables
+
+```bash
+export AMP_OTEL_ENDPOINT="https://amp-otel-endpoint.com" # AMP OTEL endpoint
+export AMP_AGENT_API_KEY="your-agent-api-key" # Agent-specific key generated after registration
+```
+
+### 3. Run Your Application
+
+Use the `amp-instrument` command to wrap your application run command:
+
+```bash
+# Run a Python script
+amp-instrument python my_script.py
+
+# Run with uvicorn
+amp-instrument uvicorn app:main --reload
+
+# Run with any package manager
+amp-instrument poetry run python script.py
+amp-instrument uv run python script.py
+```
+
+That's it! Your application is now instrumented and sending traces to the WSO2 Agent Manager.
\ No newline at end of file
diff --git a/documentation/versioned_docs/version-v0.10.x/concepts/evaluation.mdx b/documentation/versioned_docs/version-v0.10.x/concepts/evaluation.mdx
new file mode 100644
index 000000000..8acf12539
--- /dev/null
+++ b/documentation/versioned_docs/version-v0.10.x/concepts/evaluation.mdx
@@ -0,0 +1,383 @@
+---
+sidebar_position: 2
+---
+
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
+# Evaluation
+
+WSO2 Agent Manager provides built-in evaluation capabilities to continuously assess AI agent quality. Evaluation works by running **evaluators** against execution **traces** and producing quality scores you can track over time through the AMP Console.
+
+## Why Evaluate Agents?
+
+Traditional software is deterministic: given the same input, you get the same output. Tests pass or fail consistently. AI agents break this assumption. The same prompt can produce:
+
+- Different final answers (correct, partially correct, or wrong)
+- Different tool call sequences (efficient or roundabout)
+- Different reasoning paths (sound or flawed)
+- Different error modes (graceful fallback or hallucinated response)
+
+This non-determinism means you cannot test an agent once and trust it forever. A prompt that worked yesterday might fail tomorrow because the model's behavior shifted, a tool's API changed, or context retrieval returned different documents.
+
+Continuous evaluation addresses this by enabling:
+
+- **Regression detection**: catch quality drops before users notice
+- **Production monitoring**: track quality trends across real traffic
+- **Failure analysis**: identify which failure modes to fix next
+- **Data-driven improvement**: measure the impact of changes over time
+
+## Trace-Based Evaluation
+
+Evaluation in AMP is built on **traces**, the detailed execution records that capture every step of an agent's work. When an agent processes a request, AMP instrumentation records the entire execution as a structured trace containing LLM calls, tool invocations, retrieval operations, and agent reasoning steps (see [Trace Attributes Captured](./observability.mdx#trace-attributes-captured)).
+
+Evaluation runs **separately from the agent**, analyzing these traces after the agent has finished executing. This architecture provides several advantages:
+
+- **Zero performance impact**: evaluation never slows down or interferes with the agent's runtime
+- **Framework-agnostic**: any agent that produces OpenTelemetry traces can be evaluated, regardless of framework (LangChain, CrewAI, OpenAI Agents, or custom)
+- **Retrospective analysis**: you can evaluate old traces with new evaluators without re-running the agent
+
+```mermaid
+flowchart LR
+ A[Agent runs] --> B[Traces collected]
+ B --> C[Monitor runs evaluators]
+ C --> D[Scores produced]
+ D --> E[Dashboard & charts]
+```
+
+## Evaluators
+
+Evaluating an agent is not just about checking whether the final answer is correct. Even when the output looks right, the agent might have taken a wasteful path to get there: calling redundant tools, looping unnecessarily, or failing to recover from errors gracefully. A single agent interaction has multiple dimensions of quality:
+
+- **Accuracy**: is the information factually correct?
+- **Helpfulness**: does the response address what the user actually needed?
+- **Safety**: did any step produce harmful or policy-violating content?
+- **Tool usage**: did the agent use the right tools? Did it avoid unnecessary or redundant calls?
+- **Error recovery**: when a tool call failed or returned unexpected results, did the agent adapt?
+- **Efficiency**: did the agent complete the task without unnecessary steps or excessive token usage?
+- **Reasoning**: were the agent's decisions logical and purposeful?
+- **Tone**: was the communication appropriate and professional?
+
+Each dimension needs its own evaluator, a specific check that scores one aspect of quality. By combining multiple evaluators, you build a comprehensive quality profile that covers both the output and the behavior that produced it.
+
+AMP includes **24 built-in evaluators** across these dimensions (see [Built-in Evaluators](#built-in-evaluators) for the full reference). You can also [create custom evaluators](#custom-evaluators) for domain-specific quality checks. Built-in evaluators fall into two categories:
+
+<Tabs>
+<TabItem value="rule-based" label="Rule-Based">
+
+Deterministic checks that measure objective, quantifiable metrics. They are fast, free, and produce consistent results: the same trace always gets the same score.
+
+**Best for**: latency, token usage, response length, required tools, prohibited content. Anything that can be measured with rules rather than judgment.
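The linear degradation used by checks like Latency Performance can be sketched in a few lines (a hypothetical illustration, not the AMP implementation):

```python
def latency_score(latency_ms: float, max_latency_ms: float = 30_000) -> float:
    """Deterministic rule: 1.0 within the limit, degrading linearly above it.

    Parameter names mirror the Latency Performance evaluator's defaults,
    but the exact scoring curve is an assumption.
    """
    if latency_ms <= max_latency_ms:
        return 1.0
    # Linear degradation: a run taking twice the limit scores 0.0
    return max(0.0, 1.0 - (latency_ms - max_latency_ms) / max_latency_ms)
```

The same trace always yields the same score, which is what makes rule-based evaluators cheap enough to run at high volume.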
+
+</TabItem>
+<TabItem value="llm-as-judge" label="LLM-as-Judge">
+
+Use a large language model to assess subjective qualities that rules cannot capture. The evaluator sends structured trace data to the LLM with scoring instructions, and the LLM returns a score with an explanation. They require an API key for a [supported LLM provider](#supported-llm-providers). See [Configuring LLM-as-Judge Evaluators](#configuring-llm-as-judge-evaluators) for configuration details.
+
+**Best for**: helpfulness, accuracy, safety, tone, reasoning quality. Anything where a human reviewer would need to read and judge the output.
+
+</TabItem>
+</Tabs>
+
+| | Rule-Based | LLM-as-Judge |
+|---|---|---|
+| **Speed** | Instant | Seconds (LLM API call) |
+| **Cost** | Free | LLM API cost per evaluation |
+| **Consistency** | Fully deterministic | May vary slightly between runs |
+| **Best for** | Objective, measurable metrics | Subjective quality assessment |
+
+## Evaluation Levels
+
+A trace captures the full request lifecycle, which often involves multiple agents, numerous LLM calls, and tool invocations. For example, a travel booking request might produce a trace like this:
+
+```
+Trace (user request → final response)
+│
+├── AgentSpan: "supervisor"
+│ ├── LLMSpan: reasoning ("User wants to book a flight. Let me find options.")
+│ ├── ToolSpan: search_flights (from: NYC, to: Tokyo)
+│ ├── LLMSpan: reasoning ("Found 3 flights. Delegating booking to the travel agent.")
+│ ├── ToolSpan: delegate_to_agent ("travel-agent")
+│ │ └── AgentSpan: "travel-agent"
+│ │ ├── LLMSpan: reasoning ("Booking the cheapest option.")
+│ │ └── ToolSpan: book_flight (flight_id: AA100)
+│ └── LLMSpan: reasoning ("Flight booked successfully.")
+│
+└── AgentSpan: "itinerary-formatter"
+ ├── LLMSpan: reasoning ("Let me format the booking into an itinerary.")
+ └── ToolSpan: format_itinerary (booking: CONF-12345)
+```
+
+Not all evaluators need the same data. An accuracy evaluator needs the full trace (input, output, all tool calls), while a safety evaluator needs to inspect each LLM call individually, since harmful content might appear in intermediate reasoning even if the final response filters it out. An efficiency evaluator might only care about a single agent's behavior within a multi-agent trace.
+
+Evaluators operate at one of three levels. The level determines what data the evaluator receives and how many times it runs per trace.
+
+<Tabs>
+<TabItem value="trace" label="Trace Level">
+
+Evaluates the **complete execution** from user input to final output. The evaluator sees everything: all tool calls, retrieved documents, LLM interactions, and end-to-end metrics. Produces **one score per trace**. This is the most common level.
+
+- *Was the final response helpful and accurate?*
+- *Is the response grounded in tool results and retrieved documents?*
+- *Did the request complete within acceptable time?*
+- *Were the right tools used across all agents?*
+
+</TabItem>
+<TabItem value="agent" label="Agent Level">
+
+Evaluates **individual agent behavior** within the trace. The evaluator sees a single agent's reasoning steps, tool calls, and decisions, isolated from other agents. Produces **one score per agent execution** captured within the trace.
+
+- *Did the planner agent create a sound execution plan?*
+- *Was the executor agent efficient, or did it loop unnecessarily?*
+- *Did this agent recover gracefully from errors?*
+- *Did the agent use the right subset of its available tools?*
+
+In a trace with 3 agents, an agent-level evaluator runs 3 times, producing 3 separate scores. This lets you compare agents within the same trace and identify which one needs improvement.
+
+</TabItem>
+<TabItem value="llm" label="LLM Level">
+
+Evaluates **each individual LLM call** within the trace. The evaluator sees a single model interaction: the messages sent, the response returned, and per-call metrics like token usage. Produces **one score per LLM call**.
+
+- *Was this LLM response safe and free of harmful content?*
+- *Was the tone appropriate for the context?*
+- *Was the response coherent and well-structured?*
+- *Is this model call cost-efficient?*
+
+In a trace with 5 LLM calls, an LLM-level evaluator runs 5 times, catching the specific call that produced unsafe content even if the final response filtered it out.
+
+</TabItem>
+</Tabs>
+
+### How Evaluators Are Dispatched
+
+You don't need to configure iteration logic. The system inspects each evaluator's level and dispatches automatically:
+
+```
+Trace with 3 agents and 5 LLM calls:
+
+Trace-level evaluator: runs 1 time (once for the whole trace)
+Agent-level evaluator: runs 3 times (once per agent)
+LLM-level evaluator: runs 5 times (once per LLM call)
+```
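The dispatch rule amounts to a simple mapping from evaluator level to iteration count. A sketch (field names are illustrative assumptions, not AMP's trace schema):

```python
def run_count(evaluator_level: str, trace: dict) -> int:
    """Return how many times an evaluator at the given level runs per trace."""
    if evaluator_level == "trace":
        return 1  # once for the whole trace
    if evaluator_level == "agent":
        return len(trace["agents"])  # once per agent execution
    if evaluator_level == "llm":
        return len(trace["llm_calls"])  # once per LLM call
    raise ValueError(f"unknown level: {evaluator_level}")
```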
+
+## Custom Evaluators
+
+Built-in evaluators cover common quality dimensions, but every agent has domain-specific requirements: checking that responses follow a particular format, validating against business rules, or scoring domain-specific accuracy. Custom evaluators let you define your own evaluation logic and use it alongside built-in evaluators in any monitor.
+
+Custom evaluators are created in the AMP Console and come in two types. Both types receive one of three data models depending on the evaluation level you select:
+
+- **Trace level**: receives a `Trace` object (full execution from input to output)
+- **Agent level**: receives an `AgentTrace` object (single agent's steps and decisions)
+- **LLM level**: receives an `LLMSpan` object (single LLM call with messages and response)
+
+<Tabs>
+<TabItem value="code" label="Code Evaluator">
+
+Write a Python function that receives trace data and returns a score. Your function can implement any logic: deterministic rules, external API calls, regex matching, statistical analysis, or any combination.
+
+```python
+def evaluate(trace: Trace) -> EvalResult:
+ # Your evaluation logic
+ if not trace.output:
+ return EvalResult.skip("No output to evaluate")
+ score = 1.0 if len(trace.output) > 100 else 0.5
+ return EvalResult(score=score, explanation="Checked output length")
+```
+
+</TabItem>
+<TabItem value="llm-judge" label="LLM Judge">
+
+Write a prompt template that tells an LLM how to evaluate the trace. The system handles sending the prompt to the configured LLM, parsing the response into a structured score and explanation, and retrying on failures. Prompt templates use placeholders to inject trace data. The same model, temperature, and criteria configuration used by built-in LLM-as-Judge evaluators applies to custom LLM judges.
+
+```text
+You are evaluating a customer support agent's response.
+
+User query: {trace.input}
+
+Agent response: {trace.output}
+
+Tools used: {trace.get_tool_steps()}
+
+Evaluate whether the agent:
+1. Correctly understood and addressed the customer's issue
+2. Provided accurate information consistent with the tool results
+3. Maintained a professional and empathetic tone
+
+Score 1.0 if all criteria are met, 0.5 if partially met,
+0.0 if the response is incorrect or unhelpful.
+```
+
+</TabItem>
+</Tabs>
+
+### Configuration Parameters
+
+Custom evaluators can define **configurable parameters**: typed inputs (string, integer, float, boolean, array, enum) with defaults and constraints. Users set parameter values when adding the evaluator to a monitor, making a single evaluator reusable across different contexts.
+
+For example, a "Response Format Check" evaluator might define a `required_format` parameter (enum: `json`, `markdown`, `plain`) so different monitors can check for different formats without duplicating the evaluator.
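A sketch of how such a parameterized check might look as a code evaluator (the function shape and parameter handling are illustrative, not the AMP API):

```python
import json

def check_response_format(output: str, required_format: str = "json") -> float:
    """Hypothetical 'Response Format Check' with a required_format parameter."""
    if required_format == "json":
        try:
            json.loads(output)
            return 1.0
        except ValueError:  # JSONDecodeError subclasses ValueError
            return 0.0
    if required_format == "markdown":
        # Crude heuristic: reward an obvious Markdown heading
        return 1.0 if output.lstrip().startswith("#") else 0.5
    return 1.0  # "plain": any text passes
```

Different monitors can reuse the same evaluator with different `required_format` values instead of duplicating the logic.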
+
+:::info Tutorial
+For a step-by-step walkthrough of creating custom evaluators in the AMP Console, see the [Custom Evaluators](../tutorials/custom-evaluators.mdx) tutorial.
+:::
+
+## Monitors
+
+A **monitor** is a configured evaluation job that runs one or more evaluators against agent traces. Each monitor belongs to a specific agent and environment, and produces scores that are tracked over time.
+
+### Continuous Monitors
+
+Continuous monitors run on a **recurring schedule**, evaluating new traces on each run. Use these for ongoing production quality monitoring.
+
+- Configure an **interval** (minimum 5 minutes) that controls how often the monitor runs.
+- Can be **started** and **suspended** at any time.
+- When started, the first evaluation runs within 60 seconds.
+- Each run evaluates the traces produced since the previous run.
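Sketched in code, each run's trace window picks up where the previous run ended (names and the return shape are assumptions for illustration):

```python
from datetime import datetime, timedelta

def next_window(last_run_end: datetime, interval_minutes: int = 5):
    """Return the (start, end) trace window for the next scheduled run."""
    if interval_minutes < 5:
        raise ValueError("minimum interval is 5 minutes")
    return last_run_end, last_run_end + timedelta(minutes=interval_minutes)
```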
+
+### Historical Monitors
+
+Historical monitors perform a **one-time evaluation** over a specific time window. Use these to analyze past agent behavior, such as reviewing interactions from the past week after a deployment or evaluating a specific incident period.
+
+- Set a **start time** and **end time** to define the evaluation window.
+- Evaluation **runs immediately** when created.
+- Cannot be started or suspended after completion.
+
+### Monitor Statuses
+
+The overall monitor status is derived from its configuration and latest run:
+
+| Status | Meaning |
+|--------|---------|
+| **Active** | Running on schedule (continuous) or completed successfully (historical) |
+| **Suspended** | Paused, can be restarted (continuous monitors only) |
+| **Failed** | The most recent run encountered an error |
+
+### Monitor Runs
+
+Each time a monitor evaluates traces, it creates a **run**. A run progresses through the following statuses:
+
+| Run Status | Meaning |
+|------------|---------|
+| **Pending** | Run is queued and waiting to start |
+| **Running** | Evaluators are actively processing traces |
+| **Success** | All evaluators completed successfully |
+| **Failed** | An error occurred. Check run logs for details |
+
+For continuous monitors, each scheduled execution creates a new run. You can view the full run history, rerun failed runs, and inspect logs for any run from the monitor dashboard.
+
+## Scores and Results
+
+### How Scoring Works
+
+Every evaluator produces a score from **0.0** (worst) to **1.0** (best) for each evaluated item (trace, agent execution, or LLM call depending on the evaluator's level). Each score also includes an **explanation**: a brief description of why that score was given.
+
+A score of **0.0** is a real measurement. It means the evaluator ran, analyzed the data, and determined the agent failed completely. This is different from a **skip**, which means the evaluator could not run at all (for example, an LLM-level evaluator on a trace with no LLM calls, or a context relevance evaluator on a trace with no retrieval operations). Skipped evaluations are tracked separately and do not affect aggregated scores.
+
+### Aggregated Metrics
+
+Individual scores are aggregated across all evaluated traces in a run into summary metrics:
+
+- **Mean score**: average quality across all evaluations
+- **Pass rate**: percentage of evaluations that scored at or above the evaluator's threshold
+- **Min / Max**: boundary scores showing the best and worst cases
+
+A high mean with a high pass rate indicates consistent quality. A high mean with a low pass rate signals inconsistency: the agent performs well on most traces but fails on a significant portion.
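The aggregation is straightforward to express (a sketch; the 0.7 threshold is an illustrative default, since thresholds are configured per evaluator):

```python
def aggregate(scores: list[float], threshold: float = 0.7) -> dict:
    """Summarize individual evaluator scores for one monitor run."""
    passed = sum(1 for s in scores if s >= threshold)
    return {
        "mean": sum(scores) / len(scores),
        "pass_rate": passed / len(scores),  # scored at or above threshold
        "min": min(scores),
        "max": max(scores),
    }
```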
+
+### Viewing Results
+
+Results are available in two places in the AMP Console: the **monitor dashboard** and the **trace view**.
+
+#### Monitor Dashboard
+
+The monitor dashboard provides an overview of evaluation results across all traces in a time window:
+
+- **Radar chart**: mean scores across all evaluators at a glance, showing agent strengths and weaknesses
+- **Evaluation summary**: total evaluation count, weighted average score, and **per-level statistics** (number of traces evaluated, agent executions evaluated, and LLM invocations evaluated, each with evaluator counts and skip rates)
+- **Time-series trends**: how each evaluator's score changes over time, useful for spotting regressions or improvements after deployments
+- **Per-evaluator breakdowns**: detailed metrics (mean, pass rate, count, skipped) for each evaluator
+- **Score breakdown by agent**: when agent-level evaluators are configured, a table showing mean scores per evaluator for each agent in the trace, with execution counts. Helps identify which agent in a multi-agent system needs improvement.
+- **Score breakdown by model**: when LLM-level evaluators are configured, a table showing mean scores per evaluator for each LLM model used, with invocation counts. Helps compare quality across different models.
+
+#### Trace View
+
+Evaluation scores are also visible directly in the trace view, making it easy to debug specific agent interactions:
+
+- **Traces table**: a Score column displays the average evaluator score for each trace, color-coded from red (low) to green (high)
+- **Span header**: when you select a span, evaluator scores appear as color-coded percentage chips alongside duration, token count, and model information
+- **Scores tab**: a dedicated tab in the span details panel shows each evaluator's result with the score and a markdown-rendered explanation. Skipped evaluators display a skip reason. Trace-level scores appear on the root span, while agent-level and LLM-level scores appear on their respective spans.
+
+See the [Evaluation Monitors](../tutorials/evaluation-monitors.mdx) tutorial for a step-by-step walkthrough.
+
+---
+
+## Built-in Evaluators
+
+<Tabs>
+<TabItem value="rule-based" label="Rule-Based">
+
+Deterministic evaluators that measure objective, quantifiable metrics. Fast, free, and fully consistent.
+
+| Evaluator | Level | Description | Key Parameters |
+|-----------|-------|-------------|----------------|
+| **Length Compliance** | Trace | Checks if output length is within configured min/max character bounds | `min_length` (default: 1), `max_length` (default: 10,000) |
+| **Latency Performance** | Trace | Scores execution speed against a configurable time limit. Degrades linearly above the limit | `max_latency_ms` (default: 30,000ms) |
+| **Content Safety** | Trace | Checks output for prohibited strings and patterns | `prohibited_strings`, `prohibited_patterns`, `case_sensitive` |
+| **Content Coverage** | Trace | Measures how many required strings and patterns were found in the output | `required_strings`, `required_patterns`, `case_sensitive` |
+| **Token Efficiency** | Trace | Checks total token usage against a configurable limit. Degrades linearly above it | `max_tokens` (default: 10,000) |
+| **Iteration Efficiency** | Agent | Scores whether the agent completed within iteration limits (measured by LLM call count) | `max_iterations` (default: 10) |
+| **Tool Coverage** | Agent | Measures how many required tools were invoked at least once | `required_tools` |
+| **Step Success Rate** | Agent | Measures the ratio of tool execution steps completed without errors | `min_success_rate` (default: 0.8) |
+| **Sequence Adherence** | Agent | Measures how closely the actual tool call sequence matches the expected order | `expected_sequence`, `strict` (default: false) |
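As an illustration of how the Content Safety parameters combine, here is a hypothetical sketch (not the AMP implementation):

```python
import re

def content_safety(output: str, prohibited_strings=(), prohibited_patterns=(),
                   case_sensitive: bool = False) -> float:
    """Return 0.0 if any prohibited string or pattern appears, else 1.0."""
    haystack = output if case_sensitive else output.lower()
    for s in prohibited_strings:
        if (s if case_sensitive else s.lower()) in haystack:
            return 0.0
    flags = 0 if case_sensitive else re.IGNORECASE
    for pattern in prohibited_patterns:
        if re.search(pattern, output, flags):
            return 0.0
    return 1.0
```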
+
+</TabItem>
+<TabItem value="llm-as-judge" label="LLM-as-Judge">
+
+LLM-scored evaluators that assess subjective qualities. Require an API key for a [supported LLM provider](#supported-llm-providers).
+
+| Evaluator | Level | Description |
+|-----------|-------|-------------|
+| **Accuracy** | Trace | Scores factual correctness of information in the response |
+| **Clarity** | Trace | Scores readability, structure, and absence of ambiguity |
+| **Completeness** | Trace | Checks whether the response addresses all sub-questions and requirements in the input |
+| **Context Relevance** | Trace | Scores whether documents retrieved by RAG pipelines are relevant to the query |
+| **Groundedness** | Trace | Verifies that factual claims are grounded in tool results or retrieved documents |
+| **Helpfulness** | Trace | Scores whether the response actually helps the user with what they asked for |
+| **Relevance** | Trace | Scores whether the final response is semantically relevant to the user's query |
+| **Error Recovery** | Agent | Scores how gracefully the agent detects and recovers from errors during execution |
+| **Instruction Following** | Agent | Checks whether the agent follows system prompt constraints and user instructions |
+| **Path Efficiency** | Agent | Scores whether the agent's execution path is efficient. Detects redundant steps and loops |
+| **Reasoning Quality** | Agent | Scores whether the agent's execution steps are logical, purposeful, and well-reasoned |
+| **Coherence** | LLM | Scores each LLM call for logical flow, internal consistency, and structure |
+| **Conciseness** | LLM | Scores each LLM call for unnecessary verbosity and filler content |
+| **Safety** | LLM | Checks each LLM call for harmful, toxic, biased, or policy-violating content |
+| **Tone** | LLM | Scores each LLM call for appropriate and professional tone |
+
+</TabItem>
+</Tabs>
+
+## Configuring LLM-as-Judge Evaluators
+
+All LLM-as-Judge evaluators share these configurable parameters:
+
+| Parameter | Default | Description |
+|-----------|---------|-------------|
+| **Model** | `openai/gpt-4o-mini` | The LLM model used for judging, in `provider/model` format (e.g., `anthropic/claude-sonnet-4-6`) |
+| **Criteria** | `quality, accuracy, and helpfulness` | Custom evaluation criteria the judge uses when scoring |
+| **Temperature** | `0.0` | LLM temperature. Lower values produce more consistent scores |
+
+The model you choose affects both the quality and cost of evaluation. More capable models (e.g., GPT-4o, Claude Sonnet) tend to produce more nuanced and accurate scores, while smaller models (e.g., GPT-4o-mini) are faster and cheaper. Choose based on the criticality of the evaluation. Safety checks may warrant a more capable model, while tone checks may work well with a smaller one.
+
+## Supported LLM Providers
+
+To use LLM-as-Judge evaluators, you need to provide an API key for at least one supported provider when creating a monitor:
+
+| Provider | API Key |
+|----------|---------|
+| **OpenAI** | `OPENAI_API_KEY` |
+| **Anthropic** | `ANTHROPIC_API_KEY` |
+| **Google AI Studio** | `GEMINI_API_KEY` |
+| **Groq** | `GROQ_API_KEY` |
+| **Mistral AI** | `MISTRAL_API_KEY` |
+
+Credentials are stored securely with the monitor and used only when the evaluation job runs. You only need to add each provider once per monitor. All evaluators using that provider share the same credentials.
+
diff --git a/documentation/versioned_docs/version-v0.10.x/concepts/observability.mdx b/documentation/versioned_docs/version-v0.10.x/concepts/observability.mdx
new file mode 100644
index 000000000..a08f0f9f6
--- /dev/null
+++ b/documentation/versioned_docs/version-v0.10.x/concepts/observability.mdx
@@ -0,0 +1,83 @@
+---
+sidebar_position: 1
+---
+
+# Observability
+
+WSO2 Agent Manager provides full-stack observability for AI agents — whether they are deployed through the platform or running externally. Traces, metrics, and logs flow into a centralized store that you can query and analyze through the AMP Console.
+
+## Overview
+
+Observability in AMP is built on [OpenTelemetry](https://opentelemetry.io/), the industry-standard framework for distributed tracing and instrumentation. Every agent interaction — LLM calls, tool invocations, MCP requests, retrieval operations, and agent reasoning steps — is captured as a structured trace and stored for analysis.
+
+## Auto-Instrumentation for Deployed Agents
+
+When you deploy an agent through WSO2 Agent Manager, observability is set up **automatically — no code changes required**.
+
+### What Gets Instrumented
+
+The Traceloop SDK (used under the hood) instruments a wide range of AI frameworks automatically:
+
+| Category | Examples |
+|----------|---------|
+| LLM providers | OpenAI, Anthropic, Azure OpenAI |
+| Agent frameworks | LangChain, LlamaIndex, CrewAI, Haystack |
+| Vector stores | Pinecone, Weaviate, Chroma, Qdrant |
+| MCP clients | Any MCP tool calls made by the agent |
+
+### Trace Attributes Captured
+
+Each span is enriched with metadata that makes it possible to evaluate and debug agent behavior:
+
+- **LLM spans**: model name, prompt tokens, completion tokens, latency, finish reason
+- **Tool spans**: tool name, input arguments, output, execution time
+- **Agent spans**: agent name, step number, reasoning output
+- **Root span**: agent ID, deployment ID, correlation ID, end-to-end latency
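As a rough illustration, an LLM span's metadata covers the fields listed above; the attribute keys and values below are assumptions for illustration, not AMP's exact schema:

```python
# Hypothetical shape of the metadata on an LLM span (keys are illustrative)
llm_span_attributes = {
    "llm.model": "gpt-4o-mini",   # model name
    "llm.prompt_tokens": 412,     # prompt tokens
    "llm.completion_tokens": 96,  # completion tokens
    "llm.latency_ms": 1830,       # latency
    "llm.finish_reason": "stop",  # finish reason
}
```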
+
+## Observability for External Agents
+
+Agents that are **not deployed through AMP** — for example, agents running locally, on-premises, or in a third-party cloud — can still send traces to AMP. These are called **Externally-Hosted Agents**.
+
+### Registration
+
+1. In the AMP Console, open your **Project** and click **+ Add Agent**.
+2. Choose **Externally-Hosted Agent**.
+3. Provide a **Name** and optional description, then click **Register**.
+4. The **Setup Agent** panel opens automatically with a **Zero-code Instrumentation Guide**.
+
+### Install the Package
+
+```bash
+pip install amp-instrumentation
+```
+
+### Generate an API Key
+
+In the Setup Agent panel, select a **Token Duration** and click **Generate**. Copy the key immediately — it will not be shown again.
+
+### Set Environment Variables
+
+```bash
+export AMP_OTEL_ENDPOINT="http://localhost:22893/otel"
+export AMP_AGENT_API_KEY="your-agent-api-key"
+```
+
+### Run with Instrumentation
+
+Wrap your agent's start command with `amp-instrument`:
+
+```bash
+amp-instrument python my_agent.py
+amp-instrument uvicorn app:main --reload
+amp-instrument poetry run python agent.py
+```
+
+No changes to your agent code are required. The same Traceloop-based auto-instrumentation applies — all supported AI frameworks are traced automatically.
+
+---
+
+## Trace Visibility in AMP Console
+
+Once traces start flowing in, you can explore them in the AMP Console under your agent's sidebar:
+
+- **OBSERVABILITY → Traces** — search and inspect individual traces by time range or correlation ID; expand a trace to see LLM spans, tool spans, and agent reasoning steps
\ No newline at end of file
diff --git a/documentation/versioned_docs/version-v0.10.x/contributing/contributing.mdx b/documentation/versioned_docs/version-v0.10.x/contributing/contributing.mdx
new file mode 100644
index 000000000..0f7b6aa8b
--- /dev/null
+++ b/documentation/versioned_docs/version-v0.10.x/contributing/contributing.mdx
@@ -0,0 +1,87 @@
+# Contributing Guidelines
+
+This document establishes guidelines for using GitHub Discussions and Issues for technical conversations about the Agent Manager.
+
+## Getting Started
+
+- Discussions, issues, feature ideas, bug reports, and design proposals from the community are welcome
+- For security vulnerabilities, please report to security@wso2.com as per the [WSO2 Security Reporting Guidelines](https://security.docs.wso2.com/en/latest/security-reporting/report-security-issues/)
+
+## Discussion Categories
+
+| Category | Purpose | Example Topics |
+|----------|---------|----------------|
+| **Announcements** | Official updates from maintainers | Releases, roadmap updates, breaking changes |
+| **General** | Open-ended conversations | Community introductions, general questions |
+| **Ideas** | Feature suggestions and brainstorming | New capabilities, integration ideas |
+| **Q&A** | Technical questions with answers | Implementation help, troubleshooting |
+| **Show and Tell** | Share projects and integrations | Agent implementations, use cases |
+| **Design Proposals** | Technical design discussions | Architecture changes, system design, new features requiring review |
+
+## When to Use Discussions vs Issues
+
+| Use Discussions For | Use Issues For |
+|---------------------|----------------|
+| Open-ended questions | Bug reports with reproduction steps |
+| Feature ideas and brainstorming | Concrete feature requests with clear scope |
+| Design proposals and RFCs | Actionable tasks and work items |
+| Community engagement | Pull request discussions |
+| Troubleshooting help | Security vulnerabilities (private) |
+
+## Guidelines
+
+### Starting a Discussion
+
+1. **Search first** - Check existing discussions to avoid duplicates
+2. **Choose the right category** - Use the category table above
+3. **Use a clear title** - Be specific and descriptive
+4. **Provide context** - Include relevant details, code snippets, or diagrams
+
+### Promoting Discussions to Issues
+
+When a discussion results in actionable work:
+1. Summarize the outcome in a final comment
+2. Create a linked GitHub Issue for implementation
+3. Reference the discussion in the issue for context
+
+## Feature Lifecycle
+
+Features progress through distinct stages from initial concept to implementation:
+
+### 1. Idea Stage
+
+High-level discussions about capabilities we want to explore start in the **Ideas** category. These are similar to epics—broad in scope with no imposed structure. Ideas allow open brainstorming before committing to specific solutions.
+
+### 2. Design Proposal Stage
+
+When an idea is refined into a well-scoped feature, create a discussion in the **Design Proposals** category. Proposals must follow the standard template:
+
+| Section | Description |
+|---------|-------------|
+| **Problem** | Describe the problem, who is affected, and the impact |
+| **User Stories** | Define user stories using the format "As a [role], I want [goal] so that [benefit]" |
+| **Existing Solutions** | How is this solved elsewhere? Include current workarounds and links to relevant implementations, docs, or design proposals |
+| **Proposed Solution** | Technical approach and design details |
+| **Alternatives Considered** | What other approaches were evaluated? |
+| **Open Questions** | Unresolved technical decisions that need input (if any) |
+| **Milestone Plan** | Implementation phases aligned with release milestones |
+
+#### Proposal Labels
+
+Use these labels to track design proposal status:
+
+| Label | Description |
+|-------|-------------|
+| `Proposal/Draft` | Initial proposal, still being written |
+| `Proposal/Review` | Ready for team review and feedback |
+| `Proposal/Approved` | Design accepted, ready for implementation |
+| `Proposal/Rejected` | Proposal declined |
+| `Proposal/Implemented` | Design fully implemented |
+
+### 3. Implementation Tracking
+
+Once a design proposal is approved:
+1. Create GitHub Issues for implementation tasks
+2. Link issues back to the design proposal discussion
+3. Assign issues to appropriate milestones
+4. Track progress through milestone completion
\ No newline at end of file
diff --git a/documentation/versioned_docs/version-v0.10.x/getting-started/managed-cluster.mdx b/documentation/versioned_docs/version-v0.10.x/getting-started/managed-cluster.mdx
new file mode 100644
index 000000000..168f82365
--- /dev/null
+++ b/documentation/versioned_docs/version-v0.10.x/getting-started/managed-cluster.mdx
@@ -0,0 +1,1085 @@
+---
+sidebar_position: 3
+---
+# On Managed Kubernetes
+
+Install the Agent Manager on managed Kubernetes services (AWS EKS, Google GKE, Azure AKS, etc.).
+
+## Overview
+
+This guide walks through deploying the Agent Manager on managed Kubernetes clusters provided by cloud platforms. The installation consists of two main phases:
+
+1. **OpenChoreo Platform Setup** - Install the base OpenChoreo platform (Control Plane, Data Plane, Build Plane, Observability Plane)
+2. **Agent Manager Installation** - Install the Agent Manager components on top of OpenChoreo
+
+**Important:** This setup is designed for development and exploration. For production deployments, additional security hardening, proper domain configuration, identity provider integration, and persistent storage are required.
+
+## Prerequisites
+
+### Kubernetes Cluster Requirements
+
+You need a managed Kubernetes cluster with the following specifications:
+
+- **Kubernetes version:** 1.32 or higher
+- **Cluster size:** At least 3 nodes
+- **Node resources:** Each node should have minimum 4 CPU cores and 8 GB RAM
+- **LoadBalancer support:** Cloud provider LoadBalancer service type support (or MetalLB)
+- **Public IP accessibility:** LoadBalancer must be publicly accessible for Let's Encrypt HTTP-01 validation
+
+### Supported Cloud Providers
+
+This guide has been tested with:
+
+- **Amazon Web Services (EKS)**
+- **Google Cloud Platform (GKE)**
+- **Microsoft Azure (AKS)**
+- Other managed Kubernetes services with LoadBalancer support
+
+### Required Tools
+
+Before installation, ensure you have the following tools installed:
+
+- **kubectl** (v1.32+) - Kubernetes command-line tool
+- **helm** (v3.12+) - Package manager for Kubernetes
+- **curl** - Command-line tool for transferring data
+
+Verify tools are installed:
+
+```bash
+kubectl version --client
+helm version
+curl --version
+```
+
+### Pre-installed Components
+
+Ensure your cluster has the following components installed:
+
+- **cert-manager** (v1.19.2+) - Required for TLS certificate management
+- **Gateway API CRDs** - Required for Gateway and HTTPRoute resources used by the API Gateway
+- **External Secrets Operator** (v1.3.2+) - Required for syncing external secrets into Kubernetes Secrets
+- **kgateway** (v2.2.1+) - Required for Kubernetes Gateway implementation
+
+
+**Don't have Gateway API CRDs, cert-manager, External Secrets Operator, or kgateway?** Install them as follows.
+
+Install Gateway API CRDs:
+
+```bash
+kubectl apply --server-side \
+ -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.4.1/experimental-install.yaml
+```
+
+Install cert-manager:
+
+```bash
+helm upgrade --install cert-manager oci://quay.io/jetstack/charts/cert-manager \
+ --namespace cert-manager \
+ --create-namespace \
+ --version v1.19.2 \
+ --set crds.enabled=true
+
+kubectl wait --for=condition=Available deployment/cert-manager -n cert-manager --timeout=180s
+```
+
+Install External Secrets Operator:
+
+```bash
+helm upgrade --install external-secrets oci://ghcr.io/external-secrets/charts/external-secrets \
+ --namespace external-secrets \
+ --create-namespace \
+ --version 1.3.2 \
+ --set installCRDs=true
+
+kubectl wait --for=condition=Available deployment/external-secrets -n external-secrets --timeout=180s
+```
+
+Install OpenBao for Workflow Plane secret management:
+
+```bash
+helm upgrade --install openbao oci://ghcr.io/openbao/charts/openbao \
+ --namespace openbao \
+ --create-namespace \
+ --version 0.25.6 \
+ --values https://raw.githubusercontent.com/wso2/agent-manager/amp/v0.10.0/deployments/single-cluster/values-openbao.yaml
+
+kubectl wait --for=condition=Ready pod -l app.kubernetes.io/name=openbao -n openbao --timeout=120s
+```
+
+Configure External Secrets ClusterSecretStore for OpenBao:
+
+```bash
+kubectl apply -f - <
+
+### Permissions
+
+Ensure you have sufficient permissions to:
+
+- Create namespaces
+- Deploy Helm charts
+- Create and manage Kubernetes resources (Deployments, Services, ConfigMaps, Secrets)
+- Create LoadBalancer services
+- Manage cert-manager Issuers and Certificates
+- Access cluster resources via kubectl
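+
+You can confirm these permissions up front with `kubectl auth can-i`. The sketch below checks a few of the resource kinds this guide creates (the resource list is an assumption based on the requirements above; it skips silently if `kubectl` is not on the PATH):
+
+```bash
+# Permission self-check (sketch): verify you can create the resource kinds
+# this guide needs in the target cluster.
+resources="namespaces deployments services configmaps secrets"
+for resource in ${resources}; do
+  if command -v kubectl >/dev/null 2>&1; then
+    if kubectl auth can-i create "${resource}" >/dev/null 2>&1; then
+      echo "OK: can create ${resource}"
+    else
+      echo "MISSING: cannot create ${resource}"
+    fi
+  fi
+done
+echo "Checked permissions for: ${resources}"
+```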
+
+## Phase 1: OpenChoreo Platform Setup
+
+The Agent Manager requires a complete OpenChoreo platform installation.
+
+**📚 Base Installation Guide: [OpenChoreo Managed Kubernetes Installation](https://openchoreo.dev/docs/getting-started/try-it-out/on-managed-kubernetes/)**
+
+### Setup Secrets Store
+
+Before installing OpenChoreo components, create a ClusterSecretStore for managing secrets:
+
+```bash
+# Create ClusterSecretStore with fake provider (for development)
+kubectl apply -f - </dev/null); do
+ kubectl rollout status "${sts}" -n openchoreo-observability-plane --timeout=900s
+done
+```
+
+**Note:** Follow the OpenChoreo guide to register the Observability Plane with the Control Plane (creating ObservabilityPlane CR with CA certificates).
+
+#### Step 5: Install Observability Modules
+
+After installing the Observability Plane, install the observability modules for logs and metrics:
+
+```bash
+# Install observability-logs-opensearch
+helm upgrade --install observability-logs-opensearch \
+ oci://ghcr.io/openchoreo/charts/observability-logs-opensearch \
+ --create-namespace \
+ --namespace openchoreo-observability-plane \
+ --version 0.3.1 \
+ --set openSearchSetup.openSearchSecretName="opensearch-admin-credentials" \
+ --timeout 600s
+
+# Enable log collection with fluent-bit
+helm upgrade observability-logs-opensearch \
+ oci://ghcr.io/openchoreo/charts/observability-logs-opensearch \
+ --namespace openchoreo-observability-plane \
+ --version 0.3.1 \
+ --reuse-values \
+ --set fluent-bit.enabled=true \
+ --timeout 600s
+
+# Install observability-metrics-prometheus
+helm upgrade --install observability-metrics-prometheus \
+ oci://ghcr.io/openchoreo/charts/observability-metrics-prometheus \
+ --create-namespace \
+ --namespace openchoreo-observability-plane \
+ --version 0.2.0 \
+ --timeout 600s
+```
+
+#### Step 6: Configure Observability Integration
+
+Link the Data Plane and Build Plane to the Observability Plane:
+
+```bash
+# Configure DataPlane to use observability plane
+kubectl patch dataplane default -n default --type merge \
+ -p '{"spec":{"observabilityPlaneRef":{"kind":"ObservabilityPlane","name":"default"}}}'
+
+# Configure BuildPlane to use observability plane
+kubectl patch buildplane default -n default --type merge \
+ -p '{"spec":{"observabilityPlaneRef":{"kind":"ObservabilityPlane","name":"default"}}}'
+```
+
+**For detailed plane registration steps (extracting CA certificates, creating plane CRs, creating secrets and configuring domains/TLS), refer to the [OpenChoreo Managed Kubernetes Guide](https://openchoreo.dev/docs/getting-started/try-it-out/on-managed-kubernetes/).**
+
+### Verify OpenChoreo Installation
+
+Before proceeding to Agent Manager installation, verify all OpenChoreo components are running:
+
+```bash
+# Check all OpenChoreo namespaces exist
+kubectl get namespace openchoreo-control-plane
+kubectl get namespace openchoreo-data-plane
+kubectl get namespace openchoreo-build-plane
+kubectl get namespace openchoreo-observability-plane
+
+# Verify pods are running
+kubectl get pods -n openchoreo-control-plane
+kubectl get pods -n openchoreo-data-plane
+kubectl get pods -n openchoreo-build-plane
+kubectl get pods -n openchoreo-observability-plane
+
+# Check OpenSearch is available (required for Agent Manager)
+kubectl get pods -n openchoreo-observability-plane -l app=opensearch
+
+# Verify plane registrations
+kubectl get dataplane default -n default
+kubectl get buildplane default -n default
+kubectl get observabilityplane default -n default
+```
+
+All pods should be in `Running` or `Completed` state before proceeding.
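+
+To spot stragglers quickly, you can filter each OpenChoreo namespace for pods that are not yet in those phases (a convenience sketch; namespaces `kubectl` cannot reach are skipped):
+
+```bash
+# List pods that are neither Running nor Succeeded in each OpenChoreo namespace (sketch).
+planes="openchoreo-control-plane openchoreo-data-plane openchoreo-build-plane openchoreo-observability-plane"
+for ns in ${planes}; do
+  if command -v kubectl >/dev/null 2>&1; then
+    kubectl get pods -n "${ns}" \
+      --field-selector=status.phase!=Running,status.phase!=Succeeded \
+      --no-headers 2>/dev/null
+  fi
+done
+echo "Scanned $(echo ${planes} | wc -w | tr -d ' ') namespaces"
+```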
+
+### Access OpenChoreo Console
+
+The OpenChoreo console is available at:
+- Console: `https://${CP_DOMAIN}`
+- API: `https://api.${CP_DOMAIN}`
+
+You can access it directly using the domain configured above.
+
+## Phase 2: Agent Manager Installation
+
+Now that OpenChoreo is installed, you can install the Agent Manager components.
+
+The Agent Manager installation consists of five main components:
+
+1. **Agent Manager** - Core platform (PostgreSQL, API, Console)
+2. **Platform Resources Extension** - Default Organization, Project, Environment, DeploymentPipeline
+3. **Observability Extension** - Traces Observer service
+4. **Build Extension** - Workflow templates for building container images
+5. **Evaluation Extension** - Workflow templates for running automated evaluations
+
+### Configuration Variables
+
+Set the following environment variables before installation:
+
+```bash
+# Version (default: 0.10.0)
+export VERSION="0.10.0"
+
+# Helm chart registry
+export HELM_CHART_REGISTRY="ghcr.io/wso2"
+
+# Namespaces
+export AMP_NS="wso2-amp"
+export BUILD_CI_NS="openchoreo-build-plane"
+export OBSERVABILITY_NS="openchoreo-observability-plane"
+export DEFAULT_NS="default"
+export DATA_PLANE_NS="openchoreo-data-plane"
+```
+
+### Step 1: Install Gateway Operator
+
+The Gateway Operator manages API Gateway resources and enables secure trace ingestion to the Observability Plane.
+
+```bash
+# Install Gateway Operator
+helm install gateway-operator \
+ oci://ghcr.io/wso2/api-platform/helm-charts/gateway-operator \
+ --version 0.2.0 \
+ --namespace ${DATA_PLANE_NS} \
+ --set logging.level=debug \
+ --set gateway.helm.chartVersion=0.3.0 \
+ --timeout 600s
+
+# Wait for Gateway Operator deployment
+kubectl wait --for=condition=Available \
+ deployment -l app.kubernetes.io/name=gateway-operator \
+ -n ${DATA_PLANE_NS} --timeout=300s
+```
+
+**Apply Gateway Operator Configuration:**
+
+This configuration sets up API authentication (using JWT/JWKS) and rate limiting policies:
+
+```bash
+# Apply Gateway Operator configuration
+kubectl apply -f https://raw.githubusercontent.com/wso2/agent-manager/amp/v0.10.0/deployments/values/api-platform-operator-full-config.yaml
+```
+
+**Grant RBAC for WSO2 API Platform CRDs:**
+
+This grants the data plane cluster-agent permissions to manage WSO2 API Platform resources:
+
+```bash
+kubectl apply -f - </dev/null || \
+ kubectl get svc obs-gateway-gateway-gateway-runtime -n ${DATA_PLANE_NS} \
+ -o jsonpath='{.status.loadBalancer.ingress[0].hostname}' 2>/dev/null)
+
+echo "Observability Gateway: ${OBS_GATEWAY_IP}"
+
+# Install the platform Helm chart with instrumentation URL configured
+helm install amp \
+ oci://${HELM_CHART_REGISTRY}/wso2-agent-manager \
+ --version 0.10.0 \
+ --namespace ${AMP_NS} \
+ --create-namespace \
+ --set console.config.instrumentationUrl="http://${OBS_GATEWAY_IP}:22893/otel" \
+ --timeout 1800s
+```
+
+**Note:** If you're using port-forwarding or exposing the gateway differently, update the `console.config.instrumentationUrl` accordingly.
+
+**Note:** If you change `agentManagerService.config.publisherApiKey.value`, make sure to set the same value for `ampEvaluation.publisher.apiKey` in the Evaluation Extension chart (Step 9). Both must match for evaluation score publishing to work.
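+
+Since `OBS_GATEWAY_IP` may still be empty while the LoadBalancer is provisioning, a small guard before running the `helm install` above avoids baking an invalid URL into the console configuration (a sketch; the URL format mirrors the install command above):
+
+```bash
+# Guard against an unresolved gateway address before installing (sketch).
+OBS_GATEWAY_IP="${OBS_GATEWAY_IP:-}"
+if [ -z "${OBS_GATEWAY_IP}" ]; then
+  echo "WARNING: Observability Gateway address not resolved yet; re-check the LoadBalancer service" >&2
+else
+  INSTRUMENTATION_URL="http://${OBS_GATEWAY_IP}:22893/otel"
+  echo "Instrumentation URL: ${INSTRUMENTATION_URL}"
+fi
+```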
+
+**Wait for components to be ready:**
+
+```bash
+# Wait for PostgreSQL StatefulSet
+kubectl wait --for=jsonpath='{.status.readyReplicas}'=1 \
+ statefulset/amp-postgresql -n ${AMP_NS} --timeout=600s
+
+# Wait for Agent Manager Service
+kubectl wait --for=condition=Available \
+ deployment/amp-api -n ${AMP_NS} --timeout=600s
+
+# Wait for Console
+kubectl wait --for=condition=Available \
+ deployment/amp-console -n ${AMP_NS} --timeout=600s
+```
+
+### Step 6: Install Platform Resources Extension
+
+The Platform Resources Extension creates default resources:
+
+- Default Organization
+- Default Project
+- Environment
+- DeploymentPipeline
+
+**Installation:**
+
+```bash
+# Install Platform Resources Extension
+helm install amp-platform-resources \
+ oci://${HELM_CHART_REGISTRY}/wso2-amp-platform-resources-extension \
+ --version 0.10.0 \
+ --namespace ${DEFAULT_NS} \
+ --timeout 1800s
+```
+
+**Note:** A failure to install this extension is non-fatal. The platform will still function, but the default resources may not be available.
+
+### Step 7: Install Observability Extension
+
+The observability extension includes the Traces Observer service for querying traces from OpenSearch.
+
+**Note:** The OpenTelemetry Collector ConfigMap should have been applied in Phase 1, Step 5. If you skipped it, apply it now:
+
+```bash
+# Verify or apply the OpenTelemetry collector config map
+kubectl apply -f https://raw.githubusercontent.com/wso2/agent-manager/amp/v0.10.0/deployments/values/oc-collector-configmap.yaml \
+ -n ${OBSERVABILITY_NS}
+```
+
+**Create ExternalSecrets for Observability Plane:**
+
+Before installing the observability extension, create the required ExternalSecrets:
+
+```bash
+# Create ExternalSecret for OpenSearch admin credentials
+kubectl apply -f - < -n
+
+# Check node resources
+kubectl top nodes
+
+# Check persistent volume claims
+kubectl get pvc -A
+```
+
+**LoadBalancer not getting external IP:**
+```bash
+# Check service events
+kubectl describe svc <service-name> -n <namespace>
+
+# For AWS EKS, ensure the service is internet-facing
+kubectl get svc <service-name> -n <namespace> -o yaml | grep aws-load-balancer-scheme
+```
+
+**Let's Encrypt certificate not being issued:**
+```bash
+# Check certificate status
+kubectl describe certificate <certificate-name> -n <namespace>
+
+# Check cert-manager logs
+kubectl logs -n cert-manager -l app=cert-manager
+
+# Ensure LoadBalancer is publicly accessible
+curl -v http://<loadbalancer-address>/.well-known/acme-challenge/test
+```
+
+**Gateway not becoming Programmed:**
+```bash
+# Check Gateway Operator logs
+kubectl logs -n openchoreo-data-plane -l app.kubernetes.io/name=gateway-operator
+
+# Check Gateway status
+kubectl describe apigateway obs-gateway -n openchoreo-data-plane
+```
+
+**Plane registration issues:**
+```bash
+# Verify planeID matches between plane CR and Helm values
+kubectl get dataplane default -n default -o yaml
+kubectl get buildplane default -n default -o yaml
+
+# Check Control Plane logs
+kubectl logs -n openchoreo-control-plane -l app.kubernetes.io/name=openchoreo-control-plane
+```
+
+**OpenSearch connectivity issues:**
+```bash
+# Check OpenSearch pods
+kubectl get pods -n openchoreo-observability-plane -l app=opensearch
+
+# Test OpenSearch connectivity
+kubectl run -it --rm debug --image=curlimages/curl --restart=Never -- \
+ curl -v http://opensearch.openchoreo-observability-plane.svc.cluster.local:9200
+```
+
+**AWS EKS specific issues:**
+```bash
+# Check LoadBalancer is internet-facing
+kubectl get svc <service-name> -n <namespace> -o jsonpath='{.metadata.annotations}'
+
+# Verify security groups allow traffic
+aws ec2 describe-security-groups --filters "Name=tag:kubernetes.io/cluster/<cluster-name>,Values=owned"
+```
+
+## Reference Configuration Files
+
+All configuration values files used in this guide are available in the repository:
+
+- **Control Plane Values**: [deployments/single-cluster/values-cp.yaml](https://raw.githubusercontent.com/wso2/agent-manager/amp/v0.10.0/deployments/single-cluster/values-cp.yaml)
+- **Data Plane Values**: [deployments/single-cluster/values-dp.yaml](https://raw.githubusercontent.com/wso2/agent-manager/amp/v0.10.0/deployments/single-cluster/values-dp.yaml)
+- **Build Plane Values**: [deployments/single-cluster/values-bp.yaml](https://raw.githubusercontent.com/wso2/agent-manager/amp/v0.10.0/deployments/single-cluster/values-bp.yaml)
+- **Observability Plane Values**: [deployments/single-cluster/values-op.yaml](https://raw.githubusercontent.com/wso2/agent-manager/amp/v0.10.0/deployments/single-cluster/values-op.yaml)
+- **Gateway Operator Config**: [deployments/values/api-platform-operator-full-config.yaml](https://raw.githubusercontent.com/wso2/agent-manager/amp/v0.10.0/deployments/values/api-platform-operator-full-config.yaml)
+- **Observability Gateway**: [deployments/values/obs-gateway.yaml](https://raw.githubusercontent.com/wso2/agent-manager/amp/v0.10.0/deployments/values/obs-gateway.yaml)
+- **OTEL Collector ConfigMap**: [deployments/values/oc-collector-configmap.yaml](https://raw.githubusercontent.com/wso2/agent-manager/amp/v0.10.0/deployments/values/oc-collector-configmap.yaml)
+- **OTEL Collector RestApi**: [deployments/values/otel-collector-rest-api.yaml](https://raw.githubusercontent.com/wso2/agent-manager/amp/v0.10.0/deployments/values/otel-collector-rest-api.yaml)
+
+You can customize these files for your specific deployment needs.
diff --git a/documentation/versioned_docs/version-v0.10.x/getting-started/quick-start.mdx b/documentation/versioned_docs/version-v0.10.x/getting-started/quick-start.mdx
new file mode 100644
index 000000000..56692de20
--- /dev/null
+++ b/documentation/versioned_docs/version-v0.10.x/getting-started/quick-start.mdx
@@ -0,0 +1,80 @@
+---
+sidebar_position: 1
+---
+# Quick Start Guide
+
+Get the Agent Manager running with a single install script inside a dev container!
+
+## Prerequisites
+
+Ensure the following before you begin:
+
+- **Docker** (Engine 26.0+ recommended)
+ Allocate at least 8 GB RAM and 4 CPUs.
+
+- **Mac users**: Use Colima for best compatibility
+
+ ```sh
+ colima start --vm-type=vz --vz-rosetta --cpu 4 --memory 8
+ ```
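+
+Before starting, you can confirm that Docker actually has the recommended resources allocated (a sketch; `NCPU` and `MemTotal` are standard `docker info` template fields, and the check is skipped if Docker is unavailable):
+
+```bash
+# Optional: check Docker's CPUs and memory against the 4 CPU / 8 GB minimum (sketch).
+if command -v docker >/dev/null 2>&1; then
+  docker info --format 'CPUs: {{.NCPU}}, Memory: {{.MemTotal}} bytes' 2>/dev/null
+fi
+min_mem_bytes=$((8 * 1024 * 1024 * 1024))
+echo "Minimum recommended: 4 CPUs, ${min_mem_bytes} bytes (8 GiB) of memory"
+```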
+
+## 🚀 Installation Using Dev Container
+
+The quick-start includes a dev container with all required tools pre-installed (kubectl, Helm, K3d). This ensures a consistent environment across different systems.
+
+### Step 1: Run the Dev Container
+
+```bash
+docker run --rm -it --name amp-quick-start \
+ -v /var/run/docker.sock:/var/run/docker.sock \
+ --network=host \
+ ghcr.io/wso2/amp-quick-start:v0.10.0
+```
+
+### Step 2: Run Installation Inside Container
+
+Once inside the container, run the installation script:
+
+```bash
+./install.sh
+```
+
+**Time:** ~15-20 minutes
+
+This installs everything you need:
+- ✅ K3d cluster
+- ✅ OpenChoreo platform
+- ✅ Agent Manager
+- ✅ Full observability stack
+
+## What Happens During Installation
+
+1. **Prerequisites Check**: Verifies Docker, kubectl, Helm, and K3d are available
+2. **K3d Cluster Setup**: Creates a local Kubernetes cluster named `amp-local`
+3. **CoreDNS Configuration**: Applies CoreDNS custom configuration for OpenChoreo
+4. **Machine ID Generation**: Generates machine IDs for Fluent Bit observability
+5. **Cluster Prerequisites**: Installs Cert Manager, Gateway API CRDs, External Secrets Operator, and kgateway
+6. **Secrets Setup**: Installs OpenBao for Workflow Plane and configures ClusterSecretStore with OpenBao backend
+7. **OpenChoreo Installation**: Installs OpenChoreo Control Plane, Data Plane, Workflow Plane, and Observability Plane
+8. **Gateway Operator**: Installs Gateway Operator with RBAC for WSO2 API Platform CRDs
+9. **AMP Thunder Extension**: Installs WSO2 AMP Thunder Extension
+10. **AI Gateway Extension**: Registers and installs WSO2 AI Gateway for LLM Provider management
+11. **Agent Management Platform**: Installs core platform (PostgreSQL, API, Console) and extensions (Secrets, Platform Resources, Observability, Build, Evaluation, Gateway)
+
+## Access Your Platform
+
+After installation completes, use the following endpoint to access the platform.
+
+- **Console**: [`http://localhost:3000`](http://localhost:3000)
+
+Login using the following credentials:
+- **Username**: `admin`
+- **Password**: `admin`
+
+## Uninstall
+
+**Platform only:**
+
+```bash
+./uninstall.sh
+```
diff --git a/documentation/versioned_docs/version-v0.10.x/getting-started/self-hosted-cluster.mdx b/documentation/versioned_docs/version-v0.10.x/getting-started/self-hosted-cluster.mdx
new file mode 100644
index 000000000..12ac28b0b
--- /dev/null
+++ b/documentation/versioned_docs/version-v0.10.x/getting-started/self-hosted-cluster.mdx
@@ -0,0 +1,1071 @@
+---
+sidebar_position: 2
+---
+# On Self-Hosted Kubernetes
+
+Install the Agent Manager on a self-hosted Kubernetes cluster with OpenChoreo.
+
+## Overview
+
+This guide walks through deploying the Agent Manager on a self-hosted Kubernetes cluster. The installation consists of two main phases:
+
+1. **OpenChoreo Platform Setup** - Install the base OpenChoreo platform (Control Plane, Data Plane, Build Plane, Observability Plane)
+2. **Agent Manager Installation** - Install the Agent Manager components on top of OpenChoreo
+
+**Important:** This setup is designed for development and exploration. For production deployments, additional security hardening, TLS configuration, and identity provider integration are required.
+
+## Prerequisites
+
+### Hardware Requirements
+
+- **Minimum Resources:**
+ - 8 GB RAM
+ - 4 CPU cores
+ - ~10 GB free disk space
+
+### Required Tools
+
+Before installation, ensure you have the following tools installed:
+
+- **Docker** (v26.0+) - Container runtime
+- **kubectl** (v1.32+) - Kubernetes command-line tool
+- **helm** (v3.12+) - Package manager for Kubernetes
+- **k3d** (v5.8+) - Lightweight Kubernetes for local development (optional, for local clusters)
+
+**Platform-Specific Notes:**
+- **macOS users:** Use Colima with VZ and Rosetta support
+- **Rancher Desktop users:** Must use containerd and configure HTTP registry access for the Build Plane
+
+Verify tools are installed:
+
+```bash
+docker --version
+kubectl version --client
+helm version
+k3d version # If using k3d for local development
+```
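+
+If you want to verify the minimum versions programmatically, `sort -V` gives a portable comparison. The sketch below uses helm's v3.12+ requirement with a sample installed version (substitute the real one, e.g. from `helm version --template '{{.Version}}'`):
+
+```bash
+# Version guard (sketch): compare an installed version against a required minimum.
+required="3.12.0"
+installed="3.14.2"  # sample value; replace with the output of your tool's version command
+lowest=$(printf '%s\n%s\n' "${required}" "${installed}" | sort -V | head -n1)
+if [ "${lowest}" = "${required}" ]; then
+  echo "helm ${installed} meets the ${required} minimum"
+else
+  echo "helm ${installed} is older than the required ${required}"
+fi
+```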
+
+### For Existing Kubernetes Clusters
+
+If you have an existing Kubernetes cluster, ensure:
+
+- Kubernetes 1.32+ is running
+- cert-manager, Gateway API CRDs, External Secrets Operator, and kgateway are pre-installed
+- An ingress controller is configured
+- Cluster has minimum 8 GB RAM and 4 CPU cores
+
+
+**Don't have Gateway API CRDs, cert-manager, External Secrets Operator, or kgateway?** Install them as follows.
+
+Install Gateway API CRDs:
+
+```bash
+kubectl apply --server-side \
+ -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.4.1/experimental-install.yaml
+```
+
+Install cert-manager:
+
+```bash
+helm upgrade --install cert-manager oci://quay.io/jetstack/charts/cert-manager \
+ --namespace cert-manager \
+ --create-namespace \
+ --version v1.19.2 \
+ --set crds.enabled=true
+
+kubectl wait --for=condition=Available deployment/cert-manager -n cert-manager --timeout=180s
+```
+
+Install External Secrets Operator:
+
+```bash
+helm upgrade --install external-secrets oci://ghcr.io/external-secrets/charts/external-secrets \
+ --namespace external-secrets \
+ --create-namespace \
+ --version 1.3.2 \
+ --set installCRDs=true
+
+kubectl wait --for=condition=Available deployment/external-secrets -n external-secrets --timeout=180s
+```
+
+Install OpenBao for Workflow Plane secret management:
+
+```bash
+helm upgrade --install openbao oci://ghcr.io/openbao/charts/openbao \
+ --namespace openbao \
+ --create-namespace \
+ --version 0.25.6 \
+ --values https://raw.githubusercontent.com/wso2/agent-manager/amp/v0.10.0/deployments/single-cluster/values-openbao.yaml
+
+kubectl wait --for=condition=Ready pod -l app.kubernetes.io/name=openbao -n openbao --timeout=120s
+```
+
+Configure External Secrets ClusterSecretStore for OpenBao:
+
+```bash
+kubectl apply -f - <
+
+### Permissions
+
+Ensure you have sufficient permissions to:
+
+- Create namespaces
+- Deploy Helm charts
+- Create and manage Kubernetes resources (Deployments, Services, ConfigMaps, Secrets)
+- Access cluster resources via kubectl
+
+## Phase 1: OpenChoreo Platform Setup
+
+The Agent Manager requires a complete OpenChoreo platform installation.
+
+**📚 Base Installation Guide: [OpenChoreo Self-Hosted Kubernetes Installation](https://openchoreo.dev/docs/getting-started/try-it-out/on-self-hosted-kubernetes/)**
+
+### Setup Secrets Store
+
+Before installing OpenChoreo components, create a ClusterSecretStore for managing secrets:
+
+```bash
+# Create ClusterSecretStore with fake provider (for development)
+kubectl apply -f - </dev/null); do
+ kubectl rollout status "${sts}" -n openchoreo-observability-plane --timeout=900s
+done
+```
+
+**Note:** Follow the OpenChoreo guide to register the Observability Plane with the Control Plane (creating the ObservabilityPlane CR with CA certificates) and to create the required secrets.
+
+#### Step 5: Install Observability Modules
+
+After installing the Observability Plane, install the observability modules for logs and metrics:
+
+```bash
+# Create ExternalSecret for OpenSearch admin credentials
+kubectl apply -f - <:10082`
+- **In-cluster only:** `registry.openchoreo-build-plane.svc.cluster.local:5000`
+
+**Verification:**
+
+You can verify the registry endpoint configured in your OpenChoreo Build Plane by checking the workflow templates:
+
+```bash
+kubectl get clusterworkflowtemplate ballerina-buildpack-ci -o yaml | grep REGISTRY_ENDPOINT
+```
+
+**Note:** This extension is optional. The platform will function without it, but build CI features may not work.
+
+### Step 8: Install Evaluation Extension
+
+The Evaluation Extension provides workflow templates for running automated evaluations against agent traces.
+
+**Installation:**
+
+```bash
+# Install Evaluation Extension
+helm install amp-evaluation-extension \
+ oci://${HELM_CHART_REGISTRY}/wso2-amp-evaluation-extension \
+ --version 0.10.0 \
+ --namespace ${BUILD_CI_NS} \
+ --timeout 1800s
+```
+
+**Note:** The default `publisher.apiKey` must match the `publisherApiKey.value` configured in the Agent Manager chart (Step 4). Both default to `amp-internal-api-key`. If you changed the Agent Manager's `publisherApiKey.value`, override it here:
+
+```bash
+helm install amp-evaluation-extension \
+ oci://${HELM_CHART_REGISTRY}/wso2-amp-evaluation-extension \
+ --version 0.10.0 \
+ --namespace ${BUILD_CI_NS} \
+ --set ampEvaluation.publisher.apiKey="your-custom-key" \
+ --timeout 1800s
+```
+
+**Note:** This extension is optional. The platform will function without it, but evaluation features may not work.
+
+### Step 9: Install AI Gateway Extension
+
+The AI Gateway Extension registers the AI Gateway with the Agent Manager for managing LLM Providers.
+
+**Important:** This step must be done **last** — it depends on the Agent Manager service being healthy and the Thunder Extension (IDP) being ready for token exchange.
+
+**Set configuration variables:**
+
+| Variable | Description | Default |
+|----------|-------------|---------|
+| `apiGateway.controlPlane.host` | Agent Manager service URL | `amp-api-gateway-manager.wso2-amp.svc.cluster.local:9243` |
+| `agentManager.apiUrl` | Agent Manager API URL (reachable from bootstrap job) | `http://amp-api.wso2-amp.svc.cluster.local:9000/api/v1` |
+| `agentManager.idp.tokenUrl` | Thunder Extension token endpoint | `http://amp-thunder-extension-service.amp-thunder.svc.cluster.local:8090/oauth2/token` |
+
+**Installation:**
+
+```bash
+# Install AI Gateway Extension
+helm install amp-ai-gateway \
+ oci://${HELM_CHART_REGISTRY}/wso2-amp-ai-gateway-extension \
+ --version 0.10.0 \
+ --namespace ${DATA_PLANE_NS} \
+ --set apiGateway.controlPlane.host="amp-api-gateway-manager.${AMP_NS}.svc.cluster.local:9243" \
+ --set agentManager.apiUrl="http://amp-api.${AMP_NS}.svc.cluster.local:9000/api/v1" \
+ --set agentManager.idp.tokenUrl="http://amp-thunder-extension-service.${THUNDER_NS}.svc.cluster.local:8090/oauth2/token" \
+ --timeout 1800s
+```
+
+**Wait for the bootstrap job to complete:**
+
+```bash
+kubectl wait --for=condition=complete job/amp-gateway-bootstrap \
+ -n ${DATA_PLANE_NS} --timeout=300s
+```
+
+**Verify the AI Gateway is running:**
+
+```bash
+# Check the APIGateway CR
+kubectl get apigateway default-ai -n ${DATA_PLANE_NS}
+
+# Check the bootstrap job
+kubectl get jobs -n ${DATA_PLANE_NS}
+```
+
+**Note:** This extension is optional. The platform will function without it, but the AI Gateway will not be available.
+
+
+## Verification
+
+Verify all components are installed and running:
+
+```bash
+# 1. Check OpenChoreo Platform Components
+echo "=== OpenChoreo Platform Status ==="
+kubectl get pods -n openchoreo-control-plane
+kubectl get pods -n openchoreo-data-plane
+kubectl get pods -n openchoreo-build-plane
+kubectl get pods -n openchoreo-observability-plane
+
+# 2. Check Agent Manager Components
+echo "=== Agent Manager Status ==="
+kubectl get pods -n wso2-amp
+
+# 3. Check Observability Extension
+echo "=== Observability Extension Status ==="
+kubectl get pods -n openchoreo-observability-plane | grep -E "amp-traces-observer"
+
+# 4. Check Build Extension
+echo "=== Build Extension Status ==="
+kubectl get pods -n openchoreo-build-plane | grep build-workflow
+
+# 5. Check Gateway Operator
+echo "=== Gateway Operator Status ==="
+kubectl get pods -n openchoreo-data-plane -l app.kubernetes.io/name=gateway-operator
+
+# 6. Check Gateway and API Resources
+echo "=== Gateway and API Resources ==="
+kubectl get apigateway obs-gateway -n openchoreo-data-plane
+kubectl get restapi traces-api-secure -n openchoreo-data-plane
+
+# 7. Check Helm Releases
+echo "=== Helm Releases ==="
+helm list -n openchoreo-control-plane
+helm list -n openchoreo-data-plane
+helm list -n openchoreo-build-plane
+helm list -n openchoreo-observability-plane
+helm list -n wso2-amp
+helm list -n default
+
+# 8. Verify Plane Registrations
+echo "=== Plane Registrations ==="
+kubectl get dataplane default -n default -o jsonpath='{.spec.observabilityPlaneRef}'
+kubectl get buildplane default -n default -o jsonpath='{.spec.observabilityPlaneRef}'
+```
+
+Expected output should show all pods in `Running` or `Completed` state.
+
+## Access the Platform
+
+### Access via Ingress (Recommended)
+
+If you're using the provided k3d/Traefik ingress configuration, the services are accessible directly:
+
+**OpenChoreo Platform:**
+- Console: `http://openchoreo.localhost:8080`
+- API: `http://api.openchoreo.localhost:8080`
+- Default credentials: `admin@openchoreo.dev` / `Admin@123`
+
+**Agent Manager:**
+- Console: Access through OpenChoreo console or via port forwarding (see below)
+- API: Access via port forwarding (see below)
+
+### Port Forwarding (Alternative)
+
+For direct access or non-ingress setups, use port forwarding:
+
+```bash
+# Agent Manager Console (port 3000)
+kubectl port-forward -n wso2-amp svc/amp-console 3000:3000 &
+
+# Agent Manager API (port 9000)
+kubectl port-forward -n wso2-amp svc/amp-api 9000:9000 &
+
+# Observability Gateway HTTP (port 22893)
+kubectl port-forward -n openchoreo-data-plane svc/obs-gateway-gateway-gateway-runtime 22893:22893 &
+
+# Observability Gateway HTTPS (port 22894)
+kubectl port-forward -n openchoreo-data-plane svc/obs-gateway-gateway-gateway-runtime 22894:22894 &
+```
+
+### Access URLs (Port Forwarding)
+
+After port forwarding is set up:
+
+- **Agent Manager Console**: `http://localhost:3000`
+- **Agent Manager API**: `http://localhost:9000`
+- **Observability Gateway (HTTP)**: `http://localhost:22893/otel`
+- **Observability Gateway (HTTPS)**: `https://localhost:22894/otel`
+
+### Handling Self-Signed Certificate Issues (HTTPS)
+
+If you need to use the HTTPS endpoint for OTEL exporters and encounter self-signed certificate issues, you can extract and use the certificate authority (CA) certificate from the cluster:
+
+```bash
+# Extract the CA certificate from the Kubernetes secret
+kubectl get secret obs-gateway-gateway-controller-tls \
+ -n openchoreo-data-plane \
+ -o jsonpath='{.data.ca\.crt}' | base64 --decode > ca.crt
+
+# Export the certificate path for OTEL exporters (use absolute path to the ca.crt file)
+export OTEL_EXPORTER_OTLP_CERTIFICATE=$(pwd)/ca.crt
+```
+
+
+## Custom Configuration
+
+### Using Custom Values File
+
+Create a custom values file (e.g., `custom-values.yaml`):
+
+```yaml
+agentManagerService:
+ replicaCount: 2
+ resources:
+ requests:
+ memory: 512Mi
+ cpu: 500m
+
+console:
+ replicaCount: 2
+
+postgresql:
+ auth:
+ password: "my-secure-password"
+```
+
+Install with custom values:
+
+```bash
+helm install amp \
+ oci://${HELM_CHART_REGISTRY}/wso2-agent-manager \
+ --version 0.10.0 \
+ --namespace ${AMP_NS} \
+ --create-namespace \
+ --timeout 1800s \
+ -f custom-values.yaml
+```
+
+## Production Considerations
+
+**Important:** This installation is designed for development and exploration only. For production deployments, you must:
+
+1. **Replace default credentials** with a proper identity provider (OAuth, SAML, etc.)
+2. **Configure TLS certificates** - Replace self-signed certificates with proper CA-signed certificates
+3. **Implement multi-cluster connectivity** - Configure proper networking between planes
+4. **Set up persistent observability storage** - Configure persistent volumes and backup strategies for OpenSearch
+5. **Size resources** - Adjust resource requests/limits based on workload requirements
+6. **Ensure high availability** - Deploy multiple replicas of critical components
+7. **Set up monitoring and alerting** - Configure proper monitoring and alerting for production workloads
+8. **Harden security** - Apply security best practices (network policies, RBAC, pod security policies)
+
+## Troubleshooting
+
+### Common Issues
+
+**Pods stuck in Pending state:**
+```bash
+# Check resource availability
+kubectl describe pod <pod-name> -n <namespace>
+
+# Check node resources
+kubectl top nodes
+```
+
+**Gateway not becoming Programmed:**
+```bash
+# Check Gateway Operator logs
+kubectl logs -n openchoreo-data-plane -l app.kubernetes.io/name=gateway-operator
+
+# Check Gateway status
+kubectl describe apigateway obs-gateway -n openchoreo-data-plane
+```
+
+**Plane registration issues:**
+```bash
+# Verify planeID matches between DataPlane CR and Helm values
+kubectl get dataplane default -n default -o yaml
+
+# Check Control Plane logs
+kubectl logs -n openchoreo-control-plane -l app.kubernetes.io/name=openchoreo-control-plane
+```
+
+**OpenSearch connectivity issues:**
+```bash
+# Check OpenSearch pods
+kubectl get pods -n openchoreo-observability-plane -l app=opensearch
+
+# Test OpenSearch connectivity
+kubectl run -it --rm debug --image=curlimages/curl --restart=Never -- \
+ curl -v http://opensearch.openchoreo-observability-plane.svc.cluster.local:9200
+```
+
+## Additional Configuration
+
+### k3d Cluster-Specific Setup (Optional)
+
+If you're using k3d and need to ensure `host.k3d.internal` DNS resolution works correctly, configure CoreDNS:
+
+```bash
+# Get the gateway IP for the k3d network
+CLUSTER_NAME="amp-local" # Adjust to your cluster name
+GATEWAY_IP=$(docker network inspect "k3d-${CLUSTER_NAME}" \
+ -f '{{range .IPAM.Config}}{{.Gateway}}{{end}}' 2>/dev/null || true)
+
+# Add host.k3d.internal to CoreDNS NodeHosts
+if [ -n "$GATEWAY_IP" ]; then
+ CURRENT_HOSTS=$(kubectl get cm coredns -n kube-system \
+ -o jsonpath='{.data.NodeHosts}')
+
+ # Check if entry already exists
+ if ! echo "$CURRENT_HOSTS" | grep -q "host.k3d.internal"; then
+ echo "Adding host.k3d.internal to CoreDNS..."
+ kubectl patch configmap coredns -n kube-system --type merge \
+ -p "{\"data\":{\"NodeHosts\":\"${CURRENT_HOSTS}\n${GATEWAY_IP} host.k3d.internal\n\"}}"
+
+ # Restart CoreDNS
+ kubectl rollout restart deployment coredns -n kube-system
+ kubectl rollout status deployment/coredns -n kube-system --timeout=60s
+ fi
+fi
+```
+
+### Reference Configuration Files
+
+All configuration values files used in this guide are available in the repository:
+
+- **Control Plane Values**: [deployments/single-cluster/values-cp.yaml](https://raw.githubusercontent.com/wso2/agent-manager/amp/v0.10.0/deployments/single-cluster/values-cp.yaml)
+- **Data Plane Values**: [deployments/single-cluster/values-dp.yaml](https://raw.githubusercontent.com/wso2/agent-manager/amp/v0.10.0/deployments/single-cluster/values-dp.yaml)
+- **Build Plane Values**: [deployments/single-cluster/values-bp.yaml](https://raw.githubusercontent.com/wso2/agent-manager/amp/v0.10.0/deployments/single-cluster/values-bp.yaml)
+- **Observability Plane Values**: [deployments/single-cluster/values-op.yaml](https://raw.githubusercontent.com/wso2/agent-manager/amp/v0.10.0/deployments/single-cluster/values-op.yaml)
+- **Gateway Operator Config**: [deployments/values/api-platform-operator-full-config.yaml](https://raw.githubusercontent.com/wso2/agent-manager/amp/v0.10.0/deployments/values/api-platform-operator-full-config.yaml)
+- **Observability Gateway**: [deployments/values/obs-gateway.yaml](https://raw.githubusercontent.com/wso2/agent-manager/amp/v0.10.0/deployments/values/obs-gateway.yaml)
+- **OTEL Collector ConfigMap**: [deployments/values/oc-collector-configmap.yaml](https://raw.githubusercontent.com/wso2/agent-manager/amp/v0.10.0/deployments/values/oc-collector-configmap.yaml)
+- **OTEL Collector RestApi**: [deployments/values/otel-collector-rest-api.yaml](https://raw.githubusercontent.com/wso2/agent-manager/amp/v0.10.0/deployments/values/otel-collector-rest-api.yaml)
+
+You can customize these files for your specific deployment needs.
diff --git a/documentation/versioned_docs/version-v0.10.x/img/evaluation/create-step1.png b/documentation/versioned_docs/version-v0.10.x/img/evaluation/create-step1.png
new file mode 100644
index 000000000..a53b69636
Binary files /dev/null and b/documentation/versioned_docs/version-v0.10.x/img/evaluation/create-step1.png differ
diff --git a/documentation/versioned_docs/version-v0.10.x/img/evaluation/create-step2-evaluators.png b/documentation/versioned_docs/version-v0.10.x/img/evaluation/create-step2-evaluators.png
new file mode 100644
index 000000000..5a60b201b
Binary files /dev/null and b/documentation/versioned_docs/version-v0.10.x/img/evaluation/create-step2-evaluators.png differ
diff --git a/documentation/versioned_docs/version-v0.10.x/img/evaluation/custom-eval-basic-details.png b/documentation/versioned_docs/version-v0.10.x/img/evaluation/custom-eval-basic-details.png
new file mode 100644
index 000000000..35aab1e9a
Binary files /dev/null and b/documentation/versioned_docs/version-v0.10.x/img/evaluation/custom-eval-basic-details.png differ
diff --git a/documentation/versioned_docs/version-v0.10.x/img/evaluation/custom-eval-code-details.png b/documentation/versioned_docs/version-v0.10.x/img/evaluation/custom-eval-code-details.png
new file mode 100644
index 000000000..8beea2aeb
Binary files /dev/null and b/documentation/versioned_docs/version-v0.10.x/img/evaluation/custom-eval-code-details.png differ
diff --git a/documentation/versioned_docs/version-v0.10.x/img/evaluation/custom-eval-code-editor.png b/documentation/versioned_docs/version-v0.10.x/img/evaluation/custom-eval-code-editor.png
new file mode 100644
index 000000000..8beea2aeb
Binary files /dev/null and b/documentation/versioned_docs/version-v0.10.x/img/evaluation/custom-eval-code-editor.png differ
diff --git a/documentation/versioned_docs/version-v0.10.x/img/evaluation/custom-eval-list.png b/documentation/versioned_docs/version-v0.10.x/img/evaluation/custom-eval-list.png
new file mode 100644
index 000000000..bf47d5d9d
Binary files /dev/null and b/documentation/versioned_docs/version-v0.10.x/img/evaluation/custom-eval-list.png differ
diff --git a/documentation/versioned_docs/version-v0.10.x/img/evaluation/custom-eval-llm-judge-editor.png b/documentation/versioned_docs/version-v0.10.x/img/evaluation/custom-eval-llm-judge-editor.png
new file mode 100644
index 000000000..28a1d0b1e
Binary files /dev/null and b/documentation/versioned_docs/version-v0.10.x/img/evaluation/custom-eval-llm-judge-editor.png differ
diff --git a/documentation/versioned_docs/version-v0.10.x/img/evaluation/evaluation-tab.png b/documentation/versioned_docs/version-v0.10.x/img/evaluation/evaluation-tab.png
new file mode 100644
index 000000000..ab0269b20
Binary files /dev/null and b/documentation/versioned_docs/version-v0.10.x/img/evaluation/evaluation-tab.png differ
diff --git a/documentation/versioned_docs/version-v0.10.x/img/evaluation/evaluator-drawer.png b/documentation/versioned_docs/version-v0.10.x/img/evaluation/evaluator-drawer.png
new file mode 100644
index 000000000..af04114ec
Binary files /dev/null and b/documentation/versioned_docs/version-v0.10.x/img/evaluation/evaluator-drawer.png differ
diff --git a/documentation/versioned_docs/version-v0.10.x/img/evaluation/llm-provider-config.png b/documentation/versioned_docs/version-v0.10.x/img/evaluation/llm-provider-config.png
new file mode 100644
index 000000000..11ed30138
Binary files /dev/null and b/documentation/versioned_docs/version-v0.10.x/img/evaluation/llm-provider-config.png differ
diff --git a/documentation/versioned_docs/version-v0.10.x/img/evaluation/monitor-dashboard.png b/documentation/versioned_docs/version-v0.10.x/img/evaluation/monitor-dashboard.png
new file mode 100644
index 000000000..3e1cd3861
Binary files /dev/null and b/documentation/versioned_docs/version-v0.10.x/img/evaluation/monitor-dashboard.png differ
diff --git a/documentation/versioned_docs/version-v0.10.x/img/evaluation/run-history.png b/documentation/versioned_docs/version-v0.10.x/img/evaluation/run-history.png
new file mode 100644
index 000000000..b779c29df
Binary files /dev/null and b/documentation/versioned_docs/version-v0.10.x/img/evaluation/run-history.png differ
diff --git a/documentation/versioned_docs/version-v0.10.x/img/evaluation/run-logs.png b/documentation/versioned_docs/version-v0.10.x/img/evaluation/run-logs.png
new file mode 100644
index 000000000..a79227e74
Binary files /dev/null and b/documentation/versioned_docs/version-v0.10.x/img/evaluation/run-logs.png differ
diff --git a/documentation/versioned_docs/version-v0.10.x/img/evaluation/span-scores-tab.png b/documentation/versioned_docs/version-v0.10.x/img/evaluation/span-scores-tab.png
new file mode 100644
index 000000000..53e04912f
Binary files /dev/null and b/documentation/versioned_docs/version-v0.10.x/img/evaluation/span-scores-tab.png differ
diff --git a/documentation/versioned_docs/version-v0.10.x/img/evaluation/traces-table-scores.png b/documentation/versioned_docs/version-v0.10.x/img/evaluation/traces-table-scores.png
new file mode 100644
index 000000000..1e319d425
Binary files /dev/null and b/documentation/versioned_docs/version-v0.10.x/img/evaluation/traces-table-scores.png differ
diff --git a/documentation/versioned_docs/version-v0.10.x/overview/what-is-amp.mdx b/documentation/versioned_docs/version-v0.10.x/overview/what-is-amp.mdx
new file mode 100644
index 000000000..a74d83054
--- /dev/null
+++ b/documentation/versioned_docs/version-v0.10.x/overview/what-is-amp.mdx
@@ -0,0 +1,62 @@
+---
+sidebar_position: 1
+---
+# WSO2 Agent Manager
+
+An open control plane designed for enterprises to deploy, manage, and govern AI agents at scale.
+
+## Overview
+
+WSO2 Agent Manager provides a comprehensive platform for enterprise AI agent management. It enables organizations to deploy AI agents (both internally hosted and externally deployed), monitor their behavior through full-stack observability, and enforce governance policies at scale.
+
+Built on [OpenChoreo](https://github.com/openchoreo/openchoreo) for internal agent deployments, the platform leverages OpenTelemetry for extensible instrumentation across multiple AI frameworks.
+
+## Key Features
+
+- **Deploy at Scale** - Deploy and run AI agents on Kubernetes with production-ready configurations
+- **Lifecycle Management** - Manage agent versions, configurations, and deployments from a unified control plane
+- **Governance** - Enforce policies, manage access controls, and ensure compliance across all agents
+- **Full Observability** - Capture traces, metrics, and logs for complete visibility into agent behavior
+- **Continuous Evaluation** - Assess agent quality with built-in evaluators across multiple dimensions — accuracy, safety, reasoning, tool usage, and more
+- **Auto-Instrumentation** - OpenTelemetry-based instrumentation for AI frameworks with zero code changes
+- **External Agent Support** - Monitor and govern externally deployed agents alongside internal ones
+
+## Components
+
+| Component | Description |
+|-----------|-------------|
+| **amp-instrumentation** | Python auto-instrumentation package for AI frameworks |
+| **amp-evaluation** *(experimental)* | Python SDK for building custom evaluators and running evaluation pipelines |
+| **amp-console** | Web-based management console for the platform |
+| **amp-api** | Backend API powering the control plane |
+| **amp-trace-observer** | API for querying and analyzing trace data |
+| **amp-python-instrumentation-provider** | Kubernetes init container for automatic Python instrumentation |
+
+## Helm Charts
+
+Deploy WSO2 Agent Manager on Kubernetes using our Helm charts:
+
+| Chart | Description |
+|-------|-------------|
+| `wso2-agent-manager` | Main platform deployment |
+| `wso2-amp-build-extension` | Build extension for OpenChoreo |
+| `wso2-amp-observability-extension` | Observability stack extension for OpenChoreo |
+| `wso2-amp-evaluation-extension` | Evaluation engine extension for Agent Manager |
+
+## Getting Started
+
+For installation instructions and a step-by-step guide, see the [Quick Start Guide](../getting-started/quick-start.mdx).
+
+## Contributing
+
+We welcome contributions from the community! Here's how you can help:
+
+1. **Report Issues** - Found a bug or have a feature request? Open an issue on GitHub
+2. **Submit Pull Requests** - Fork the repository, make your changes, and submit a PR
+3. **Improve Documentation** - Help us improve docs, tutorials, and examples
+
+Please ensure your contributions adhere to our coding standards and include appropriate tests.
+
+## License
+
+This project is licensed under the Apache License 2.0 - see the [LICENSE](https://github.com/wso2/agent-manager/blob/main/LICENSE) file for details.
diff --git a/documentation/versioned_docs/version-v0.10.x/tutorials/configure-agent-llm-configuration.mdx b/documentation/versioned_docs/version-v0.10.x/tutorials/configure-agent-llm-configuration.mdx
new file mode 100644
index 000000000..d8ee09686
--- /dev/null
+++ b/documentation/versioned_docs/version-v0.10.x/tutorials/configure-agent-llm-configuration.mdx
@@ -0,0 +1,249 @@
+---
+sidebar_position: 6
+---
+
+# Configure LLM Providers for an Agent
+
+Agents can be configured to use one or more LLM Service Providers registered at the organization level. The configuration process differs slightly between **Platform-hosted** and **External** agents,
+but both follow the same pattern: attach an org-level provider to the agent with an optional name, description, and guardrails.
+
+## Prerequisites
+
+- At least one LLM Service Provider registered at the org level (see [Register an LLM Service Provider](./register-llm-service-provider.mdx))
+- An agent created in a project (Platform-hosted or External)
+
+---
+
+## Overview: Agent Types
+
+| Type | Description |
+|---|---|
+| **Platform** | Agent code is built and deployed by the platform from a GitHub repository. The platform injects LLM credentials as environment variables. |
+| **External** | Agent is deployed and managed externally. The platform registers it and provides the invoke URL + API key for the LLM provider. |
+
+---
+
+## Configuring LLM for a Platform-Hosted Agent
+
+### Step 1: Open the Agent
+
+1. Navigate to your project (**Projects** → select project → **Agents**).
+2. Click on a **Platform**-tagged agent.
+3. In the left sidebar, click **Configure**.
+
+### Step 2: Add an LLM Provider
+
+The **Configure** page displays the **LLM Providers** section listing all LLM providers currently attached to this agent.
+
+1. Click **+ Add Provider**.
+2. Fill in the **Basic Details**:
+
+ | Field | Description | Example |
+ |---|---|---|
+ | **Name** | A logical name for this LLM binding within the agent | `OpenAI GPT5` |
+ | **Description** | Optional description | `Primary reasoning model` |
+
+3. Under **LLM Service Provider**, click **Select a Provider**.
+ - A side panel opens listing all org-level LLM Service Providers with their template, rate limiting status, and guardrails.
+ - Select the desired provider and close the panel.
+
+4. Optionally, under **Guardrails**, click **+ Add Guardrail** to attach guardrails specific to this agent's use of the provider.
+
+5. Click **Save**.
+
+### Step 3: Use the Provider in Agent Code
+
+After saving, the platform generates **environment variables** that are automatically injected into the agent's deployment runtime. You can view these on the LLM provider detail page under **Environment Variables References**:
+
+| Variable Name | Description |
+|---|---|
+| `<NAME>_API_KEY` | API key for authenticating with the LLM provider |
+| `<NAME>_BASE_URL` | Base URL of the LLM provider API endpoint |
+
+Where `<NAME>` is derived from the provider name (uppercased, with spaces replaced by underscores; e.g., `OPENAI_GPT5` for a provider named `OpenAI GPT5`).
+
+If your agent is already configured to read a different environment variable name, update the system-provided variable name and click **Save**.
+
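The name normalization can be sketched as follows (the exact rules are an assumption inferred from the example above, not a documented specification):

```python
def provider_env_prefix(provider_name: str) -> str:
    # Uppercase and replace separators with underscores, so that
    # "OpenAI GPT5" becomes "OPENAI_GPT5" (assumed normalization).
    return provider_name.strip().upper().replace(" ", "_").replace("-", "_")

# The injected variables would then be named:
#   f"{provider_env_prefix(name)}_API_KEY"
#   f"{provider_env_prefix(name)}_BASE_URL"
```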
+**Python code snippet** (shown in the UI):
+
+```python
+import os
+from openai import OpenAI
+
+apikey = os.environ.get('OPENAI_GPT5_API_KEY')
+url = os.environ.get('OPENAI_GPT5_BASE_URL')
+
+client = OpenAI(
+ base_url=url,
+ api_key="",
+ default_headers={"API-Key": apikey, "Authorization": ""}
+)
+```
+
+> **Note**: The platform also provides an **AI Prompt** snippet — a ready-made prompt you can paste into an AI coding assistant to automatically update your code to use the injected environment variables.
+
+### Step 4: Build and Deploy
+
+1. After configuring the LLM provider, click **Build** in the sidebar.
+2. Click **Trigger a Build** to build the agent from its GitHub source.
+3. Once the build completes, click **Deploy** to deploy to the target environment.
+4. The deployed agent URL appears on the **Overview** page (e.g., `http://default-default.localhost:19080/agent-name`).
+
+---
+
+## Configuring LLM for an External Agent
+
+### Step 1: Create and Register the Agent
+
+1. Navigate to your project (**Projects** → select project → **Agents**).
+2. Click **+ Add Agent**.
+3. On the **Add a New Agent** screen, select **Externally-Hosted Agent**.
+ > This option is for connecting an existing agent running outside the platform to enable observability and governance.
+4. Fill in the **Agent Details**:
+
+ | Field | Description | Example |
+ |---|---|---|
+ | **Name** | A unique identifier for the agent | `my-external-agent` |
+ | **Description** *(optional)* | Short description of what this agent does | `Customer support bot` |
+
+5. Click **Register**.
+
+After registration, the agent is created with status **Registered** and the **Setup Agent** panel opens automatically.
+
+---
+
+### Step 2: Instrument the Agent (Setup Agent)
+
+The **Setup Agent** panel provides a **Zero-code Instrumentation Guide** to connect your agent to the platform for observability (traces). Select your language from the **Language** dropdown (Python or Ballerina).
+
+#### Python
+
+1. **Install the AMP instrumentation package**:
+ ```bash
+ pip install amp-instrumentation
+ ```
+   This installs the package that instruments your agent and exports its traces.
+
+2. **Generate API Key** — choose a **Token Duration** (default: 1 year) and click **Generate**. Copy the token immediately — it will not be shown again.
+
+3. **Set environment variables**:
+ ```bash
+ export AMP_OTEL_ENDPOINT="http://localhost:22893/otel"
+   export AMP_AGENT_API_KEY="<api-key>"
+ ```
+   Sets the observability gateway endpoint and the agent-specific API key so traces can be exported securely.
+
+
+#### Ballerina
+
+1. **Import the Amp module** in your Ballerina program:
+ ```ballerina
+ import ballerinax/amp as _;
+ ```
+
+2. **Add the following to `Ballerina.toml`**:
+ ```toml
+ [build-options]
+ observabilityIncluded = true
+ ```
+
+3. **Update `Config.toml`**:
+ ```toml
+ [ballerina.observe]
+ tracingEnabled = true
+ tracingProvider = "amp"
+ ```
+
+4. **Generate API Key** — choose a **Token Duration** and click **Generate**. Copy the token immediately.
+
+5. **Set environment variables**:
+ ```bash
+ export BAL_CONFIG_VAR_BALLERINAX_AMP_OTELENDPOINT="http://localhost:22893/otel"
+   export BAL_CONFIG_VAR_BALLERINAX_AMP_APIKEY="<api-key>"
+ ```
+
+You can reopen the Setup Agent panel at any time from the agent **Overview** page by clicking **Setup Agent**.
+
+---
+
+### Step 3: Add an LLM Provider
+
+1. In the left sidebar, click **Configure**.
+2. The **Configure Agent** page shows the **LLM Providers** section (empty for a new agent).
+3. Click **+ Add Provider**.
+4. Fill in the **Basic Details**:
+
+ | Field | Description | Example |
+ |---|---|---|
+ | **Name** | A logical name for this LLM binding | `openai-provider` |
+ | **Description** | Optional description | `Main model for customer queries` |
+
+5. Under **LLM Service Provider**, click **Select a Provider**.
+ - A side panel opens listing all org-level LLM Service Providers, showing the template (e.g., OpenAI), deployment time, rate limiting status, and guardrails.
+ - Select the desired provider.
+
+6. Optionally, under **Guardrails**, click **+ Add Guardrail** to attach content safety policies.
+
+7. Click **Save**.
+
+---
+
+### Step 4: Connect Your Agent Code to the LLM
+
+Immediately after saving, the provider detail page is shown with a **Connect to your LLM Provider** section containing everything needed to call the LLM from your agent code:
+
+| Field | Description |
+|---|---|
+| **Endpoint URL** | The gateway URL for this provider — use this as the base URL in your LLM client |
+| **Header Name** | The HTTP header to pass the API key (`API-Key`) |
+| **API Key** | The generated client key — **copy it now**; it will not be shown again |
+| **Example cURL** | A ready-to-use cURL command showing the Endpoint URL, Header Name, and API Key together |
+
+Example cURL:
+
+```bash
+curl -X POST <endpoint-url> \
+  --header "API-Key: <api-key>" \
+ -d '{"your": "data"}'
+```
+
+Configure your agent's LLM client using the Endpoint URL as the base URL and pass the API Key in the `API-Key` header on every request.
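As a concrete sketch, here is how an agent might attach these values to every request using only the Python standard library. The endpoint URL and key below are placeholders; a real agent would read them from its configuration:

```python
import json
import urllib.request

def build_llm_request(endpoint_url: str, api_key: str, payload: dict) -> urllib.request.Request:
    """Build an HTTP request that carries the gateway key in the API-Key header."""
    return urllib.request.Request(
        endpoint_url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"API-Key": api_key, "Content-Type": "application/json"},
        method="POST",
    )
```

The same pattern applies to higher-level LLM clients: point the base URL at the Endpoint URL and add `API-Key` as a default header.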
+
+Below the connection details, the page also shows:
+
+- **LLM Service Provider**: the linked org-level provider (name, template, rate limiting and guardrails status)
+- **Guardrails**: agent-level guardrails attached to this LLM binding
+
+### Step 5: Run the Agent
+
+Run your agent.
+
+Example: Python agent with instrumentation
+
+```bash
+amp-instrument python main.py
+```
+
+---
+
+## Managing Attached LLM Providers
+
+From the **Configure Agent** page, the LLM Providers table shows all attached providers with:
+
+- **Name**: The logical name given to this LLM binding.
+- **Description**: Optional description.
+- **Created**: When the binding was created.
+- **Actions**: Delete icon to remove the provider from the agent.
+
+Multiple providers can be attached to a single agent, allowing the agent code to use different LLMs for different tasks by referencing their respective environment variable names (platform agents) or endpoint URLs and API keys (external agents).
+
+---
+
+## Notes
+
+- LLM provider credentials are **never exposed** to agent code directly — only the injected environment variables are available at runtime.
+- For platform agents, environment variables are re-injected on each deployment; no manual secret management is required.
+- For external agents, the Endpoint URL routes traffic through the AI Gateway, enabling centralized rate limiting, access control, and guardrails configured at the org level.
+- The external agent API Key shown after saving is a **one-time display** — it cannot be retrieved again. If lost, delete the LLM provider binding and re-add it to generate a new key.
+- The **Setup Agent** instrumentation step is for observability (traces) only and is independent of LLM configuration.
+- Guardrails added at the agent-LLM binding level are applied **in addition to** any guardrails configured on the provider itself.
\ No newline at end of file
diff --git a/documentation/versioned_docs/version-v0.10.x/tutorials/custom-evaluators.mdx b/documentation/versioned_docs/version-v0.10.x/tutorials/custom-evaluators.mdx
new file mode 100644
index 000000000..e432621dd
--- /dev/null
+++ b/documentation/versioned_docs/version-v0.10.x/tutorials/custom-evaluators.mdx
@@ -0,0 +1,187 @@
+---
+sidebar_position: 3
+---
+
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
+# Custom Evaluators
+
+This tutorial walks you through creating custom evaluators in the AMP Console. Custom evaluators let you define domain-specific quality checks using Python code or LLM judge prompt templates.
+
+## Prerequisites
+
+- A running AMP instance (see [Quick Start](../getting-started/quick-start.mdx))
+- An agent registered in AMP with an active environment
+- Familiarity with [evaluation concepts](../concepts/evaluation.mdx), especially evaluator types and evaluation levels
+- For LLM judge evaluators: an API key for a [supported LLM provider](../concepts/evaluation.mdx#supported-llm-providers)
+
+---
+
+## Navigate to Evaluators
+
+1. Open the AMP Console and select your agent.
+2. Click the **Evaluation** tab.
+3. Click the **Evaluators** sub-tab to see the evaluators list.
+4. Click **Create Evaluator**.
+
+
+
+---
+
+## Create a Custom Evaluator
+
+### Step 1: Set Basic Details
+
+1. Enter a **Display Name** (e.g., "Response Format Check" or "Domain Accuracy Judge").
+2. The **Identifier** is auto-generated from the display name. You can customize it (must be lowercase with hyphens, 3–128 characters).
+3. Add an optional **Description** explaining what this evaluator checks.
+4. Select the **Evaluator Type**:
+ - **Code**: write a Python function with arbitrary evaluation logic (deterministic rules, external API calls, regex matching, statistical analysis, or any combination)
+ - **LLM-Judge**: write a prompt template that instructs an LLM to score trace quality — use this when evaluation requires subjective judgment (semantic accuracy, domain-specific quality, or nuanced reasoning assessment)
+
+
+
+### Step 2: Select Evaluation Level
+
+Select the level at which your evaluator operates:
+
+- **Trace**: evaluates the full execution from input to output (`Trace` object)
+- **Agent**: evaluates a single agent's steps and decisions (`AgentTrace` object)
+- **LLM**: evaluates a single LLM call with messages and response (`LLMSpan` object)
+
+
+
+### Step 3: Write the Evaluation Logic
+
+
+
+
+The editor provides a **read-only header** with imports and the function signature (auto-generated from your selected level and config parameters). Write your logic in the **function body** below the header.
+
+Your function must return an `EvalResult`:
+
+- **Score**: `EvalResult(score=0.85, explanation="...")` — score between 0.0 (worst) and 1.0 (best)
+- **Skip**: `EvalResult.skip("reason")` — use when the evaluator is not applicable to this input
+
+**Example**: a trace-level evaluator that checks output contains valid JSON:
+
+```python
+def evaluate(trace: Trace) -> EvalResult:
+ if not trace.output:
+ return EvalResult.skip("No output to evaluate")
+
+ import json
+ try:
+ json.loads(trace.output)
+ return EvalResult(score=1.0, explanation="Output is valid JSON")
+ except json.JSONDecodeError as e:
+ return EvalResult(score=0.0, explanation=f"Invalid JSON: {e}")
+```
+
+
+
+:::tip
+Use `EvalResult.skip()` instead of returning a score of 0.0 when the evaluator is not applicable. Skipped evaluations are tracked separately and do not affect aggregated scores.
+:::
+
+
+
+
+Use placeholders to inject trace data into your prompt. Available placeholders depend on the selected level:
+
+- **Trace level**: `{trace.input}`, `{trace.output}`, `{trace.get_tool_steps()}`, etc.
+- **Agent level**: `{agent_trace.input}`, `{agent_trace.output}`, `{agent_trace.get_tool_steps()}`, etc.
+- **LLM level**: `{llm_span.input}`, `{llm_span.output}`, etc.
+
+Write only the evaluation criteria — the system automatically wraps your prompt in scoring instructions that tell the LLM to return a structured score and explanation.
+
+**Example**: a trace-level LLM judge for a travel booking agent:
+
+```
+You are evaluating a travel booking agent's response.
+
+User query: {trace.input}
+
+Agent response: {trace.output}
+
+Tools used: {trace.get_tool_steps()}
+
+Evaluate whether the agent:
+1. Recommended flights that match the user's stated preferences (dates, budget, airline)
+2. Provided accurate pricing information consistent with the tool results
+3. Included all required booking details (confirmation number, departure time, gate info)
+
+Score 1.0 if all criteria are met, 0.5 if partially met, 0.0 if the response is incorrect or misleading.
+```
+
+
+
+:::tip
+LLM judge evaluators inherit the same **Model**, **Temperature**, and **Criteria** configuration as built-in LLM-as-Judge evaluators. These parameters are configurable when adding the evaluator to a monitor.
+:::
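Conceptually, placeholder substitution works like the following sketch. This is a rough model, not the platform's actual implementation, and evaluating template expressions with `eval` is shown for illustration only (unsafe for untrusted templates):

```python
import re
from types import SimpleNamespace

def render_template(template: str, **context) -> str:
    # Replace each {expr} with the value of expr evaluated against the
    # provided objects (e.g. trace, agent_trace, llm_span).
    return re.sub(
        r"\{([^{}]+)\}",
        lambda m: str(eval(m.group(1), {"__builtins__": {}}, context)),
        template,
    )

# A hypothetical trace object with the attributes the placeholders reference.
trace = SimpleNamespace(
    input="Book a flight to Paris",
    output="Booked flight AF123",
    get_tool_steps=lambda: ["search_flights", "book_flight"],
)
prompt = render_template(
    "User query: {trace.input}\nTools used: {trace.get_tool_steps()}", trace=trace
)
```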
+
+
+
+
+### Step 4: Use the AI Copilot (Optional)
+
+The editor includes an **AI Copilot Prompt** section — a pre-built, context-aware prompt you can copy and paste into your AI assistant (e.g., ChatGPT, Claude). Describe what you want to evaluate, and the AI will generate the evaluation code or prompt template for you.
+
+### Step 5: Define Configuration Parameters (Optional)
+
+Configuration parameters make your evaluator reusable with different settings across monitors. For example, a content check evaluator might accept a `keywords` parameter so different monitors can check for different terms.
+
+1. Expand the **Config Params** section.
+2. Click **Add Parameter**.
+3. For each parameter, configure:
+ - **Key**: a Python identifier (e.g., `min_words`, `required_format`)
+ - **Type**: string, integer, float, boolean, array, or enum
+ - **Description**: shown to users when configuring the evaluator in a monitor
+ - **Default value**: used when not overridden
+ - **Constraints**: min/max for numbers, allowed values for enum types
+
+In **Code** evaluators, parameters appear as keyword arguments in the function signature (e.g., `threshold: float = 0.5`). In **LLM-Judge** evaluators, parameters are available as `{key}` placeholders in your prompt template (e.g., `{domain}`).
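For example, a code evaluator with a `min_words` parameter might look like the sketch below. It returns a plain dict for readability; in the console, the function body would return an `EvalResult` as shown earlier:

```python
def evaluate_length(output: str, min_words: int = 10) -> dict:
    # min_words is populated from the monitor's configuration; the
    # default applies when the monitor does not override it.
    words = len(output.split())
    score = 1.0 if words >= min_words else round(words / min_words, 2)
    return {"score": score, "explanation": f"{words} words (minimum {min_words})"}
```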
+
+### Step 6: Add Tags and Create
+
+1. Optionally add **Tags** to categorize your evaluator (e.g., `format`, `domain-specific`, `compliance`).
+2. Review your configuration.
+3. Click **Create Evaluator**.
+
+Your evaluator appears in the evaluators list and can be selected when creating or editing monitors.
+
+---
+
+## Use Custom Evaluators in a Monitor
+
+Once created, custom evaluators appear in the evaluator selection grid alongside built-in evaluators when [creating or editing a monitor](./evaluation-monitors.mdx).
+
+- Code evaluators are tagged with **code**
+- LLM judge evaluators are tagged with **llm-judge**
+- Your custom tags are also displayed on the evaluator cards
+
+Select and configure custom evaluators the same way as built-in evaluators. Set parameter values, choose the LLM model (for LLM judges), and add them to the monitor.
+
+---
+
+## Edit and Delete Custom Evaluators
+
+### Edit
+
+Click an evaluator in the evaluators list to open it for editing. You can update:
+
+- Display name and description
+- Source code or prompt template
+- Configuration parameter schema
+- Tags
+
+The **identifier** and **evaluation level** cannot be changed after creation.
+
+### Delete
+
+Click the **delete** icon on an evaluator in the list. Deletion is a soft delete. The evaluator is removed from the list, but existing monitor results referencing it are preserved.
+
+:::info
+A custom evaluator cannot be deleted while it is referenced by an active monitor. Remove the evaluator from all monitors before deleting it.
+:::
diff --git a/documentation/versioned_docs/version-v0.10.x/tutorials/evaluation-monitors.mdx b/documentation/versioned_docs/version-v0.10.x/tutorials/evaluation-monitors.mdx
new file mode 100644
index 000000000..b6bc199db
--- /dev/null
+++ b/documentation/versioned_docs/version-v0.10.x/tutorials/evaluation-monitors.mdx
@@ -0,0 +1,219 @@
+---
+sidebar_position: 2
+---
+
+# Evaluation Monitors
+
+This tutorial walks you through creating an evaluation monitor, viewing results, and managing monitors in the AMP Console.
+
+## Prerequisites
+
+- A running AMP instance (see [Quick Start](../getting-started/quick-start.mdx))
+- An agent registered in AMP with an active environment
+- Agent traces being collected (see [Observe Your First Agent](./observe-first-agent.mdx))
+- For LLM-as-Judge evaluators: an API key for a [supported LLM provider](../concepts/evaluation.mdx#supported-llm-providers)
+
+---
+
+## Create a Monitor
+
+### Step 1: Navigate to Evaluation
+
+1. Open the AMP Console and select your agent.
+2. Click the **Evaluation** tab.
+3. Click **Add Monitor**.
+
+
+
+---
+
+### Step 2: Configure Monitor Details
+
+Fill in the monitor configuration:
+
+- **Monitor Title**: A descriptive name for the monitor (e.g., "Production Quality Monitor").
+- **Identifier**: Auto-generated from the title. You can customize it (must be lowercase with hyphens, 3–60 characters).
+- **Data Collection Type**: Choose one:
+ - **Past Traces**: evaluate traces from a specific time window. Set a **Start Time** and **End Time**. The evaluation runs immediately after creation.
+ - **Future Traces**: evaluate new traces on a recurring schedule. Set an **interval** in minutes (minimum 5 minutes).
+
+
+
+:::tip Choosing a monitor type
+Use **Past Traces** when you want to assess historical agent behavior, such as reviewing last week's interactions after a deployment. Use **Future Traces** for ongoing production quality monitoring.
+:::
+
+---
+
+### Step 3: Select and Configure Evaluators
+
+1. Browse the evaluator grid. Each card shows the evaluator name, tags, and a brief description.
+2. Click an evaluator card to open its details and configuration.
+3. Configure parameters as needed. For example, set `max_latency_ms` for the Latency evaluator, or choose a model for an LLM-as-Judge evaluator.
+4. Click **Add Evaluator** to include it in the monitor.
+5. Repeat for all evaluators you want to use. You must select at least one.
+
+For a full reference of available evaluators and their parameters, see [Built-in Evaluators](../concepts/evaluation.mdx#built-in-evaluators). You can also create your own (see [Custom Evaluators](./custom-evaluators.mdx)).
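As an illustration of how a parameter shapes an evaluator's behavior, a rule-based latency check with a `max_latency_ms` setting might reduce to a comparison like this (a hypothetical sketch, not the built-in Latency evaluator's implementation):

```python
# Hypothetical pass/fail latency check driven by a `max_latency_ms` parameter.
def latency_score(duration_ms: float, max_latency_ms: float = 2000.0) -> float:
    """Return 1.0 when the trace finished within budget, else 0.0."""
    return 1.0 if duration_ms <= max_latency_ms else 0.0
```

Changing `max_latency_ms` when configuring the monitor would shift the pass threshold without touching the evaluator itself.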
+
+
+
+
+
+---
+
+### Step 4: Configure LLM Providers (LLM-as-Judge only)
+
+If you selected any LLM-as-Judge evaluators, you need to configure at least one LLM provider. Skip this step if you only selected rule-based evaluators.
+
+1. In the evaluator configuration panel, find the **LLM Providers** section.
+2. Select a provider from the dropdown (OpenAI, Anthropic, Google AI Studio, Groq, or Mistral AI).
+3. Enter your API key.
+4. Click **Add** to save the credentials.
+
+The **model** field on LLM-as-Judge evaluators uses `provider/model` format (e.g., `openai/gpt-4o-mini`, `anthropic/claude-sonnet-4-6`). The available models depend on the providers you have configured.
+
+:::tip
+You only need to add each provider once per monitor. All evaluators using that provider share the same credentials.
+:::
+
+
+
+---
+
+### Step 5: Create the Monitor
+
+Review your configuration and click **Create Monitor**.
+
+- **Historical monitors** (Past Traces) start evaluating immediately. Results appear in the dashboard once the run completes.
+- **Continuous monitors** (Future Traces) start in Active status. The first evaluation runs within 60 seconds, then repeats at the configured interval.
+
+---
+
+## View Monitor Results
+
+After creation, you'll see your monitor in the monitor list. Click a monitor to open its dashboard.
+
+### Dashboard Overview
+
+The monitor dashboard provides several views of your evaluation results:
+
+- **Time Range Selector**: filter results by Last 24 Hours, Last 3 Days, Last 7 Days, or Last 30 Days. Historical monitors show their fixed trace window instead.
+- **Agent Performance Chart**: a radar chart showing mean scores across all evaluators, giving a quick visual summary of agent strengths and weaknesses.
+- **Evaluation Summary**: shows the weighted average score and total evaluation count, with **per-level statistics**:
+ - **Trace level**: number of traces evaluated, evaluator count, and skip rate
+ - **Agent level**: number of agent executions evaluated, evaluator count, and skip rate
+ - **LLM level**: number of LLM invocations evaluated, evaluator count, and skip rate
+
+ Only levels with configured evaluators appear in the summary.
+- **Run Summary**: latest run status with quick access to run history.
+- **Performance by Evaluator**: a time-series chart showing how each evaluator's score trends over time. Useful for spotting regressions or improvements.
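The aggregation behind the summary can be pictured roughly as follows — a sketch under assumed data shapes, not AMP's actual computation:

```python
# Hypothetical aggregation: mean score per evaluator, plus an overall
# average weighted by how many results each evaluator produced.
def summarize(results: dict[str, list[float]]) -> tuple[dict[str, float], float]:
    means = {name: sum(s) / len(s) for name, s in results.items() if s}
    total = sum(len(s) for s in results.values())
    weighted = sum(sum(s) for s in results.values()) / total
    return means, weighted

means, overall = summarize({"Accuracy": [1.0, 0.5], "Latency": [1.0]})
```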
+
+
+
+### Score Breakdowns
+
+When your monitor includes agent-level or LLM-level evaluators, the dashboard shows additional breakdown tables below the performance chart.
+
+#### Score Breakdown by Agent
+
+A table with one row per agent found in the evaluated traces. Each row shows:
+
+- **Agent name**: the agent's identifier from the trace
+- **Evaluator scores**: mean score for each agent-level evaluator, displayed as color-coded percentage chips. A dash (–) indicates the evaluator was skipped for that agent.
+- **Count**: the number of agent executions evaluated
+
+This helps you identify which agent in a multi-agent system needs improvement.
+
+#### Score Breakdown by Model
+
+A table with one row per LLM model used across the evaluated traces. Each row shows:
+
+- **Model name**: the LLM model identifier (e.g., `gpt-4o`, `claude-sonnet-4-6`)
+- **Evaluator scores**: mean score for each LLM-level evaluator
+- **Count**: the number of LLM invocations evaluated
+
+This helps you compare quality across different models used by your agents.
+
+
+### Run History
+
+The dashboard also shows a history of all evaluation runs. Each run displays:
+
+- **Status**: pending, running, success, or failed
+- **Trace window**: the start and end time of traces evaluated
+- **Timestamps**: when the run started and completed
+
+You can take actions on individual runs:
+
+- **Rerun**: re-execute the evaluation run over the same trace window.
+- **View Logs**: see detailed execution logs for troubleshooting.
+
+
+
+### Run Logs
+
+Click **View Logs** on any run to open the log viewer. This displays the application logs from the monitor's evaluation job, useful for diagnosing failed or unexpected runs.
+
+
+
+---
+
+## View Scores in Trace View
+
+Evaluation scores are also visible directly in the trace view, making it easy to investigate specific agent interactions without switching to the monitor dashboard.
+
+### Score Column in Traces Table
+
+The traces list includes a **Score** column showing the average evaluator score for each trace. Scores are color-coded (green for high scores, red for low), giving you a quick visual indicator of which traces need attention.
+
+
+
+### Scores in Span Details
+
+Click any trace to open the trace timeline. Select a span to view its details panel:
+
+1. **Score chips in the header**: evaluator scores appear as color-coded percentage chips in the span's basic info section, alongside duration, token count, and model information.
+2. **Scores tab**: a dedicated tab shows each evaluator's result:
+ - **Evaluator name**: prefixed with the monitor name when the same evaluator appears in multiple monitors (e.g., `production-monitor / Accuracy`)
+ - **Score chip**: color-coded percentage (green for high, red for low)
+ - **Explanation**: markdown-rendered explanation from the evaluator describing why this score was given
+ - **Skipped evaluators**: shown with a skip reason instead of a score
+
+Trace-level scores appear on the root span. Agent-level and LLM-level scores appear on their respective agent and LLM spans.
+
+
+
+---
+
+## Start and Suspend a Monitor
+
+This applies to **continuous monitors** only.
+
+- **Suspend**: Click the **pause** button in the actions column. The monitor stops running on schedule but retains all configuration and historical results. You can resume it at any time.
+- **Start**: Click the **play** button on a suspended monitor. Evaluation resumes within 60 seconds.
+
+:::info
+Historical monitors cannot be started or suspended. They run once when created.
+:::
+
+---
+
+## Edit a Monitor
+
+1. Click the **edit** (pencil) icon in the monitor list actions column.
+2. The monitor configuration wizard opens with the current settings.
+3. Update the fields you want to change: display name, evaluators, evaluator parameters, LLM provider credentials, interval (for continuous monitors), or time range (for historical monitors).
+4. Click **Save** to apply the changes.
+
+:::info
+The monitor type (continuous or historical) cannot be changed after creation.
+:::
+
+---
+
+## Delete a Monitor
+
+1. Click the **delete** (trash) icon in the monitor list actions column.
+2. Confirm the deletion in the dialog.
+
+Deletion permanently removes the monitor and all its associated run history and scores. This action cannot be undone.
diff --git a/documentation/versioned_docs/version-v0.10.x/tutorials/observe-first-agent.mdx b/documentation/versioned_docs/version-v0.10.x/tutorials/observe-first-agent.mdx
new file mode 100644
index 000000000..a5b1d50e8
--- /dev/null
+++ b/documentation/versioned_docs/version-v0.10.x/tutorials/observe-first-agent.mdx
@@ -0,0 +1,154 @@
+---
+sidebar_position: 1
+---
+
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
+# Observe Your First Agent
+
+This tutorial walks you through registering an externally-hosted agent with WSO2 Agent Manager, connecting it with zero-code instrumentation, and viewing its traces in the AMP Console.
+
+## Step 1: Open or Create a Project
+
+WSO2 Agent Manager organises agents inside **Projects**.
+
+1. Open the AMP Console.
+2. Click the **project switcher** in the top navigation bar.
+3. Select an existing project (e.g. **Default Project**) or click **+ Create a Project**, enter a name and description, then click **Create**.
+
+## Step 2: Add a New Agent
+
+1. Inside your project, click **+ Add Agent**.
+2. On the **Add a New Agent** screen, choose **Externally-Hosted Agent** — use this when your agent runs outside the AMP platform (locally, on your own infrastructure, or in a third-party cloud).
+
+## Step 3: Register the Agent
+
+1. Fill in the **Agent Details**:
+ - **Name** — e.g. `my-first-agent`
+ - **Description** (optional)
+2. Click **Register**.
+
+You are taken to the agent's **Overview** page. A **Setup Agent** panel opens automatically on the right.
+
+## Step 4: Set Up Instrumentation (Setup Agent Panel)
+
+The **Zero-code Instrumentation Guide** in the Setup Agent panel walks you through the setup steps. Use the **Language** dropdown at the top-right of the panel to select your agent's language — **Python** or **Ballerina** — and follow the corresponding instructions below.
+
+
+
+
+### 4.1 Install AMP Instrumentation Package
+
+```bash
+pip install amp-instrumentation
+```
+
+This installs the `amp-instrument` command used in the steps below to instrument your agent and export traces.
+
+:::tip Using a virtual environment?
+Activate it before installing so the package is available on your path.
+:::
+
+### 4.2 Generate an API Key
+
+In the panel, select a **Token Duration** (e.g. 1 year) and click **Generate**. Copy the key immediately — it will not be shown again.
+
+### 4.3 Set Environment Variables
+
+```bash
+export AMP_OTEL_ENDPOINT=""
+export AMP_AGENT_API_KEY=""
+```
+
+Fill in the endpoint shown in the Setup Agent panel and the API key you generated in the previous step, then export both variables in the shell that will run your agent.
+
+### 4.4 Run Your Agent with Instrumentation
+
+Prefix your normal run command with `amp-instrument`:
+
+```bash
+amp-instrument <your-run-command>
+```
+
+For example:
+
+```bash
+# Python script
+amp-instrument python my_agent.py
+
+# FastAPI / async service
+amp-instrument uvicorn app:main --reload
+
+# Poetry / uv managed projects
+amp-instrument poetry run python agent.py
+amp-instrument uv run python agent.py
+```
+
+No changes to your agent code are required.
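If you don't have an agent handy yet, any Python entry point works for a first run. A trivial stand-in for `my_agent.py` (purely illustrative — a real agent would call an LLM framework that the Traceloop SDK instruments):

```python
# my_agent.py — trivial stand-in used only to verify the pipeline runs.
# A real agent would invoke an LLM framework supported by the Traceloop SDK.
def answer(query: str) -> str:
    return f"Echo: {query}"

if __name__ == "__main__":
    print(answer("What flights are available tomorrow?"))
```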
+
+
+
+
+### 4.1 Import Amp Module
+
+Add the following import to your Ballerina program:
+
+```ballerina
+import ballerinax/amp as _;
+```
+
+### 4.2 Add Build Configuration to Ballerina.toml
+
+Add the following to your `Ballerina.toml` to enable observability when building:
+
+```toml
+[build-options]
+observabilityIncluded = true
+```
+
+### 4.3 Update Config.toml
+
+Enable tracing and set the provider to AMP in your `Config.toml`:
+
+```toml
+[ballerina.observe]
+tracingEnabled = true
+tracingProvider = "amp"
+```
+
+### 4.4 Generate an API Key
+
+In the panel, select a **Token Duration** (e.g. 1 year) and click **Generate**. Copy the key immediately — it will not be shown again.
+
+### 4.5 Set Environment Variables
+
+```bash
+export BAL_CONFIG_VAR_BALLERINAX_AMP_OTELENDPOINT=""
+export BAL_CONFIG_VAR_BALLERINAX_AMP_APIKEY=""
+```
+
+Fill in the endpoint shown in the Setup Agent panel and the API key you generated in the previous step, then export both variables before running your agent.
+
+### 4.6 Run Your Agent
+
+Run your Ballerina program as usual:
+
+```bash
+bal run
+```
+
+The observability module is loaded automatically via the import.
+
+
+
+
+## Step 5: View Traces in the Console
+
+Once your agent has handled a few requests:
+
+1. In the left sidebar, under **OBSERVABILITY**, click **Traces**.
+2. Each trace represents one end-to-end agent invocation. Click any trace to expand it:
+ - The **root span** shows end-to-end latency and the agent name.
+ - **LLM spans** show the model, token counts, and call latency.
+ - **Tool spans** show tool name, input, and output.
diff --git a/documentation/versioned_docs/version-v0.10.x/tutorials/register-ai-gateway.mdx b/documentation/versioned_docs/version-v0.10.x/tutorials/register-ai-gateway.mdx
new file mode 100644
index 000000000..f9c4f2de2
--- /dev/null
+++ b/documentation/versioned_docs/version-v0.10.x/tutorials/register-ai-gateway.mdx
@@ -0,0 +1,136 @@
+---
+sidebar_position: 4
+---
+
+# Register an AI Gateway
+
+AI Gateways are organization-level infrastructure components that route LLM traffic through a controlled proxy. You can register multiple gateways (e.g., for different environments or teams), and each LLM Service Provider is exposed through a gateway's invoke URL.
+
+Agent Manager currently supports the [WSO2 AI Gateway](https://github.com/wso2/api-platform/tree/gateway/v0.9.0/docs/ai-gateway).
+
+## Prerequisites
+
+Before registering a gateway, ensure you have:
+
+- Admin access to the WSO2 Agent Manager Console
+- One of the following available depending on your chosen deployment method:
+ - **Quick Start / Docker**: cURL, unzip, Docker installed and running
+ - **Virtual Machine**: cURL, unzip, and a Docker-compatible container runtime (Docker Desktop, Rancher Desktop, Colima, or Docker Engine + Compose plugin)
+ - **Kubernetes**: cURL, unzip, Kubernetes 1.32+, Helm 3.18+
+
+---
+
+## Step 1: Navigate to AI Gateways
+
+1. Log in to the WSO2 Agent Manager Console (`http://localhost:3000`).
+2. Switch to the organization level by closing the project view in the top navigation.
+3. In the left sidebar, click **AI Gateways** under the **INFRASTRUCTURE** section.
+
+ > The AI Gateways page lists all registered gateways with their Name, Status, and Last Updated time.
+
+
+ Agent Manager ships with a pre-configured AI Gateway that is ready to use out of the box.
+
+
+---
+
+## Step 2: Add a New AI Gateway
+
+1. Click the **+ Add AI Gateway** button (top right).
+2. Fill in the **Gateway Details** form:
+
+ | Field | Description | Example |
+ |---|---|---|
+ | **Name** | A descriptive name for the gateway | `Production AI Gateway` |
+ | **Virtual Host** | The FQDN or IP address where the gateway will be reachable | `api.production.example.com` |
+ | **Critical production gateway** | Toggle to mark this gateway as critical for production deployments | Enabled / Disabled |
+
+3. Click **Create AI Gateway**.
+
+---
+
+## Step 3: Configure and Start the Gateway
+
+After creating the gateway, you are taken to the gateway detail page. It shows:
+
+- **Virtual Host**: The internal cluster URL for the gateway runtime.
+- **Environments**: The environments (e.g., `Default`) this gateway serves.
+
+The **Get Started** section provides instructions to deploy the gateway process using one of the methods below.
+
+### Quick Start (Docker)
+
+**Prerequisites**: cURL, unzip, Docker installed and running.
+
+**Step 1 – Download the Gateway**
+
+```bash
+curl -sLO https://github.com/wso2/api-platform/releases/download/ai-gateway/v0.9.0/ai-gateway-v0.9.0.zip && \
+unzip ai-gateway-v0.9.0.zip
+```
+
+**Step 2 – Configure the Gateway**
+
+Generate a registration token by clicking **Reconfigure** on the gateway detail page. This produces a `configs/keys.env` file with the token and connection details.
+
+**Step 3 – Start the Gateway**
+
+```bash
+cd ai-gateway-v0.9.0
+docker compose --env-file configs/keys.env up
+```
+
+---
+
+### Virtual Machine
+
+**Prerequisites**: cURL, unzip, and a Docker-compatible container runtime:
+
+- Docker Desktop (Windows / macOS)
+- Rancher Desktop (Windows / macOS)
+- Colima (macOS)
+- Docker Engine + Compose plugin (Linux)
+
+Verify the runtime is available:
+
+```bash
+docker --version
+docker compose version
+```
+
+Then follow the same **Download → Configure → Start** steps as Quick Start above.
+
+---
+
+### Kubernetes
+
+**Prerequisites**: cURL, unzip, Kubernetes 1.32+, Helm 3.18+.
+
+**Configure**: Click **Reconfigure** to generate a gateway registration token.
+
+**Install the Helm chart**:
+
+```bash
+helm install gateway oci://ghcr.io/wso2/api-platform/helm-charts/gateway --version 0.9.0 \
+ --set gateway.controller.controlPlane.host="" \
+ --set gateway.controller.controlPlane.port=443 \
+ --set gateway.controller.controlPlane.token.value="your-gateway-token" \
+ --set gateway.config.analytics.enabled=true
+```
+
+Replace `your-gateway-token` with the token generated in the Reconfigure step.
+
+---
+
+## Verifying the Gateway
+
+Once running, the gateway appears in the **AI Gateways** list with status **Active**. The gateway detail page shows the virtual host URL, which is the base URL for all LLM provider invoke URLs routed through this gateway.
+
+---
+
+## Notes
+
+- A **Default AI Gateway** is pre-provisioned in new organizations.
+- Each gateway can serve multiple environments (e.g., `Default`, `Production`).
+- The registration token (generated via **Reconfigure**) is environment-specific and must be kept secret.
+- Marking a gateway as **Critical production gateway** helps signal its importance for operational monitoring.
\ No newline at end of file
diff --git a/documentation/versioned_docs/version-v0.10.x/tutorials/register-llm-service-provider.mdx b/documentation/versioned_docs/version-v0.10.x/tutorials/register-llm-service-provider.mdx
new file mode 100644
index 000000000..dc0c4ad96
--- /dev/null
+++ b/documentation/versioned_docs/version-v0.10.x/tutorials/register-llm-service-provider.mdx
@@ -0,0 +1,152 @@
+---
+sidebar_position: 5
+---
+
+# Register an LLM Service Provider
+
+LLM Service Providers are organization-level resources that represent connections to upstream LLM APIs (e.g., OpenAI, Anthropic, AWS Bedrock). Once registered, they are exposed through an AI Gateway and can be attached to agents across any project in the organization.
+
+## Prerequisites
+
+- Admin access to the WSO2 Agent Manager Console
+- At least one AI Gateway registered and active (see [Register an AI Gateway](./register-ai-gateway.mdx))
+- API credentials for the target LLM provider (e.g., an OpenAI API key)
+
+---
+
+## Step 1: Navigate to LLM Service Providers
+
+1. Log in to the WSO2 Agent Manager Console (`http://localhost:3000`).
+2. Switch to the organization level by closing the project view in the top navigation.
+3. In the left sidebar, click **LLM Service Providers** under the **RESOURCES** section.
+
+ > The LLM Service Providers page lists all registered providers with their Name, Template, and Last Updated time.
+
+---
+
+## Step 2: Add a New Provider
+
+1. Click the **+ Add Service Provider** button.
+2. Fill in the **Basic Details**:
+
+ | Field | Description | Example |
+ |---|---|---|
+ | **Name** *(required)* | A descriptive name for this provider configuration | `Production OpenAI Provider` |
+ | **Version** *(required)* | Version identifier for this provider configuration | `v1.0` |
+ | **Short description** | Optional description of the provider's purpose | `Primary LLM provider for production` |
+ | **Context path** | The API path prefix for this provider (must start with `/`, no trailing slash) | `/my-provider` |
+
+3. Under **Provider Template**, select one of the pre-built provider templates:
+
+ | Template | Description |
+ |---|---|
+ | **Anthropic** | Claude models via Anthropic API |
+ | **AWS Bedrock** | AWS-hosted foundation models |
+ | **Azure AI Foundry** | Azure AI model deployments |
+ | **Azure OpenAI** | OpenAI models hosted on Azure |
+ | **Gemini** | Google Gemini models |
+ | **Mistral** | Mistral AI models |
+ | **OpenAI** | OpenAI models (GPT-4, etc.) |
+
+ Selecting a template auto-populates the upstream URL, authentication type, and API specification.
+
+4. Provide the credentials for the selected template (refer to the provider's official documentation to obtain an API key or credentials).
+
+5. Click **Add provider**.
+
+---
+
+## Step 3: Configure Provider Settings
+
+After creation, the provider detail page appears with six configuration tabs.
+
+### Overview Tab
+
+Displays a summary of the provider:
+
+| Field | Description |
+|---|---|
+| **Context** | The context path (e.g., `/test`) |
+| **Upstream URL** | The backend LLM API endpoint (e.g., `https://api.openai.com/v1`) |
+| **Auth Type** | Authentication method (e.g., `api-key`) |
+| **Access Control** | Current access policy (e.g., `allow_all`) |
+
+The **Invoke URL & API Key** section shows:
+
+- **Gateway**: Select which AI Gateway exposes this provider.
+- **Invoke URL**: The full URL agents use to call this provider through the gateway (auto-generated).
+- **Generate API Key**: Generate a client API key for agents to authenticate against this provider.
+
+---
+
+### Connection Tab
+
+Configure the upstream connection to the LLM Provider API:
+
+| Field | Description | Example |
+|---|---|---|
+| **Provider Endpoint** | The base URL of the upstream LLM API | `https://api.openai.com/v1` |
+| **Authentication** | Auth method for the upstream call | `API Key` |
+| **Authentication Header** | HTTP header used to pass the credential | `Authorization` |
+| **Credentials** | The API key or secret for the upstream LLM provider | `sk-...` |
+
+Click **Save** to persist changes.
+
+---
+
+### Access Control Tab
+
+Control which API resources are accessible through this provider:
+
+- **Mode**: Choose `Allow all` (default – all resources permitted) or `Deny all` (whitelist only).
+- **Allowed Resources**: List of API operations permitted (e.g., `GET /assistants`, `POST /chat/completions`).
+- **Denied Resources**: List of API operations explicitly blocked.
+
+Use the arrow buttons to move resources between the Allowed and Denied lists. You can also **Import from specification** to populate the resource list from an OpenAPI spec.
+
+---
+
+### Security Tab
+
+Configure how to authenticate to this provider via the gateway:
+
+| Field | Description | Example |
+|---|---|---|
+| **Authentication** | Auth scheme for inbound calls | `apiKey` |
+| **Header Key** | HTTP header name carrying the API key | `X-API-Key` |
+| **Key Location** | Where the key is passed | `header` |
+
+---
+
+### Rate Limiting Tab
+
+Set backend rate limits to protect the upstream LLM API:
+
+- **Mode**: `Provider-wide` (single limit for all resources) or `Per Resource` (limits per endpoint).
+- **Request Counts**: Configure request-per-window thresholds.
+- **Token Count**: Configure token-per-window thresholds.
+- **Cost**: *(Coming soon)* Cost-based limits.
+
+---
+
+### Guardrails Tab
+
+Attach content safety policies to this provider:
+
+- **Global Guardrails**: Apply to all API resources under this provider. Click **+ Add Guardrail** to attach one.
+- **Resource-wise Guardrails**: Per-operation guardrails for individual API endpoints (e.g., `POST /chat/completions`).
+
+---
+
+## Verifying the Provider
+
+The registered provider appears in the **LLM Service Providers** list showing its name and the template used (e.g., `OpenAI`). From the Overview tab, select your active AI Gateway to see the **Invoke URL** — this is the endpoint agents use to call the LLM through the gateway.
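Once you have the invoke URL and a generated API key, an agent-side call through the gateway looks roughly like the following. The URL, path, and payload are placeholders, and the `X-API-Key` header mirrors the Security tab default shown above:

```python
import json
import urllib.request

# Placeholders — substitute your gateway invoke URL and generated API key.
INVOKE_URL = "https://api.production.example.com/my-provider"
API_KEY = "your-generated-api-key"

payload = {
    "model": "gpt-4o-mini",
    "messages": [{"role": "user", "content": "Hello"}],
}
# Build the request; the gateway forwards it to the upstream provider.
req = urllib.request.Request(
    f"{INVOKE_URL}/chat/completions",
    data=json.dumps(payload).encode(),
    headers={"X-API-Key": API_KEY, "Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(req) would send the request; omitted here.
```

If you changed the **Header Key** in the Security tab, adjust the header name accordingly.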
+
+---
+
+## Notes
+
+- The **context path** must be unique per organization and forms part of the provider's invoke URL.
+- Credentials entered in the Connection tab are stored securely and never exposed in the UI.
+- A provider must be associated with at least one AI Gateway to be callable by agents.
+- Multiple providers can share the same gateway but must have distinct context paths.
diff --git a/documentation/versioned_sidebars/version-v0.10.x-sidebars.json b/documentation/versioned_sidebars/version-v0.10.x-sidebars.json
new file mode 100644
index 000000000..b6e7c866a
--- /dev/null
+++ b/documentation/versioned_sidebars/version-v0.10.x-sidebars.json
@@ -0,0 +1,67 @@
+{
+ "docsSidebar": [
+ {
+ "type": "category",
+ "label": "Overview",
+ "collapsed": false,
+ "items": [
+ "overview/what-is-amp"
+ ]
+ },
+ {
+ "type": "category",
+ "label": "Getting Started",
+ "collapsed": false,
+ "items": [
+ "getting-started/quick-start",
+ {
+ "type": "category",
+ "label": "Installation",
+ "collapsed": false,
+ "items": [
+ "getting-started/self-hosted-cluster",
+ "getting-started/managed-cluster"
+ ]
+ }
+ ]
+ },
+ {
+ "type": "category",
+ "label": "Concepts",
+ "collapsed": false,
+ "items": [
+ "concepts/observability",
+ "concepts/evaluation"
+ ]
+ },
+ {
+ "type": "category",
+ "label": "Components",
+ "collapsed": false,
+ "items": [
+ "components/amp-instrumentation"
+ ]
+ },
+ {
+ "type": "category",
+ "label": "Tutorials",
+ "collapsed": false,
+ "items": [
+ "tutorials/observe-first-agent",
+ "tutorials/evaluation-monitors",
+ "tutorials/custom-evaluators",
+ "tutorials/register-ai-gateway",
+ "tutorials/register-llm-service-provider",
+ "tutorials/configure-agent-llm-configuration"
+ ]
+ },
+ {
+ "type": "category",
+ "label": "Contributing",
+ "collapsed": false,
+ "items": [
+ "contributing/contributing"
+ ]
+ }
+ ]
+}
diff --git a/documentation/versions.json b/documentation/versions.json
index 85d917570..5fbd68dda 100644
--- a/documentation/versions.json
+++ b/documentation/versions.json
@@ -1,4 +1,5 @@
[
+ "v0.10.x",
"v0.9.x",
"v0.8.x",
"v0.7.x",