115 changes: 115 additions & 0 deletions docs/advanced/system-prompts.md
@@ -135,6 +135,121 @@ def generate_context_prompt(user_id: str, session_type: str) -> str:
return f"You are assisting {user_data.name} with {session_type}."
```

### Provider-Based Dynamic Instructions

ResourceProviders can supply dynamic instructions that are re-evaluated on each agent run with access to runtime context. Unlike function-generated prompts, these instructions can receive AgentContext and RunContext.

```yaml
agents:
context_aware_agent:
system_prompt:
- "You are a helpful assistant."

toolsets:
- type: custom
import_path: myapp.providers.UserContextProvider
name: user_provider

# Add provider-based dynamic instructions
instructions:
- type: provider
ref: user_provider
```

Provider implementation:

```python
from agentpool.resource_providers import ResourceProvider
from agentpool.prompts.instructions import InstructionFunc
from agentpool.agents.context import AgentContext

class UserContextProvider(ResourceProvider):
async def get_instructions(self) -> list[InstructionFunc]:
"""Return dynamic instruction functions.

Each function is re-evaluated on each run with access
to runtime context (AgentContext, RunContext, or both).
"""
return [
self._get_user_context, # With AgentContext
self._get_system_status, # No context
]

async def _get_user_context(self, ctx: AgentContext) -> str:
"""Generate context based on agent state."""
# Access agent name, model, conversation history, etc.
return f"Agent: {ctx.name}, Model: {ctx.model_name}"

def _get_system_status(self) -> str:
"""Return static instruction."""
return "System: Online"
```

#### Instruction Function Context Types

Instruction functions can receive different context types:

- **No context**: `() -> str`
- **AgentContext only**: `(AgentContext) -> str`
- **RunContext only**: `(RunContext) -> str`
- **Both contexts**: `(AgentContext, RunContext) -> str`

```python
# No context
def simple() -> str:
return "Be helpful."

# AgentContext only
async def with_agent(ctx: AgentContext) -> str:
return f"Agent: {ctx.name}"

# RunContext only
async def with_run(ctx: RunContext) -> str:
    return f"Model: {ctx.model.model_name}"

# Both contexts
async def with_both(agent_ctx: AgentContext, run_ctx: RunContext) -> str:
return f"Agent {agent_ctx.name} using {run_ctx.model.model_name}"
```

#### Function-Generated vs Provider-Based Instructions

| Feature | Function-Generated | Provider-Based |
|---------|-------------------|-----------------|
| **Location** | In `system_prompt` field | In `instructions` field |
| **Context Access** | No runtime context | AgentContext, RunContext, or both |
| **Re-evaluation** | Evaluated once at agent start | Re-evaluated on each run |
| **Best For** | Simple dynamic content | Context-aware instructions |
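The lifecycle difference in the table can be sketched with plain functions (illustrative only; `startup_prompt` and `per_run_instruction` are hypothetical names, not agentpool APIs):

```python
import datetime

# Function-generated prompt: evaluated once when the agent starts,
# so its rendered text is frozen for the agent's lifetime.
def startup_prompt() -> str:
    return f"Session started at {datetime.datetime.now():%H:%M}."

# Provider-based instruction: re-evaluated before every run,
# so its rendered text can differ between runs.
def per_run_instruction() -> str:
    return f"Current time is {datetime.datetime.now():%H:%M}."

frozen = startup_prompt()  # called once at startup and then reused
# ...on each run, the framework would call per_run_instruction() again
```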

#### Order of Instructions

Instructions are processed in this order:

1. **Static system prompts** (from `system_prompt` field)
2. **Provider instructions** (in order defined in `instructions` list)

```yaml
# Resulting instruction order:
instructions:
- "You are an expert." # 1
- type: provider
ref: provider_a # 2
- "Follow these guidelines:" # 3
- type: provider
ref: provider_b # 4
```
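The ordering above can be sketched as a simple assembly step (an assumed helper, not the actual agentpool internals):

```python
def assemble_instructions(system_prompts: list[str], instruction_entries: list) -> list[str]:
    """Static system prompts first, then `instructions` entries in list order."""
    rendered: list[str] = []
    rendered.extend(system_prompts)
    for entry in instruction_entries:
        # Entries may be literal strings or callables supplied by a provider.
        rendered.append(entry() if callable(entry) else entry)
    return rendered

order = assemble_instructions(
    ["You are an expert."],
    ["Follow these guidelines:", lambda: "Provider instruction rendered at run time."],
)
```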

#### Error Handling

If an instruction function fails to wrap, it is skipped; if it fails at run time, it falls back to an empty string. In both cases the error is logged and agent initialization continues without crashing.
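The run-time behavior can be approximated like this (a sketch; `render_instructions` is a hypothetical helper, not the actual implementation):

```python
import logging

logger = logging.getLogger(__name__)

def render_instructions(fns, fallback: str = "") -> list[str]:
    """Evaluate instruction functions, substituting a fallback on failure."""
    rendered: list[str] = []
    for fn in fns:
        try:
            rendered.append(fn())
        except Exception:
            # Log with traceback and keep going rather than crashing the agent.
            logger.exception("Instruction function failed; using fallback")
            rendered.append(fallback)
    return rendered

parts = render_instructions([lambda: "Be helpful.", lambda: 1 / 0])
```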

!!! tip "Use Provider-Based Instructions When"
You need access to runtime state (AgentContext, RunContext) or want instructions that change on each run based on context like conversation history, available tools, or session state.

!!! note "See Also"
- [Dynamic Instructions Example](../../examples/dynamic-instructions/)
- [Resource Providers](../configuration/resources.md)

## Callable Prompts

System prompts can include callables that are evaluated when the agent context starts:
97 changes: 97 additions & 0 deletions docs/configuration/resources.md
@@ -54,3 +54,100 @@ Resources are loaded on-demand when agents request them, supporting parameteriza
- Source resources automatically extract docstrings and type hints
- LangChain resources leverage the extensive LangChain loader ecosystem
- Callable resources provide maximum flexibility for custom logic

## Dynamic Instructions from Resource Providers

ResourceProviders can now supply dynamic instructions that are re-evaluated on each agent run, allowing providers to generate context-aware instructions from runtime state.

### How It Works

ResourceProviders can implement the `get_instructions()` method to return instruction functions:

```python
from agentpool.resource_providers import ResourceProvider
from agentpool.prompts.instructions import InstructionFunc
from agentpool.agents.context import AgentContext

class MyProvider(ResourceProvider):
async def get_instructions(self) -> list[InstructionFunc]:
"""Return dynamic instruction functions."""
return [
self._get_static_instruction, # No context
self._get_context_instruction, # With AgentContext
]

def _get_static_instruction(self) -> str:
"""Instruction without context access."""
return "Always be helpful."

async def _get_context_instruction(self, ctx: AgentContext) -> str:
"""Instruction with context access."""
return f"Agent: {ctx.name}, Model: {ctx.model_name}"
```

### YAML Configuration

Reference a provider in the `instructions` field to pull in its instructions:

```yaml
agents:
my_agent:
type: native
model: openai:gpt-4o
toolsets:
- type: custom
import_path: myapp.providers.MyProvider
name: my_provider

# Add provider-based instructions
instructions:
- type: provider
ref: my_provider
```

### Instruction Function Types

Instruction functions can accept different context types:

- **No context**: `() -> str`
- **AgentContext only**: `(AgentContext) -> str`
- **RunContext only**: `(RunContext) -> str`
- **Both contexts**: `(AgentContext, RunContext) -> str`

```python
# No context
def simple() -> str:
return "Be helpful."

# AgentContext only
async def with_agent(ctx: AgentContext) -> str:
return f"Agent: {ctx.name}"

# RunContext only
async def with_run(ctx: RunContext) -> str:
return f"Model: {ctx.model.model_name}"

# Both contexts
async def with_both(agent_ctx: AgentContext, run_ctx: RunContext) -> str:
return f"Agent {agent_ctx.name} using {run_ctx.model.model_name}"
```

### Benefits

- **Context-aware**: Instructions adapt to runtime state (conversation history, tools used, etc.)
- **Per-run re-evaluation**: Unlike static prompts, dynamic instructions regenerate on each run
- **Provider integration**: Toolsets and other providers can inject their own contextual instructions
- **Flexible context access**: Choose what context you need (AgentContext, RunContext, or both)

### Error Handling

If an instruction function fails:
- Error is logged with context
- Agent initialization continues
- Failed instruction falls back to an empty string instead of its rendered text
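Provider-level failures are isolated the same way: one broken provider does not prevent others from contributing. A synchronous sketch of that pattern (hypothetical provider classes, not agentpool APIs):

```python
import logging

logger = logging.getLogger(__name__)

class GoodProvider:
    name = "good"

    def get_instructions(self):
        return [lambda: "From good provider."]

class BrokenProvider:
    name = "broken"

    def get_instructions(self):
        raise RuntimeError("backend unavailable")

def collect_instructions(providers) -> list:
    """Gather instruction functions, skipping providers that raise."""
    fns = []
    for provider in providers:
        try:
            fns.extend(provider.get_instructions())
        except Exception:
            # Log with context and continue with the remaining providers.
            logger.exception("Provider %s failed; skipping", provider.name)
    return fns

fns = collect_instructions([GoodProvider(), BrokenProvider()])
```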

### See Also

- [Dynamic Instructions Example](../../examples/dynamic-instructions/)
- [ResourceProvider Base Class](../api/resource_providers.md)
- [Instruction Types](../api/instructions.md)
3 changes: 2 additions & 1 deletion docs/examples/mcp_sampling_elicitation/demo.py
@@ -6,9 +6,10 @@

from __future__ import annotations

import anyio
from pathlib import Path

import anyio

from agentpool import Agent
from agentpool_config.mcp_server import StdioMCPServerConfig

110 changes: 74 additions & 36 deletions src/agentpool/agents/native_agent/agent.py
@@ -12,9 +12,8 @@
from uuid import uuid4

import logfire
from pydantic_ai import Agent as PydanticAgent, CallToolsNode, ModelRequestNode, RunContext
from pydantic_ai import Agent as PydanticAgent, CallToolsNode, ModelRequestNode
from pydantic_ai.models import Model
from pydantic_ai.tools import ToolDefinition

from agentpool.agents.base_agent import BaseAgent
from agentpool.agents.events import RunStartedEvent, StreamCompleteEvent
@@ -26,7 +25,6 @@
from agentpool.storage import StorageManager
from agentpool.tools import Tool, ToolManager
from agentpool.tools.exceptions import ToolError
from agentpool.utils.inspection import get_argument_key
from agentpool.utils.result_utils import to_type
from agentpool.utils.streams import merge_queue_into_iterator

Expand Down Expand Up @@ -609,6 +607,7 @@ async def get_agentlet[AgentOutputType](
) -> PydanticAgent[TDeps, AgentOutputType]:
"""Create pydantic-ai agent from current state."""
from agentpool.agents.native_agent.tool_wrapping import wrap_tool
from agentpool.utils.context_wrapping import wrap_instruction

tools = await self.tools.get_tools(state="enabled")
final_type = to_type(output_type) if output_type not in [None, str] else self._output_type
@@ -617,50 +616,89 @@
model_, _settings = self._resolve_model_string(actual_model)
else:
model_ = actual_model
agent = PydanticAgent(

context_for_tools = self.get_context(input_provider=input_provider)

# Collect pydantic_ai.tools.Tool instances using Tool.to_pydantic_ai()
pydantic_ai_tools = []
for tool in tools:
wrapped = wrap_tool(tool, context_for_tools, hooks=self._hook_manager)
pydantic_ai_tool = tool.to_pydantic_ai(function_override=wrapped)
pydantic_ai_tools.append(pydantic_ai_tool)

# Collect and wrap instructions from all resource providers
all_instructions: list[Any] = []

# Start with formatted system prompt as a static instruction
if self._formatted_system_prompt:
all_instructions.append(self._formatted_system_prompt)

# Collect instructions from all providers
for provider in self.tools.providers:
try:
provider_instructions = await provider.get_instructions()
# Wrap each instruction for pydantic-ai compatibility
for instruction_fn in provider_instructions:
try:
wrapped_instruction = wrap_instruction(instruction_fn, fallback="")
all_instructions.append(wrapped_instruction)
except Exception:
# Wrap failure - log and skip this instruction
logger.exception(
"Failed to wrap instruction, skipping",
provider=provider.name,
instruction=instruction_fn,
)
continue
except Exception as e:
# Provider failure - log and continue
logger.exception(
"Failed to get instructions from provider",
provider=provider.name,
error=str(e),
)
continue

return PydanticAgent( # type: ignore[misc]
name=self.name,
model=model_,
model_settings=self.model_settings,
instructions=self._formatted_system_prompt,
instructions=all_instructions,
retries=self._retries,
end_strategy=self._end_strategy,
output_retries=self._output_retries,
deps_type=self.deps_type or NoneType,
output_type=final_type,
deps_type=self.deps_type or NoneType, # type: ignore[arg-type]
output_type=final_type, # type: ignore[arg-type]
tools=pydantic_ai_tools,
builtin_tools=self._builtin_tools,
)

context_for_tools = self.get_context(input_provider=input_provider)

for tool in tools:
wrapped = wrap_tool(tool, context_for_tools, hooks=self._hook_manager)

prepare_fn = None
if tool.schema_override:

def create_prepare(
t: Tool,
) -> Callable[[RunContext[Any], ToolDefinition], Awaitable[ToolDefinition | None]]:
async def prepare_schema(
ctx: RunContext[Any], tool_def: ToolDefinition
) -> ToolDefinition | None:
if not t.schema_override:
return None
return ToolDefinition(
name=t.schema_override.get("name") or t.name,
description=t.schema_override.get("description") or t.description,
parameters_json_schema=t.schema_override.get("parameters"),
)

return prepare_schema
async def _process_node_stream(
self,
node_stream: AsyncIterator[Any],
*,
file_tracker: FileTracker,
pending_tcs: dict[str, BaseToolCallPart],
message_id: str,
) -> AsyncIterator[RichAgentStreamEvent[OutputDataT]]:
"""Process events from a node stream (ModelRequest or CallTools).

prepare_fn = create_prepare(tool)
Args:
node_stream: Stream of events from the node
file_tracker: Tracker for file operations
pending_tcs: Dictionary of pending tool calls
message_id: Current message ID

if get_argument_key(wrapped, RunContext):
agent.tool(prepare=prepare_fn)(wrapped)
else:
agent.tool_plain(prepare=prepare_fn)(wrapped)
return agent # type: ignore[return-value]
Yields:
Processed stream events
"""
async with merge_queue_into_iterator(node_stream, self._event_queue) as merged:
async for event in file_tracker(merged):
if self._cancelled:
break
yield event
if combined := process_tool_event(self.name, event, pending_tcs, message_id):
yield combined

async def _stream_events(
self,
1 change: 1 addition & 0 deletions src/agentpool/docs/gen_examples.py
@@ -12,6 +12,7 @@
DocStyle = Literal["simple", "full"]
EXAMPLES_DIR = Path("src/agentpool_docs/examples")


def create_example_doc(name: str, *, style: DocStyle = "full") -> mk.MkContainer:
"""Create documentation for an example file.
