feat: add dynamic instructions support for runtime context-aware prompts #23

Leoyzen wants to merge 1 commit into phil65:main
Conversation
force-pushed from 4b1586a to 624fed1
…re prompts

Add runtime_checkable instruction function types supporting 4 context patterns:

- No context, AgentContext, RunContext, or both contexts
- Both sync and async variants

Add context wrapping utility to adapt instruction functions for PydanticAI.

Extend ResourceProvider with a get_instructions() method returning list[InstructionFunc].

Add ProviderInstructionConfig for YAML-based configuration:

- type: provider with ref or import_path
- Mutual exclusion validation

Integrate with NativeAgent:

- Collect and wrap instructions from providers in get_agentlet()
- Pass to the PydanticAgent constructor
- Fail-safe error handling

Add comprehensive documentation and 45+ tests.
force-pushed from 624fed1 to 162c433
Modifying the system prompt during a session breaks caching (some context here, for example: https://x.com/trq212/status/2024574133011673516). What is this needed for? Usually it makes more sense to append stuff to the history instead of modifying it.
Hi @phil65, thanks for raising the caching concern! Let me explain the rationale behind this feature and why it won't negatively impact prefix caching in practice.

Why Dynamic Instructions Are Needed

Currently, agentpool only supports static system prompts or static prompt functions. However, there are many real-world scenarios that require context-aware system prompts which cannot be statically injected.

Use Case 1: Delegation Toolset Context

For a delegation toolset (like the one I plan to implement in a follow-up), the agent needs to know which delegate agents are available. This information is inherently dynamic: you don't know which agents will be available until runtime, so it cannot be hardcoded.

Use Case 2: Dynamic Skills Metadata Injection

Another example is injecting partial skills metadata, where the provider needs to describe the skills actually registered at runtime. Again, this depends on runtime state and cannot be determined statically.

Why This Doesn't Hurt Prefix Caching

I understand your concern about prefix caching, but these dynamic instructions are stable within a session: the available agents and skills don't change mid-conversation, so the rendered text stays identical and the prefix cache keeps working.
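To make the session-stability point concrete, here is a minimal sketch. `AgentContext` below is a hypothetical stand-in defined locally, not the real agentpool type: an instruction function built only from state fixed at session start renders byte-identical text on every run, so the cached prompt prefix is never invalidated.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class AgentContext:
    """Hypothetical stand-in for agentpool's real context type."""
    available_agents: tuple[str, ...]


def delegation_instructions(ctx: AgentContext) -> str:
    # Built only from state that is fixed at session start, so the
    # rendered text is identical on every run of the session.
    agents = ", ".join(ctx.available_agents)
    return f"You may delegate tasks to these agents: {agents}."


ctx = AgentContext(available_agents=("researcher", "coder"))
first = delegation_instructions(ctx)
second = delegation_instructions(ctx)  # re-evaluated on a later run
assert first == second  # identical text, so the prefix cache stays valid
```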
Why This Approach vs. Case-by-Case Extensions

While we could extend agentpool's existing logic to handle these scenarios case by case (e.g., special handling for delegation, special handling for skills), the dynamic instruction approach provides a single, general mechanism instead of a growing set of special cases.

Current Status

This PR is currently in draft state and depends on #24 for some underlying infrastructure. I'll keep it as a draft until #24 is merged, then rebase and mark it ready for review. Happy to discuss further or explore alternative approaches if you have concerns!
Wouldn't a tool like list_available_agents work? It's already implemented somewhere.
@phil65 Good question! We've tested both approaches, and direct injection consistently outperforms tool-based discovery in practice.

Why tools fall short: to make the model reliably call list_available_agents, it has to be told to do so and then spend an extra tool-call round trip before it has the information, which is neither free nor fully reliable.

Direct injection is more efficient: the agent list is already in context when the run starts, with no extra turns.

Standard practice: dynamic context injection is common in coding agents (e.g., Cline, Claude Code). The skills system itself is designed this way: available skills are described in the system prompt.

The dynamic instructions here are stable per session (agents/skills don't change mid-conversation), so prefix caching remains effective.
I don't really know about Cline, but I am quite sure that Claude Code doesn't modify system prompts for any skill-related stuff mid-session. Claude Code (like any other agent SDK) is optimized to keep the cache "working" (see that X post I linked). They always prefer to just "add" to the history, never modify it. The same applies to skills: the skills get listed in the system prompt on session start, and then discovery is done by appending to the history, not by modifying it.

Doesn't this conflict with the PR description, though? The description says the "functions are re-evaluated on each agent run with access to runtime context". If things are evaluated only once per session, and not on each agent run, then I would be fine with this.
Summary
This PR adds support for dynamic, context-aware instructions from ResourceProviders. Previously, ResourceProviders could only provide static prompts via get_prompts(). Now they can also provide instruction functions that are re-evaluated on each agent run with access to runtime context (conversation state, user data, model info, etc.).

Motivation
The current ResourceProvider system has a limitation: get_prompts() returns static BasePrompt objects that are evaluated once during agent initialization. There's no way for providers to generate instructions based on runtime state such as conversation history, user data, or model info.

This PR solves that by adding a new get_instructions() method that returns functions instead of static strings. These functions receive runtime context and are called on every agent run, enabling truly dynamic, context-aware instructions.

Key Changes
1. Instruction Function Types (src/agentpool/prompts/instructions.py)

Introduces typed protocols for instruction functions supporting 4 context patterns:

- def instructions() -> str
- def instructions(ctx: AgentContext) -> str
- def instructions(ctx: RunContext) -> str
- def instructions(agent_ctx: AgentContext, run_ctx: RunContext) -> str

Each pattern supports both sync and async variants.
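As an illustration, the four patterns look like this in plain Python. `AgentContext` and `RunContext` are defined here as empty stand-ins so the sketch is self-contained; the real types come from agentpool and PydanticAI:

```python
import asyncio


class AgentContext:  # stand-in for agentpool's AgentContext
    pass


class RunContext:  # stand-in for PydanticAI's RunContext
    pass


def no_context() -> str:
    return "Be concise."


def agent_only(ctx: AgentContext) -> str:
    return "You are part of an agent pool."


def run_only(ctx: RunContext) -> str:
    return "Focus on the current run."


async def both_contexts(agent_ctx: AgentContext, run_ctx: RunContext) -> str:
    # every pattern also comes in an async flavor
    return "Combine pool-level and run-level context."


assert asyncio.run(both_contexts(AgentContext(), RunContext())) == (
    "Combine pool-level and run-level context."
)
```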
2. Context Wrapping Utility (src/agentpool/utils/context_wrapping.py)

- wrap_instruction(): adapts instruction functions to PydanticAI's expected signature

3. ResourceProvider Extension (src/agentpool/resource_providers/)

- New get_instructions() method on the ResourceProvider base class
- New InstructionProvider class for YAML-configured instruction providers
- Existing providers using get_prompts() continue to work unchanged

4. YAML Configuration (src/agentpool_config/instructions.py)

- New ProviderInstructionConfig: type: provider with either ref or import_path (mutually exclusive, enforced by validation)

5. Native Agent Integration (src/agentpool/agents/native_agent/agent.py)

- get_agentlet() collects and wraps instruction functions from all providers
- Wrapped functions are passed to the PydanticAI agent via the instructions parameter
- Fail-safe error handling

Usage Example
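Since the example itself did not survive formatting here, the intended flow can be sketched roughly as follows. `ResourceProvider` and `AgentContext` below are simplified local stand-ins, not the real agentpool APIs:

```python
class AgentContext:  # simplified stand-in for agentpool's context type
    def __init__(self, agent_names: list[str]) -> None:
        self.agent_names = agent_names


class ResourceProvider:  # simplified stand-in for the real base class
    def get_prompts(self) -> list:
        return []  # existing static-prompt hook

    def get_instructions(self) -> list:
        return []  # new hook: returns instruction *functions*


class DelegationProvider(ResourceProvider):
    def get_instructions(self) -> list:
        def team(ctx: AgentContext) -> str:
            return "You can delegate to: " + ", ".join(ctx.agent_names)
        return [team]


# In the PR, get_agentlet() would collect these functions from every
# provider, wrap them, and hand them to PydanticAI via `instructions=`.
# Here we simply evaluate one directly against a context:
ctx = AgentContext(["planner", "critic"])
provider = DelegationProvider()
for func in provider.get_instructions():
    print(func(ctx))
```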
Testing

- 45+ new tests covering the new functionality
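As a flavor of what such tests might check, here is a self-contained sketch. The toy wrap_instruction below is reimplemented from the description for illustration, not taken from the PR's actual code; it dispatches on parameter count and annotation to normalize every pattern (sync only, for brevity) to the single run-context signature PydanticAI expects:

```python
import inspect


class AgentContext:  # stand-ins for the real context types
    pass


class RunContext:
    pass


def wrap_instruction(func, agent_ctx):
    """Toy adapter: normalize any supported sync signature to (run_ctx) -> str."""
    params = list(inspect.signature(func).parameters.values())

    def adapted(run_ctx):
        if not params:
            return func()                    # def f() -> str
        if len(params) == 2:
            return func(agent_ctx, run_ctx)  # def f(agent_ctx, run_ctx) -> str
        if params[0].annotation is AgentContext:
            return func(agent_ctx)           # def f(ctx: AgentContext) -> str
        return func(run_ctx)                 # def f(ctx: RunContext) -> str

    return adapted


def needs_agent(ctx: AgentContext) -> str:
    return "agent context seen"


def test_wrapping_dispatches_on_annotation():
    adapted = wrap_instruction(needs_agent, AgentContext())
    assert adapted(RunContext()) == "agent context seen"


test_wrapping_dispatches_on_annotation()
```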
Documentation

- docs/advanced/system-prompts.md
- docs/configuration/resources.md