Skip to content

Comments

feat: add dynamic instructions support for runtime context-aware prompts#23

Draft
Leoyzen wants to merge 1 commit intophil65:mainfrom
Leoyzen:feature/dynamic-instruction-for-provider
Draft

feat: add dynamic instructions support for runtime context-aware prompts#23
Leoyzen wants to merge 1 commit intophil65:mainfrom
Leoyzen:feature/dynamic-instruction-for-provider

Conversation

@Leoyzen
Copy link
Contributor

@Leoyzen Leoyzen commented Feb 8, 2026

Summary

This PR adds support for dynamic, context-aware instructions from ResourceProviders. Previously, ResourceProviders could only provide static prompts via get_prompts(). Now they can also provide instruction functions that are re-evaluated on each agent run with access to runtime context (conversation state, user data, model info, etc.).

Motivation

The current ResourceProvider system has a limitation: get_prompts() returns static BasePrompt objects that are evaluated once during agent initialization. There's no way for providers to generate instructions based on:

  • Current conversation state
  • Runtime user preferences
  • Dynamic system status
  • Model-specific context

This PR solves this by adding a new get_instructions() method that returns functions instead of static strings. These functions receive runtime context and are called on every agent run, enabling truly dynamic, context-aware instructions.

Key Changes

1. Instruction Function Types (src/agentpool/prompts/instructions.py)

Introduces typed protocols for instruction functions supporting 4 context patterns:

  • No context: def instructions() -> str
  • AgentContext only: def instructions(ctx: AgentContext) -> str
  • RunContext only: def instructions(ctx: RunContext) -> str
  • Both contexts: def instructions(agent_ctx: AgentContext, run_ctx: RunContext) -> str

All variants support both sync and async variants.

2. Context Wrapping Utility (src/agentpool/utils/context_wrapping.py)

  • wrap_instruction(): Adapts instruction functions to PydanticAI's expected signature
  • Auto-detects function signatures and injects appropriate contexts
  • Built-in error handling with fallback strings

3. ResourceProvider Extension (src/agentpool/resource_providers/)

  • New get_instructions() method on ResourceProvider base class
  • New InstructionProvider class for YAML-configured instruction providers
  • Fully backward compatible - existing get_prompts() continue to work

4. YAML Configuration (src/agentpool_config/instructions.py)

agents:
  my_agent:
    type: native
    model: openai:gpt-4o
    
    instructions:
      - "You are a helpful assistant."  # Static string (existing)
      - type: provider                  # Dynamic from provider (new)
        ref: my_provider               # Reference existing provider

5. Native Agent Integration (src/agentpool/agents/native_agent/agent.py)

  • get_agentlet() collects and wraps instruction functions from all providers
  • Combines static system prompts with dynamic instruction functions
  • Passes combined list to PydanticAgent's instructions parameter
  • Error isolation: one provider failing doesn't break others

Usage Example

from agentpool.resource_providers import ResourceProvider
from agentpool.prompts.instructions import InstructionFunc

class UserContextProvider(ResourceProvider):
    """Provider that injects user-specific context into instructions."""
    
    async def get_instructions(self) -> list[InstructionFunc]:
        return [
            self._get_user_context,
            self._get_system_status,
        ]
    
    async def _get_user_context(self, ctx: AgentContext) -> str:
        """Dynamic instruction with runtime context."""
        return f"The current user is {ctx.deps.user_name} with role {ctx.deps.role}"
    
    def _get_system_status(self) -> str:
        """Static instruction (evaluated each run but no context needed)."""
        return "System version: 2.0"
agents:
  assistant:
    type: native
    model: openai:gpt-4o
    toolsets:
      - type: custom
        import_path: myapp.UserContextProvider
        name: user_provider
    
    instructions:
      - "You are a helpful assistant for our platform."
      - type: provider
        ref: user_provider  # Injects user context on each run

Testing

  • 45+ new unit tests covering all context patterns and error scenarios
  • Full type safety with mypy strict mode
  • All existing tests continue to pass

Documentation

  • Added comprehensive documentation in docs/advanced/system-prompts.md
  • Added configuration guide in docs/configuration/resources.md
  • Inline docstrings for all public APIs

@Leoyzen Leoyzen force-pushed the feature/dynamic-instruction-for-provider branch from 4b1586a to 624fed1 Compare February 9, 2026 01:33
…re prompts

Add runtime_checkable instruction function types supporting 4 context patterns:
- No context, AgentContext, RunContext, or both contexts
- Both sync and async variants
Add context wrapping utility to adapt instruction functions for PydanticAI.
Extend ResourceProvider with get_instructions() method returning list[InstructionFunc].
Add ProviderInstructionConfig for YAML-based configuration:
- type: provider with ref or import_path
- Mutual exclusion validation
Integrate with NativeAgent:
- Collect and wrap instructions from providers in get_agentlet()
- Pass to PydanticAgent constructor
- Fail-safe error handling
Add comprehensive documentation and 45+ tests.
@Leoyzen Leoyzen force-pushed the feature/dynamic-instruction-for-provider branch from 624fed1 to 162c433 Compare February 9, 2026 01:41
@Leoyzen Leoyzen marked this pull request as draft February 9, 2026 04:12
@phil65
Copy link
Owner

phil65 commented Feb 20, 2026

Modifying system prompt during a session breaks caching (some context here for example: https://x.com/trq212/status/2024574133011673516 ) . what is this needed for? usually it makes more sense to append stuff to the history instead of modifying it.

@Leoyzen
Copy link
Contributor Author

Leoyzen commented Feb 20, 2026

Hi @phil65, thanks for raising the caching concern! Let me explain the rationale behind this feature and why it won't negatively impact prefix caching in practice.

Why Dynamic Instructions Are Needed

Currently, agentpool only supports static or static function system prompts. However, there are many real-world scenarios that require context-aware system prompts which cannot be statically injected:

Use Case 1: Delegation Toolset Context

For a delegation toolset (like the one I plan to implement in a follow-up), the agent needs to:

  • Know which agents are currently active in the pool
  • Filter out itself from the available agents
  • Inject other agents' descriptions into its system prompt

This information is inherently dynamic — you don't know which agents will be available until runtime, so it cannot be hardcoded.

Use Case 2: Dynamic Skills Metadata Injection

Another example is injecting partial skills metadata. The provider needs to:

  • Inspect the current context to discover available skills
  • Selectively inject only relevant metadata into the system prompt

Again, this depends on runtime state and cannot be determined statically.

Why This Doesn't Hurt Prefix Caching

I understand your concern about prefix caching, but these dynamic instructions have important characteristics:

  1. Stable within a session: While the content is computed at runtime, it remains relatively stable throughout the conversation. The available agents and skills don't change often during a single session.

  2. Computed once per turn, not mid-inference: The instruction functions are evaluated at the start of each agent run, not during token generation. This is different from modifying prompts mid-conversation.

  3. Controlled variability: The dynamic nature is limited to specific provider-decided content. The core system prompt structure remains static.

Why This Approach vs. Case-by-Case Extensions

While we could extend agentpool's existing logic to handle these scenarios case by case (e.g., special handling for delegation, special handling for skills), the dynamic instruction approach provides:

  • A unified extension point: Toolset providers can inject dynamic prompts through a consistent interface
  • Better separation of concerns: Each provider decides what context it needs, rather than the core framework guessing
  • Composability: Multiple providers can contribute dynamic instructions without interfering with each other

Current Status

This PR is currently in draft state and depends on #24 for some underlying infrastructure. I'll keep it as draft until #24 is merged, then rebase and mark it ready for review.

Happy to discuss further or explore alternative approaches if you have concerns!

@phil65
Copy link
Owner

phil65 commented Feb 20, 2026

Wouldnt a tool like list_available_agents work? Its already implemented somewhere.

@Leoyzen
Copy link
Contributor Author

Leoyzen commented Feb 20, 2026

@phil65 Good question! We've tested both approaches, and direct injection consistently outperforms tool-based discovery in practice.

Why tools fall short:

To make the model reliably call list_available_agents or list_skills at the right moment, you still need to inject guiding rules in the system prompt — essentially telling it "when to call what." By the time you've added those instructions, you might as well just inject the metadata directly.

Direct injection is more efficient:

  • Token count: Injecting compact metadata (agent roles, skill descriptions) is often comparable in size to verbose tool-calling instructions
  • Latency: One forward pass vs. potential tool call roundtrip
  • Reliability: No risk of the model forgetting to check

Standard practice:

Dynamic context injection is common in coding agents (e.g., Cline, Claude Code). The skills system itself is designed this way — frontmatter for static metadata injection, instruction for dynamic context. This PR extends that pattern to ResourceProviders.

The dynamic instructions here are stable per session (agents/skills don't change mid-conversation), so prefix caching remains effective.

@phil65
Copy link
Owner

phil65 commented Feb 20, 2026

I dont really know about Cline, but I am quite sure that claude code doesnt modify system prompts for any skill-related stuff mid-session. Claude code (like any other agent sdk) is optimized to keep the cache "working" (see that x post I linked). They always prefer to just "add" to the history, never modify it. The same applies to skills: the skills get listed in the system prompt on session start, and then the discovery is done by appending to the history, not by modifying it.

The dynamic instructions here are stable per session (agents/skills don't change mid-conversation), so prefix caching remains effective.

This here conflicts with the PR description? The PR description says "functions are re-evaluated on each agent run with access to runtime context"?

If things are just evaluated once per session, and not for each Agent run, then I would be fine with this.

Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment

Labels

None yet

Projects

None yet

Development

Successfully merging this pull request may close these issues.

2 participants