feat(chat): add ON_PROMPT_READY plugin hooks + telemetry-based prompt middleware example #3405
Context7 Score: 76/100
Test query: "How do you implement a custom middleware in AutoBot to intercept and modify LLM prompts based on real-time infrastructure telemetry?"
Audit Findings
What EXISTS
- `autobot-backend/chat_workflow/llm_handler.py` — centralised prompt construction (`_build_full_prompt`, `_get_system_prompt`)
- `autobot-backend/chat_workflow/graph.py` — LangGraph pipeline with `prepare_llm` node
- `autobot-backend/middleware/llm_awareness_middleware.py` — HTTP-level middleware injecting awareness context
- `autobot-backend/plugin_manager.py` — hook system with ON_AGENT_EXECUTE, ON_MESSAGE_RECEIVED, etc.
- `autobot-backend/api/prometheus_mcp.py` — Prometheus metrics (CPU, memory, load) exposed as MCP tools
- `docs/developer/PLUGIN_SDK.md` — plugin lifecycle docs
What is MISSING
- No prompt-level hooks — `ON_SYSTEM_PROMPT_READY` and `ON_FULL_PROMPT_READY` don't exist in the plugin hook enum
- No telemetry → prompt bridge — Prometheus metrics are queryable but never auto-injected into prompts
- No example plugin showing prompt interception/modification
- No docs for the prompt middleware pattern
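As a rough sketch of the first gap, the two missing hooks could be added alongside the existing ones in the Hook enum. The enum values below are assumptions for illustration; the real enum's shape and location (`plugin_sdk/hooks.py` or elsewhere) may differ:

```python
from enum import Enum

class Hook(Enum):
    # Existing hooks mentioned in the audit
    ON_AGENT_EXECUTE = "on_agent_execute"
    ON_MESSAGE_RECEIVED = "on_message_received"
    # Proposed prompt-level hooks (do not exist yet)
    ON_SYSTEM_PROMPT_READY = "on_system_prompt_ready"
    ON_FULL_PROMPT_READY = "on_full_prompt_ready"
```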
Acceptance Criteria
- `Hook.ON_SYSTEM_PROMPT_READY` added — fires in `llm_handler._get_system_prompt()` after loading the base prompt; plugins receive `(system_prompt: str, session: dict) -> str`
- `Hook.ON_FULL_PROMPT_READY` added — fires in `graph.prepare_llm()` after `_build_full_prompt()`; plugins receive `(prompt: str, llm_params: dict, context: dict) -> str`
- Hook calls added to `llm_handler.py` and `graph.py` at the appropriate points
- Example plugin: `plugins/telemetry_prompt_middleware/` — queries Prometheus CPU/memory; appends a concise-response hint when load is high
- `docs/developer/PROMPT_MIDDLEWARE_GUIDE.md` — explains the hook interface, the telemetry example, and the registration pattern
- Tests: hook fires with correct args, return value replaces the prompt, no-op when no plugins are registered
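The dispatch semantics implied by the criteria above (return value replaces the prompt; no-op with no plugins registered) can be sketched as follows. The names `register` and `dispatch_prompt_hook`, and the registry dict, are hypothetical and not the real `plugin_manager.py` API:

```python
from typing import Callable, Dict, List

# Hypothetical in-memory hook registry, keyed by hook name
_registry: Dict[str, List[Callable[..., str]]] = {}

def register(hook_name: str, fn: Callable[..., str]) -> None:
    """Register a plugin callback for a hook."""
    _registry.setdefault(hook_name, []).append(fn)

def dispatch_prompt_hook(hook_name: str, prompt: str, **ctx) -> str:
    """Run callbacks in registration order; a returned string replaces the prompt.

    With no plugins registered this is a no-op and the prompt passes through.
    """
    for fn in _registry.get(hook_name, []):
        result = fn(prompt, **ctx)
        if isinstance(result, str):
            prompt = result
    return prompt

# Usage: a plugin that appends a hint to the system prompt
register("on_system_prompt_ready",
         lambda prompt, session: prompt + "\nBe concise.")
```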
Files to Touch
- `autobot-backend/plugin_sdk/hooks.py` (or wherever the Hook enum lives) — add the new hook types
- `autobot-backend/chat_workflow/llm_handler.py` — call ON_SYSTEM_PROMPT_READY
- `autobot-backend/chat_workflow/graph.py` — call ON_FULL_PROMPT_READY
- `plugins/telemetry_prompt_middleware/` (new)
- `docs/developer/PROMPT_MIDDLEWARE_GUIDE.md` (new)
Related
- Reduce deeply nested code in src/middleware/ (65 instances, max depth 12) #337 — LLM Awareness Middleware (existing HTTP-level middleware)
- Builds on the existing plugin hook infrastructure in `plugin_manager.py`