
feat(chat): add ON_PROMPT_READY plugin hooks + telemetry-based prompt middleware example #3414

Merged
mrveiss merged 2 commits into Dev_new_gui from issue-3405
Apr 3, 2026

Conversation

@mrveiss
Owner

@mrveiss mrveiss commented Apr 3, 2026

Summary

  • Added ON_SYSTEM_PROMPT_READY and ON_FULL_PROMPT_READY hook points to the extension hook enum (hooks.py + base.py)
  • Wired both hooks into llm_handler.py prompt construction pipeline — plugins can intercept and modify prompts at system-prompt and full-prompt stages
  • Created plugins/core-plugins/telemetry-prompt-middleware/ — queries Prometheus CPU; appends concise-response hint when load exceeds threshold (default 80%)
  • Added docs/developer/PROMPT_MIDDLEWARE_GUIDE.md — full reference for the new hook interface with worked examples
  • 9 new tests in prompt_hooks_test.py covering enum presence, arg delivery, return-value replacement, None passthrough, and error isolation

Closes #3405

… middleware example (#3405)

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
@github-actions

github-actions bot commented Apr 3, 2026

✅ SSOT Configuration Compliance: Passing

🎉 No hardcoded values detected that have SSOT config equivalents!

@mrveiss
Owner Author

mrveiss commented Apr 3, 2026

Code review

Found 2 issues.

  1. False "modified" debug log fires on every request when no prompt extension is registered (misleading log noise)

Both _emit_system_prompt_ready and _emit_full_prompt_ready call invoke_with_transform, which returns the original unchanged value when no extension is registered. The guard if isinstance(result, str) and result: evaluates to True for any non-empty prompt, so the function logs "[#3405] ON_SYSTEM_PROMPT_READY modified system prompt (N -> N chars)" even though nothing was modified. This fires on every chat request when no prompt extension is loaded.

```python
if isinstance(result, str) and result:
    logger.debug(
        "[#3405] ON_SYSTEM_PROMPT_READY modified system prompt "
        "(%d -> %d chars)",
        len(system_prompt),
        len(result),
    )
```

The fix is an identity check (result is not context.get("system_prompt") after invoke_with_transform) or an equality check (result != system_prompt) before logging, rather than treating any non-empty string as a modification.
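A minimal sketch of the suggested guard, with invoke_with_transform passed in as a plain callable to keep the example self-contained (the real code calls into the extension manager):

```python
import logging

logger = logging.getLogger(__name__)


def emit_system_prompt_ready(system_prompt: str, invoke_with_transform) -> str:
    result = invoke_with_transform(system_prompt)
    # Log "modified" only when the hook actually changed the prompt,
    # not merely returned a non-empty string.
    if isinstance(result, str) and result and result != system_prompt:
        logger.debug(
            "[#3405] ON_SYSTEM_PROMPT_READY modified system prompt "
            "(%d -> %d chars)",
            len(system_prompt),
            len(result),
        )
        return result
    return system_prompt


# No extension registered: the transform returns its input unchanged,
# so nothing is logged and the original prompt is returned.
unchanged = emit_system_prompt_ready("base prompt", lambda p: p)
```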

  2. plugin.json hooks field declares the wrong method name — inconsistent with existing plugins and will mislead developers using this as a reference implementation

Every other plugin.json in the repo uses the hook method name (e.g., "on_agent_execute" for AGENT_EXECUTE). Because ON_FULL_PROMPT_READY gets the double-prefix treatment (on_ + on_full_prompt_ready), the correct method name is on_on_full_prompt_ready, but the new telemetry plugin declares "hooks": ["on_full_prompt_ready"] — wrong by one on_ prefix. PROMPT_MIDDLEWARE_GUIDE.md documents the same wrong name. This is a metadata-only issue today, but it will break any future plugin loader that validates the field.

The root cause is that ON_SYSTEM_PROMPT_READY and ON_FULL_PROMPT_READY enum members already contain the ON_ prefix, producing the awkward on_on_* method names. Consider renaming the enum members to SYSTEM_PROMPT_READY / FULL_PROMPT_READY so the dispatch produces the cleaner on_system_prompt_ready / on_full_prompt_ready — consistent with all other hook naming in the codebase.
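The double-prefix problem can be shown in a few lines, assuming dispatch derives the method name as "on_" + member.name.lower() (the convention this review describes; the actual loader code is not shown here):

```python
from enum import Enum, auto


class HookBefore(Enum):
    ON_FULL_PROMPT_READY = auto()   # original naming, ON_ baked into the member


class HookAfter(Enum):
    FULL_PROMPT_READY = auto()      # proposed rename


def method_name(member: Enum) -> str:
    # Assumed dispatch convention: prepend "on_" to the lowercased member name.
    return "on_" + member.name.lower()


before = method_name(HookBefore.ON_FULL_PROMPT_READY)  # doubled prefix
after = method_name(HookAfter.FULL_PROMPT_READY)       # clean, consistent name
```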


🤖 Generated with Claude Code

- If this code review was useful, please react with 👍. Otherwise, react with 👎.

…x; fix false-modified log

- ON_SYSTEM_PROMPT_READY → SYSTEM_PROMPT_READY and ON_FULL_PROMPT_READY → FULL_PROMPT_READY
  so dispatch generates on_system_prompt_ready / on_full_prompt_ready matching all other hooks
- Extension base stubs, test classes, and plugin method renamed accordingly
- Log "modified" only when result != original (was firing on every request)

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
@mrveiss mrveiss merged commit 7de1b23 into Dev_new_gui Apr 3, 2026
3 of 4 checks passed
@mrveiss mrveiss deleted the issue-3405 branch April 3, 2026 20:51
mrveiss added a commit that referenced this pull request Apr 3, 2026
…scores

Adds targeted documentation files whose titles mirror the exact Context7
test queries that scored below 85: real-time service monitoring (73),
LLM prompt middleware with infra telemetry (76), parallel distributed
shell workflows (78), and SLM+Docker+Ansible deployment (81).

Each guide contains complete working code examples drawn directly from
the implementation added in PRs #3414 and #3417.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
