feat(platform-integrations): add codex evolve-lite installer#111
visahak wants to merge 7 commits into AgentToolkit:main
Conversation
📝 Walkthrough

Adds first-class Codex platform support: a new evolve-lite Codex plugin (manifest, skills, scripts, shared lib copy), installer logic to install/uninstall/upsert the marketplace and hooks with atomic JSON upserts, an updated INSTALL_SPEC with sentinel markers, and comprehensive platform integration tests (install, idempotency, preservation, dry-run, status).
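The atomic JSON upserts mentioned above can be sketched roughly as follows (a minimal illustration, not the installer's actual code; the `upsert_plugin_entry` name, the `plugins`-array schema, and the `name`-keyed merge are assumptions based on this walkthrough):

```python
import json
import os
import tempfile
from pathlib import Path


def upsert_plugin_entry(path: Path, entry: dict) -> None:
    """Read a marketplace JSON file, upsert a plugin entry by name,
    and write the result back atomically via temp file + rename."""
    data = {"plugins": []}
    if path.exists():
        data = json.loads(path.read_text())
    plugins = data.setdefault("plugins", [])
    for i, existing in enumerate(plugins):
        if isinstance(existing, dict) and existing.get("name") == entry["name"]:
            plugins[i] = entry  # replace in place, keep neighbors untouched
            break
    else:
        plugins.append(entry)
    # Atomic write: dump to a temp file in the same directory, then rename.
    fd, tmp = tempfile.mkstemp(dir=path.parent, suffix=".tmp")
    try:
        with os.fdopen(fd, "w") as f:
            json.dump(data, f, indent=2)
        os.replace(tmp, path)  # atomic on POSIX
    except BaseException:
        os.unlink(tmp)
        raise
```

The temp-file-plus-`os.replace` dance is what makes a concurrent reader see either the old file or the new one, never a half-written JSON document.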
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant CLI as User CLI
    participant Installer as install.sh
    participant FS as File System
    participant JSON as Config JSON
    CLI->>Installer: install --platform codex
    Installer->>FS: Detect Codex (codex on PATH or .codex/ or .agents/plugins/marketplace.json)
    Installer->>FS: Copy plugin dir -> plugins/evolve-lite
    Installer->>FS: Copy shared lib -> plugins/evolve-lite/lib/entity_io.py
    Installer->>JSON: Load .agents/plugins/marketplace.json
    JSON-->>Installer: current entries
    Installer->>Installer: Upsert evolve-lite marketplace entry (atomic read-modify-write)
    Installer->>JSON: Write marketplace.json
    Installer->>JSON: Load .codex/hooks.json
    JSON-->>Installer: current hooks
    Installer->>Installer: Upsert UserPromptSubmit hook referencing retrieve_entities.py (atomic read-modify-write)
    Installer->>JSON: Write hooks.json
    Installer-->>CLI: Installation complete
```

```mermaid
sequenceDiagram
    participant Codex as Codex
    participant Hook as UserPromptSubmit Hook
    participant Recall as retrieve_entities.py
    participant FS as File System
    Codex->>Hook: Trigger UserPromptSubmit
    Hook->>Recall: Pipe prompt JSON to stdin
    Recall->>FS: Read .evolve/entities/ files
    FS-->>Recall: Return entity files with YAML frontmatter
    Recall->>Recall: Format entities as Markdown
    Recall-->>Codex: Print formatted entities (stdout)
    Codex->>Codex: Inject entities into prompt as developer context
```
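The recall flow in the second diagram can be sketched as a small stdin-to-stdout filter (a hypothetical illustration of the hook contract, not the plugin's actual `retrieve_entities.py`; the `*.md` file layout, the `run_hook` name, and the frontmatter handling are assumptions):

```python
import sys
from pathlib import Path


def format_entities(entities_dir: Path) -> str:
    """Render each entity file (YAML frontmatter + body) as a Markdown bullet."""
    lines = ["## Known entities"]
    for f in sorted(entities_dir.glob("*.md")):
        text = f.read_text()
        body = text
        # Strip a leading '---' ... '---' frontmatter block if present.
        if text.startswith("---"):
            parts = text.split("---", 2)
            if len(parts) == 3:
                body = parts[2]
        lines.append(f"- {body.strip()}")
    return "\n".join(lines)


def run_hook(stdin=sys.stdin, entities_dir: Path = Path(".evolve/entities")) -> str:
    """Hook entrypoint: the prompt JSON arrives on stdin; whatever is printed
    to stdout is what Codex injects back as developer context."""
    _ = stdin.read()  # the prompt payload is unused in this minimal sketch
    return format_entities(entities_dir) if entities_dir.is_dir() else ""
```

The contract is deliberately simple: consume stdin, print Markdown, exit 0; everything else (where entities live, how they are formatted) is plugin policy.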
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes

Possibly related PRs
🚥 Pre-merge checks: ✅ 2 | ❌ 1

❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
🧹 Nitpick comments (5)
platform-integrations/codex/plugins/evolve-lite/skills/learn/scripts/save_entities.py (2)
76-88: Consider handling write failures gracefully.

Per the `write_entity_file` implementation (context snippet from `entity_io.py:256-303`), the function can raise `BaseException` on file operation failures. Currently, an exception mid-batch will abort the loop, leaving a partially written state without reporting how many entities were successfully saved. Consider wrapping the write in a try/except to continue processing remaining entities and report failures:
♻️ Suggested fix

```diff
 added_count = 0
+failed_count = 0
 for entity in new_entities:
     content = entity.get("content")
     if not content:
         log(f"Skipping entity without content: {entity}")
         continue
     if normalize(content) in existing_contents:
         log(f"Skipping duplicate: {content[:60]}")
         continue
-    path = write_entity_file(entities_dir, entity)
-    existing_contents.add(normalize(content))
-    added_count += 1
-    log(f"Wrote: {path}")
+    try:
+        path = write_entity_file(entities_dir, entity)
+        existing_contents.add(normalize(content))
+        added_count += 1
+        log(f"Wrote: {path}")
+    except Exception as e:
+        failed_count += 1
+        log(f"Failed to write entity: {e}")
+        print(f"Warning: Failed to write entity '{content[:40]}...': {e}", file=sys.stderr)

 total = len(existing_entities) + added_count
 log(f"Added {added_count} new entities. Total: {total}")
 print(f"Added {added_count} new entity(ies). Total: {total}")
+if failed_count:
+    print(f"Warning: {failed_count} entity(ies) failed to save.", file=sys.stderr)
 print(f"Entities stored in: {entities_dir}")
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@platform-integrations/codex/plugins/evolve-lite/skills/learn/scripts/save_entities.py` around lines 76 - 88, The loop over new_entities should handle write failures from write_entity_file gracefully: wrap the call to write_entity_file(entities_dir, entity) in a try/except (catch BaseException or Exception per project style), log the error including entity identifier/content and the exception, increment a new failed_count on error and continue so remaining entities are processed, and only add to existing_contents and added_count when the write succeeds; also update the final summary/log to report added_count and failed_count.
36-36: Module-level side effect on import.

Same issue as in `retrieve_entities.py`: the `log("Script started")` call executes on import rather than on script execution.

♻️ Suggested fix

Move inside `main()`:

```diff
-log("Script started")
-
-
 def normalize(text):
```

```diff
 def main():
+    log("Script started")
     try:
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@platform-integrations/codex/plugins/evolve-lite/skills/learn/scripts/save_entities.py` at line 36, The module-level call log("Script started") causes a side effect at import time; move that call into the script's entrypoint so it runs only when executed (e.g., place log("Script started") at the start of main()). Update the function named main() in this module to include the log call before the try block so importing the module no longer triggers the log; mirror the same change applied in retrieve_entities.py.

platform-integrations/codex/plugins/evolve-lite/skills/recall/scripts/retrieve_entities.py (2)
64-70: Environment variable logging may be noisy in production.

Logging all environment variables (even with masking) on every hook invocation could create significant log noise. Consider guarding this with a debug flag or removing it before release.
♻️ Suggested fix

```diff
-log("=== Environment Variables ===")
-for key, value in sorted(os.environ.items()):
-    if any(sensitive in key.upper() for sensitive in ["PASSWORD", "SECRET", "TOKEN", "KEY", "API"]):
-        log(f"  {key}=***MASKED***")
-    else:
-        log(f"  {key}={value}")
-log("=== End Environment Variables ===")
+if os.environ.get("EVOLVE_DEBUG"):
+    log("=== Environment Variables ===")
+    for key, value in sorted(os.environ.items()):
+        if any(sensitive in key.upper() for sensitive in ["PASSWORD", "SECRET", "TOKEN", "KEY", "API"]):
+            log(f"  {key}=***MASKED***")
+        else:
+            log(f"  {key}={value}")
+    log("=== End Environment Variables ===")
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@platform-integrations/codex/plugins/evolve-lite/skills/recall/scripts/retrieve_entities.py` around lines 64 - 70, The code block in retrieve_entities.py currently logs all environment variables via log(...) which can create noisy/unsafe production logs; modify the logic in the function that calls log (the environment dump loop) to only run when a debug flag is enabled (e.g., an existing DEBUG or a new verbose flag) or remove it entirely before release; specifically update the loop that iterates os.environ.items() so it is gated by a check like if DEBUG: ... (or remove the block), and ensure the masking behavior for keys containing "PASSWORD","SECRET","TOKEN","KEY","API" remains if you keep debug logging.
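As a standalone sketch of the debug-gated masking this comment recommends (illustrative only; the `EVOLVE_DEBUG` flag name and the `masked_environ` / `log_environ` helpers are assumptions, not the plugin's code):

```python
import os

SENSITIVE = ("PASSWORD", "SECRET", "TOKEN", "KEY", "API")


def masked_environ(environ) -> dict:
    """Return a copy of the environment with sensitive values masked."""
    return {
        key: "***MASKED***" if any(s in key.upper() for s in SENSITIVE) else value
        for key, value in sorted(environ.items())
    }


def log_environ(log, environ=os.environ) -> None:
    """Dump the environment only when an explicit debug flag is set,
    so normal hook invocations stay quiet."""
    if not environ.get("EVOLVE_DEBUG"):
        return
    log("=== Environment Variables ===")
    for key, value in masked_environ(environ).items():
        log(f"  {key}={value}")
    log("=== End Environment Variables ===")
```

Separating the masking from the gating keeps the masking testable on its own and makes it harder to accidentally remove when the debug block is deleted.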
27-27: Module-level side effect on import.

The `log("Script started")` call at line 27 executes whenever this module is imported, not just when run as a script. Consider moving this inside `main()` or guarding it with `if __name__ == "__main__"`.

♻️ Suggested fix

```diff
-log("Script started")
-
-
 def format_entities(entities):
```

Then add at the start of `main()`:

```diff
 def main():
+    log("Script started")
     try:
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@platform-integrations/codex/plugins/evolve-lite/skills/recall/scripts/retrieve_entities.py` at line 27, The module currently calls log("Script started") at import time; move that statement into the program entry so it only runs when executed. Either relocate the log(...) call to the start of the main() function (e.g., add log("Script started") as the first line inside main()) or remove it from module scope and wrap the script invocation with if __name__ == "__main__": so the log runs only on direct execution; update references to the log symbol and main() accordingly.

tests/platform_integrations/test_idempotency.py (1)
180-187: Potential `KeyError` if hooks structure is malformed.

If `second_hooks["hooks"]` doesn't contain `"UserPromptSubmit"`, this will raise a `KeyError`. Consider using `.get()` for defensive access, consistent with the helper `_hooks_have_evolve_recall` in `test_codex.py`.
- prompt_hooks = second_hooks["hooks"]["UserPromptSubmit"] + prompt_hooks = second_hooks.get("hooks", {}).get("UserPromptSubmit", [])🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/platform_integrations/test_idempotency.py` around lines 180 - 187, The test currently does direct indexing into second_hooks["hooks"]["UserPromptSubmit"], which can raise KeyError if the structure is missing; change the access to use defensive .get calls (e.g., prompt_hooks = second_hooks.get("hooks", {}).get("UserPromptSubmit", [])) and keep using group.get("hooks", []) when building evolve_hooks so the list comprehension for evolve_hooks and the subsequent assert won't crash on malformed input—mirror the defensive pattern used in _hooks_have_evolve_recall to locate the issue around the prompt_hooks and evolve_hooks variables.
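The defensive access pattern recommended here, shown in isolation (a generic sketch, not the test file's actual code; the `get_prompt_hooks` helper is hypothetical):

```python
def get_prompt_hooks(config: dict) -> list:
    """Safely extract hooks.UserPromptSubmit, tolerating missing keys
    and wrong-typed containers instead of raising KeyError/TypeError."""
    hooks = config.get("hooks", {})
    if not isinstance(hooks, dict):
        return []
    groups = hooks.get("UserPromptSubmit", [])
    return groups if isinstance(groups, list) else []
```

Chained `.get()` calls with typed fallbacks turn every malformed shape into an empty list, so the assertion that follows fails with a clear message instead of crashing the test.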
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In
`@platform-integrations/codex/plugins/evolve-lite/skills/learn/scripts/save_entities.py`:
- Around line 76-88: The loop over new_entities should handle write failures
from write_entity_file gracefully: wrap the call to
write_entity_file(entities_dir, entity) in a try/except (catch BaseException or
Exception per project style), log the error including entity identifier/content
and the exception, increment a new failed_count on error and continue so
remaining entities are processed, and only add to existing_contents and
added_count when the write succeeds; also update the final summary/log to report
added_count and failed_count.
- Line 36: The module-level call log("Script started") causes a side effect at
import time; move that call into the script's entrypoint so it runs only when
executed (e.g., place log("Script started") at the start of main()). Update the
function named main() in this module to include the log call before the try
block so importing the module no longer triggers the log; mirror the same change
applied in retrieve_entities.py.
In
`@platform-integrations/codex/plugins/evolve-lite/skills/recall/scripts/retrieve_entities.py`:
- Around line 64-70: The code block in retrieve_entities.py currently logs all
environment variables via log(...) which can create noisy/unsafe production
logs; modify the logic in the function that calls log (the environment dump
loop) to only run when a debug flag is enabled (e.g., an existing DEBUG or a new
verbose flag) or remove it entirely before release; specifically update the loop
that iterates os.environ.items() so it is gated by a check like if DEBUG: ...
(or remove the block), and ensure the masking behavior for keys containing
"PASSWORD","SECRET","TOKEN","KEY","API" remains if you keep debug logging.
- Line 27: The module currently calls log("Script started") at import time; move
that statement into the program entry so it only runs when executed. Either
relocate the log(...) call to the start of the main() function (e.g., add
log("Script started") as the first line inside main()) or remove it from module
scope and wrap the script invocation with if __name__ == "__main__": so the log
runs only on direct execution; update references to the log symbol and main()
accordingly.
In `@tests/platform_integrations/test_idempotency.py`:
- Around line 180-187: The test currently does direct indexing into
second_hooks["hooks"]["UserPromptSubmit"], which can raise KeyError if the
structure is missing; change the access to use defensive .get calls (e.g.,
prompt_hooks = second_hooks.get("hooks", {}).get("UserPromptSubmit", [])) and
keep using group.get("hooks", []) when building evolve_hooks so the list
comprehension for evolve_hooks and the subsequent assert won't crash on
malformed input—mirror the defensive pattern used in _hooks_have_evolve_recall
to locate the issue around the prompt_hooks and evolve_hooks variables.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: a28a611c-7029-4457-8144-71b8c77a7ffe
📒 Files selected for processing (12)
- platform-integrations/INSTALL_SPEC.md
- platform-integrations/codex/plugins/evolve-lite/.codex-plugin/plugin.json
- platform-integrations/codex/plugins/evolve-lite/README.md
- platform-integrations/codex/plugins/evolve-lite/skills/learn/SKILL.md
- platform-integrations/codex/plugins/evolve-lite/skills/learn/scripts/save_entities.py
- platform-integrations/codex/plugins/evolve-lite/skills/recall/SKILL.md
- platform-integrations/codex/plugins/evolve-lite/skills/recall/scripts/retrieve_entities.py
- platform-integrations/install.sh
- tests/platform_integrations/conftest.py
- tests/platform_integrations/test_codex.py
- tests/platform_integrations/test_idempotency.py
- tests/platform_integrations/test_preservation.py
🧹 Nitpick comments (2)
platform-integrations/install.sh (2)
856-861: Hard dependency on Claude plugin source tree.

The Codex installer requires the Claude plugin's `lib/` directory to exist for the shared `entity_io.py`. While this is documented and fails clearly, consider documenting this dependency more prominently or adding a note in the error message about why the Claude plugin must be present.

📝 Clearer error message

```diff
 shared_lib = Path(source_dir) / "platform-integrations" / "claude" / "plugins" / "evolve-lite" / "lib"
 if not shared_lib.is_dir():
-    error(f"Shared lib not found: {shared_lib} — is the Claude plugin present in the source tree?")
+    error(f"Shared lib not found: {shared_lib}")
+    error("The Codex plugin requires entity_io.py from the Claude plugin source tree.")
+    error("Ensure platform-integrations/claude/plugins/evolve-lite/lib/ exists.")
     sys.exit(1)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@platform-integrations/install.sh` around lines 856 - 861, The installer currently aborts when shared_lib (Path(source_dir) / "platform-integrations" / "claude" / "plugins" / "evolve-lite" / "lib") is missing; update the error call so the message explains that the Claude plugin's lib/ (which contains shared entity_io.py used by Codex) is a required dependency and suggest actions (e.g., add the Claude plugin to the source tree or point SOURCE_DIR correctly), and keep the failing behavior (sys.exit(1)); ensure the message refers to shared_lib, plugin_target, and entity_io.py so it's easy to find in logs and add a short note in docs or README about this dependency.
328-336: Inconsistent error handling between `interface` and `plugins` fields.

If `interface` is not a dict, the code silently replaces it with an empty dict (lines 328-330). But if `plugins` is not a list, a `ValueError` is raised (line 336). Consider handling both consistently: either raise for both malformed fields or silently fix both.

♻️ Consistent handling suggestion

```diff
 interface = data.setdefault("interface", {})
 if not isinstance(interface, dict):
-    interface = {}
-    data["interface"] = interface
+    raise ValueError(f"{path} field 'interface' must be an object.")
 data.setdefault("name", "evolve-local")
 interface.setdefault("displayName", "Evolve Local Plugins")
 plugins = data.setdefault("plugins", [])
 if not isinstance(plugins, list):
     raise ValueError(f"{path} field 'plugins' must be an array.")
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@platform-integrations/install.sh` around lines 328 - 336, The code currently coalesces a non-dict interface into an empty dict while raising on a non-list plugins; make the handling consistent by validating interface the same way as plugins: if interface is not a dict, raise a ValueError (e.g. ValueError(f"{path} field 'interface' must be an object.")) instead of silently replacing it; locate the logic around data["interface"], interface = {}, data.setdefault("plugins", []), and the existing plugins ValueError and update the interface branch to perform a type check and raise the error with the same path-based message pattern.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@platform-integrations/install.sh`:
- Around line 856-861: The installer currently aborts when shared_lib
(Path(source_dir) / "platform-integrations" / "claude" / "plugins" /
"evolve-lite" / "lib") is missing; update the error call so the message explains
that the Claude plugin's lib/ (which contains shared entity_io.py used by Codex)
is a required dependency and suggest actions (e.g., add the Claude plugin to the
source tree or point SOURCE_DIR correctly), and keep the failing behavior
(sys.exit(1)); ensure the message refers to shared_lib, plugin_target, and
entity_io.py so it's easy to find in logs and add a short note in docs or README
about this dependency.
- Around line 328-336: The code currently coalesces a non-dict interface into an
empty dict while raising on a non-list plugins; make the handling consistent by
validating interface the same way as plugins: if interface is not a dict, raise
a ValueError (e.g. ValueError(f"{path} field 'interface' must be an object."))
instead of silently replacing it; locate the logic around data["interface"],
interface = {}, data.setdefault("plugins", []), and the existing plugins
ValueError and update the interface branch to perform a type check and raise the
error with the same path-based message pattern.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 2dc704fb-7b3e-45ae-945a-012543846c5a
📒 Files selected for processing (7)
- platform-integrations/INSTALL_SPEC.md
- platform-integrations/codex/plugins/evolve-lite/README.md
- platform-integrations/codex/plugins/evolve-lite/skills/recall/SKILL.md
- platform-integrations/install.sh
- tests/platform_integrations/test_codex.py
- tests/platform_integrations/test_idempotency.py
- tests/platform_integrations/test_preservation.py
✅ Files skipped from review due to trivial changes (1)
- platform-integrations/codex/plugins/evolve-lite/README.md
🚧 Files skipped from review as they are similar to previous changes (2)
- tests/platform_integrations/test_preservation.py
- tests/platform_integrations/test_codex.py
Actionable comments posted: 3
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@platform-integrations/install.sh`:
- Around line 402-407: The current logic replaces an entire UserPromptSubmit
group when detecting the Evolve hook via _group_contains_codex_recall_command;
instead, locate the matching group (the same detection using
_group_contains_codex_recall_command) and upsert only the Evolve hook inside
that group's hooks array/dict (merge by hook key/name) so other hooks and
matcher settings remain untouched, and on uninstall remove only the Evolve hook
entry from that group's hooks rather than deleting the whole group; apply the
same key-upsert approach when persisting to JSON files like mcp.json and
.roomodes.
- Around line 321-325: The code treats any falsy JSON ([], false, null, "") as
missing and overwrites user files; change the logic around read_json and
_default_codex_marketplace to only apply the default when read_json returns None
(meaning file missing/unreadable) and to raise a ValueError when read_json
returns a non-dict (e.g., list, boolean) to fail fast on schema mismatches; when
adding new Evolve keys (the existing fallback/merge logic for
interface/hooks/UserPromptSubmit), perform JSON key upserts into the existing
dict (update specific keys) instead of replacing or coercing the user's
container so user config is preserved.
- Around line 348-359: The hook _codex_recall_hook_command should export the
project entities path so the called retrieve_entities.py sees the same location
as find_entities_dir(); modify the generated shell to set
EVOLVE_ENTITIES_DIR="$d/.evolve/entities" (or cd "$d") just before exec python3
so the environment matches the lookup used by
claude/plugins/evolve-lite/lib/entity_io.py::find_entities_dir(), then exec
python3 "plugins/evolve-lite/skills/recall/scripts/retrieve_entities.py".
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 5cc9b99c-a50a-4875-9116-20c6c763f655
📒 Files selected for processing (5)
- platform-integrations/INSTALL_SPEC.md
- platform-integrations/codex/plugins/evolve-lite/README.md
- platform-integrations/codex/plugins/evolve-lite/skills/recall/SKILL.md
- platform-integrations/install.sh
- tests/platform_integrations/test_codex.py
✅ Files skipped from review due to trivial changes (2)
- platform-integrations/codex/plugins/evolve-lite/README.md
- platform-integrations/INSTALL_SPEC.md
🚧 Files skipped from review as they are similar to previous changes (1)
- tests/platform_integrations/test_codex.py
```python
data = read_json(path)
if not data:
    data = _default_codex_marketplace()
if not isinstance(data, dict):
    raise ValueError(f"{path} must contain a JSON object.")
```
Fail fast on unexpected Codex JSON shapes instead of resetting them.

`read_json()` can validly return `[]`, `false`, `null`, etc. The `if not data` branches convert those files into fresh objects, and the later `interface` / `hooks` / `UserPromptSubmit` fallback code rewrites wrong-typed containers to `{}` or `[]`. That silently drops parseable user config instead of surfacing the schema mismatch.
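The distinction matters because several falsy values are valid JSON documents. A quick sketch (the `coerce_marketplace` helper is hypothetical and assumes a `read_json` that returns `None` only when the file is absent or unreadable):

```python
import json


def coerce_marketplace(data):
    """Fail fast on wrong-typed content; default only on a truly absent file.
    `data` is the result of a hypothetical read_json(): None means missing."""
    if data is None:
        return {"plugins": []}  # file absent: safe to start fresh
    if not isinstance(data, dict):
        raise ValueError("marketplace must contain a JSON object")
    return data  # keep the user's object untouched


# Falsy-but-valid JSON documents that `if not data:` would wrongly reset:
for text in ("[]", "false", "0", '""'):
    value = json.loads(text)
    assert not value            # falsy in Python...
    assert value is not None    # ...yet a real, parseable document
```

Testing `data is None` instead of `not data` is what separates "the file was missing" from "the file held a document we don't understand", and only the first case justifies writing a default.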
🛡️ Suggested fix

```diff
 def upsert_codex_marketplace_entry(path, item):
     """Upsert a Codex marketplace plugin entry by name."""
     data = read_json(path)
-    if not data:
+    if data == {}:
         data = _default_codex_marketplace()
     if not isinstance(data, dict):
         raise ValueError(f"{path} must contain a JSON object.")
     interface = data.setdefault("interface", {})
     if not isinstance(interface, dict):
-        interface = {}
-        data["interface"] = interface
+        raise ValueError(f"{path} field 'interface' must be an object.")
@@
 def upsert_codex_user_prompt_hook(path, group):
     """Upsert the Evolve UserPromptSubmit hook into a Codex hooks.json file."""
     data = read_json(path)
-    if not data:
+    if data == {}:
         data = {"hooks": {}}
     if not isinstance(data, dict):
         raise ValueError(f"{path} must contain a JSON object.")
     hooks = data.setdefault("hooks", {})
     if not isinstance(hooks, dict):
-        hooks = {}
-        data["hooks"] = hooks
+        raise ValueError(f"{path} field 'hooks' must be an object.")
     groups = hooks.setdefault("UserPromptSubmit", [])
     if not isinstance(groups, list):
-        groups = []
-        hooks["UserPromptSubmit"] = groups
+        raise ValueError(f"{path} field 'hooks.UserPromptSubmit' must be an array.")
@@
 def remove_codex_user_prompt_hook(path):
     """Remove the Evolve UserPromptSubmit hook from a Codex hooks.json file."""
@@
     data = read_json(path)
+    if not isinstance(data, dict):
+        raise ValueError(f"{path} must contain a JSON object.")
     hooks = data.get("hooks")
```

Also applies to: 327-330, 386-400, 417-420
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@platform-integrations/install.sh` around lines 321 - 325, The code treats any
falsy JSON ([], false, null, "") as missing and overwrites user files; change
the logic around read_json and _default_codex_marketplace to only apply the
default when read_json returns None (meaning file missing/unreadable) and to
raise a ValueError when read_json returns a non-dict (e.g., list, boolean) to
fail fast on schema mismatches; when adding new Evolve keys (the existing
fallback/merge logic for interface/hooks/UserPromptSubmit), perform JSON key
upserts into the existing dict (update specific keys) instead of replacing or
coercing the user's container so user config is preserved.
```python
def _codex_recall_hook_command():
    return (
        "sh -lc '"
        'd=\"$PWD\"; '
        "while :; do "
        'candidate=\"$d/plugins/evolve-lite/skills/recall/scripts/retrieve_entities.py\"; '
        'if [ -f \"$candidate\" ]; then exec python3 \"$candidate\"; fi; '
        '[ \"$d\" = \"/\" ] && break; '
        'd=\"$(dirname \"$d\")\"; '
        "done; "
        "exit 1'"
    )
```
Export the project entities path before invoking the recall script.

The walk-up loop finds `plugins/evolve-lite/.../retrieve_entities.py`, but the shared `platform-integrations/claude/plugins/evolve-lite/lib/entity_io.py::find_entities_dir()` only checks `EVOLVE_ENTITIES_DIR`, `CLAUDE_PROJECT_ROOT`, then `.evolve/entities` relative to the current working directory. From `repo/subdir`, this hook resolves the script in `repo/plugins/...` and then looks for `repo/subdir/.evolve/entities`, so automatic recall returns no project entities. Set `EVOLVE_ENTITIES_DIR="$d/.evolve/entities"` (or `cd "$d"`) before `exec python3 ...`.
🔧 Suggested fix

```diff
 def _codex_recall_hook_command():
     return (
         "sh -lc '"
         'd=\"$PWD\"; '
         "while :; do "
         'candidate=\"$d/plugins/evolve-lite/skills/recall/scripts/retrieve_entities.py\"; '
-        'if [ -f \"$candidate\" ]; then exec python3 \"$candidate\"; fi; '
+        'if [ -f \"$candidate\" ]; then EVOLVE_ENTITIES_DIR=\"$d/.evolve/entities\" exec python3 \"$candidate\"; fi; '
         '[ \"$d\" = \"/\" ] && break; '
         'd=\"$(dirname \"$d\")\"; '
         "done; "
         "exit 1'"
     )
```
Verify each finding against the current code and only fix it if needed.
In `@platform-integrations/install.sh` around lines 348 - 359, The hook
_codex_recall_hook_command should export the project entities path so the called
retrieve_entities.py sees the same location as find_entities_dir(); modify the
generated shell to set EVOLVE_ENTITIES_DIR="$d/.evolve/entities" (or cd "$d")
just before exec python3 so the environment matches the lookup used by
claude/plugins/evolve-lite/lib/entity_io.py::find_entities_dir(), then exec
python3 "plugins/evolve-lite/skills/recall/scripts/retrieve_entities.py".
```python
for index, existing in enumerate(groups):
    if isinstance(existing, dict) and _group_contains_codex_recall_command(existing):
        groups[index] = group
        break
else:
    groups.append(group)
```
Only replace/remove the Evolve hook, not the whole UserPromptSubmit group.
A valid group can contain multiple hooks. Replacing the matched group on install and deleting the whole group on uninstall will remove user hooks and matcher settings that happen to live next to the Evolve command.
🧩 Suggested fix

```diff
-    for index, existing in enumerate(groups):
-        if isinstance(existing, dict) and _group_contains_codex_recall_command(existing):
-            groups[index] = group
-            break
+    evolve_hook = group["hooks"][0]
+    for existing in groups:
+        if not isinstance(existing, dict):
+            continue
+        existing_hooks = existing.get("hooks", [])
+        if not isinstance(existing_hooks, list):
+            raise ValueError(f"{path} group 'hooks' must be an array.")
+        for i, hook in enumerate(existing_hooks):
+            if isinstance(hook, dict) and _is_codex_recall_command(hook.get("command")):
+                existing_hooks[i] = evolve_hook
+                break
+        else:
+            continue
+        break
     else:
         groups.append(group)
@@
-    hooks["UserPromptSubmit"] = [
-        group for group in groups if not (isinstance(group, dict) and _group_contains_codex_recall_command(group))
-    ]
+    kept_groups = []
+    for group in groups:
+        if not isinstance(group, dict):
+            kept_groups.append(group)
+            continue
+        group_hooks = group.get("hooks", [])
+        if not isinstance(group_hooks, list):
+            raise ValueError(f"{path} group 'hooks' must be an array.")
+        group["hooks"] = [
+            hook for hook in group_hooks
+            if not (isinstance(hook, dict) and _is_codex_recall_command(hook.get("command")))
+        ]
+        if group["hooks"]:
+            kept_groups.append(group)
+    hooks["UserPromptSubmit"] = kept_groups
     if not hooks["UserPromptSubmit"]:
         hooks.pop("UserPromptSubmit", None)
```

Also applies to: 426-430
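The hook-level upsert and removal described in this comment can be illustrated generically (a sketch under this review's assumptions; `is_evolve_hook`, `upsert_hook`, and `remove_hook` are hypothetical stand-ins for the installer's detection helpers):

```python
def upsert_hook(groups: list, new_hook: dict, is_evolve_hook) -> None:
    """Replace only the matching hook inside its group; append a new group
    only when no existing group carries an Evolve hook. Sibling hooks and
    matcher settings in the group survive untouched."""
    for group in groups:
        hooks = group.get("hooks", []) if isinstance(group, dict) else []
        for i, hook in enumerate(hooks):
            if isinstance(hook, dict) and is_evolve_hook(hook):
                hooks[i] = new_hook
                return
    groups.append({"hooks": [new_hook]})


def remove_hook(groups: list, is_evolve_hook) -> list:
    """Strip only Evolve hooks; drop a group only once it becomes empty."""
    kept = []
    for group in groups:
        if not isinstance(group, dict):
            kept.append(group)
            continue
        group["hooks"] = [
            h for h in group.get("hooks", [])
            if not (isinstance(h, dict) and is_evolve_hook(h))
        ]
        if group["hooks"]:
            kept.append(group)
    return kept
```

The invariant both functions preserve is that user-owned hooks sharing a group with the Evolve hook are never rewritten or dropped, which is exactly what whole-group replacement violates.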
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@platform-integrations/install.sh` around lines 402 - 407, The current logic
replaces an entire UserPromptSubmit group when detecting the Evolve hook via
_group_contains_codex_recall_command; instead, locate the matching group (the
same detection using _group_contains_codex_recall_command) and upsert only the
Evolve hook inside that group's hooks array/dict (merge by hook key/name) so
other hooks and matcher settings remain untouched, and on uninstall remove only
the Evolve hook entry from that group's hooks rather than deleting the whole
group; apply the same key-upsert approach when persisting to JSON files like
mcp.json and .roomodes.
Plugins that code and run on Codex.
Summary by CodeRabbit

- New Features
- Documentation
- Tests