feat: Add OpenClaw skill for iloom CLI#642

Open
NoahCardoza wants to merge 13 commits into iloom-ai:main from NoahCardoza:feature/626-openclaw-skill
Conversation

@NoahCardoza
Contributor

@NoahCardoza NoahCardoza commented Feb 19, 2026

Status

TODO

  • Create il openclaw command to install OpenClaw skill.
  • Research further integrations with plugins and the ability to mention the OpenClaw agent in issues to trigger iloom usage.

Summary

Implements the OpenClaw skill that enables AI agents to manage iloom workspaces programmatically. This skill provides comprehensive documentation of iloom's CLI commands, flags, and interaction patterns.

Closes #626

Changes

  • Created openclaw-skill/ directory structure at repository root
  • Added SKILL.md with OpenClaw frontmatter and quick-start patterns
  • Created reference documentation files:
    • core-workflow.md - init, start, finish, cleanup, list commands
    • development-commands.md - spin, commit, rebase, build, test, etc.
    • planning-and-issues.md - plan, add-issue, enhance, issues
    • configuration.md - settings, env vars, global flags
    • non-interactive-patterns.md - PTY, background, autonomous operation
  • Documented agent patterns for initialization, planning, and streaming
  • Added safety rules for session management

Key Features

  • Init-first workflow: Clear documentation of project setup sequence
  • Ideation routing: Directs agents to use il plan for feature decomposition
  • PTY patterns: All commands documented with pty:true requirement
  • Background sessions: Commands that launch Claude use background:true with monitoring
  • Decision bypass: Every interactive prompt mapped to its bypass flag
  • Autonomous operation: Recommended flag combinations for headless workflows

Test Plan

  • Verify SKILL.md has valid OpenClaw frontmatter
  • Confirm all reference files are present and complete
  • Test that skill documentation accurately reflects current iloom CLI behavior
  • Validate PTY and background session patterns work as documented
  • Confirm autonomous flag combinations successfully bypass prompts

🤖 Generated with Claude Code

@acreeger
Collaborator

@NoahCardoza is this one ready for review?

@NoahCardoza
Contributor Author

NoahCardoza commented Feb 23, 2026

@acreeger I forgot to link #635 as a blocker. I'm waiting for this to be merged so I can update the instructions and test with these flags.

Also, in my testing, every so often OpenClaw forgets to use --no-terminal and --no-code... and even when they're set as defaults, the windows seem to sometimes open anyway (unless OpenClaw itself is explicitly opening them). I need to do more thorough debugging and reproduction to be sure. Other than that, it's been working very well.

@acreeger
Collaborator

acreeger commented Feb 23, 2026

I'm waiting for this to be merged

@NoahCardoza Done!

@NoahCardoza NoahCardoza force-pushed the feature/626-openclaw-skill branch from b560abd to f76bcca Compare February 24, 2026 04:29
@NoahCardoza NoahCardoza marked this pull request as ready for review February 24, 2026 05:23
@NoahCardoza
Contributor Author

@acreeger I believe I've been able to test most of the commands. And as you can see, I actually generated most of this via prompts to my OpenClaw assistant @helixclaw.

We can always refine and fine-tune. It's probably good to get this in so the skill documentation can grow with the new features you're working on.

Down the line, I'd like to look into how we can have OpenClaw respond to mentions in PRs and issues, but I think that could be accomplished via webhook without any modifications to iloom itself.

@acreeger
Collaborator

acreeger commented Feb 24, 2026

Awesome! Let me take a look and see how much the changes affect the normal flow. If it's minimal, I can merge it super quickly. (But not tonight, because I'm on the East Coast.)

@NoahCardoza
Contributor Author

NoahCardoza commented Feb 24, 2026

One thing I realized this morning while monitoring OpenClaw's tool calls was that it didn't realize the epic I assigned it would be fully completed using a swarm, and it essentially replicated that functionality itself. I should update the docs to explain this. Is there a flag to auto-accept the swarm prompt? (I haven't looked yet.) I also felt like it was polling too often, which I think contributed to a ballooning context window (I'm not sure why it wasn't auto-compacting; it was using 200k/200k 😬). We should explain that the spin command takes a while and that it should probably only poll once a minute, or just wait until the process ends and trigger a notification to itself.

I noticed it was also trying to write/fix files and run tests... I'm not exactly sure why. Maybe it saw that iloom encountered failing tests and tried to solve them itself. Then, it ended up committing with --no-verify...

Normally it's performed very well, but this morning it just went haywire. I'm going to try to fine-tune the skill to account for these issues and instruct it to use iloom exclusively when instructed, since iloom can handle the whole development pipeline on its own.

Edit

I just looked for the "auto swarm" flag and found --epic. I'll need to update the PR to mention this flag in the skill.

@NoahCardoza
Contributor Author

NoahCardoza commented Feb 25, 2026

It seems like evaluating the whole JSON stream blows up the context window, and for some reason compaction doesn't seem to happen automatically when monitoring the iloom process. At the moment, I've instructed OpenClaw to run il spin with --yolo, which means maybe we shouldn't have it read the JSON stream at all, since it can't interact anyway. I wonder, would there be a way to surface the questions to the user? Maybe by using an MCP server to send the question to OpenClaw and have it pass it along to the user?

Regardless, it seems like we have two overarching approaches:

  1. Monitor-based: Have OpenClaw monitor the output using the process.poll tool and provide updates to the user at some defined interval. There could be multiple approaches to this besides just polling.

  1. Tail: Instruct OpenClaw to tail only the last few lines at some predefined interval to keep tabs without overwhelming the context window.
    2. Heuristics: Instruct OpenClaw to use jq with some filtering logic to remove file writes/edits/tool calls from the output, etc. Maybe filter everything except assistant messages?
    3. Sub-agent: Delegate the monitoring to a subagent using a cheaper model. Instruct it to only notify the main agent if something interesting happens.
    4. Hybrid: Apply heuristics and then use the sub-agent to determine if the main agent should be notified.
  2. Notification-based: Have OpenClaw register a notification to itself when the il spin process finishes.

  1. Special Tooling: Integrate more tightly with OpenClaw and support a mode where questions from the architecture and planning agents are sent off as notifications to OpenClaw. We'd also need a way for OpenClaw to pass a message back with the user's response, possibly augmenting it with additional information.

I'll focus on 1.1 since it solves the immediate problem, but I'll continue exploring the other approaches.
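For reference, approach 1.1 could look roughly like this as a shell sketch; the log path, the one-minute interval, and the assumption that il spin's output is redirected to a file are all illustrative, not part of the current skill:

```shell
# Hypothetical sketch of the tail-based monitor (approach 1.1).
# Assumes il spin's stream is redirected to a log file; the path and
# the polling interval are placeholders.
il spin --yolo > /tmp/iloom-spin.log 2>&1 &
spin_pid=$!

while kill -0 "$spin_pid" 2>/dev/null; do
  sleep 60                        # poll roughly once a minute
  tail -n 5 /tmp/iloom-spin.log   # only the last few lines hit the context
done
echo "il spin finished"
```

The kill -0 check is just a portable "is this PID still alive" probe, so the loop ends on its own when the spin process exits.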

@acreeger
Collaborator

I have no idea why it's reading that JSON stream either - the yolo flag will simply use interactive mode, but with dangerously skipped permissions, and some changes to the system prompt to continue without asking for any help from the user.

I would not try to solve the problem of getting questions answered. The real solution for that is to use the SDK, but then you can't use your Anthropic subscription; you're limited to API usage only, which will be prohibitively expensive for most people.

I would encourage the use of the hub issues to review the assumptions.

If you were to do it, you would wanna run it in headless mode with -p, redirect the JSON stream to a file, have the claw just wait for that instance to shut down, tail the JSON to get the input required from the user or the latest status, get the answer from the user, then resume the agent with a prompt that contains the answer. Spin will actually auto-resume the last session; however, you can't pass a prompt to it. If you could, you could pass the prompt with the answer.

If that makes any sense at all 😂

@NoahCardoza
Contributor Author

I was actually playing with that exact flow with plain Claude after I posted the comment, and I was initially thinking we could add the ability for the spin command to accept a prompt when resuming. However, I'm leaning towards using a special MCP server that would only run when it's detected that OpenClaw is controlling it.

  1. ask_human
    This tool would accept a question, just like the built-in tool. It would write it to a temp directory as question.json. Then a notification would be sent to OpenClaw to read this question file from Claude, which would also contain some metadata about the related loom. The notification would direct OpenClaw to get an answer from the user and write it to answer.md within the temp folder. The tool would then poll that file until the answer appeared, blocking the Claude process until then.
  2. (optional) update_human This may be a way to avoid the need for OpenClaw to poll the process. We could instruct the "main loom agent" (not sure you have a term for that) to send updates to the "controller" as it makes progress. Then we'd just inform OpenClaw that it doesn't need to monitor the process; it will receive notifications. All it needs to do is set up a notification for when the process ends, plus maybe a health check every 10 minutes in case it hangs for some reason.

We could also use a temporary HTTP server to receive the answer back from OpenClaw.
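A minimal sketch of the file-based handshake for ask_human (the question.json / answer.md layout follows the description above; the payload, polling interval, and notification step are assumptions):

```shell
# Hypothetical ask_human handshake: write the question, then block until
# the controller (OpenClaw) writes the user's answer.
qdir=$(mktemp -d)
printf '{"question":"Which remote should I push to?"}' > "$qdir/question.json"
# ...a notification pointing OpenClaw at "$qdir/question.json" would go here...

# Block the Claude process until answer.md appears and is non-empty.
while [ ! -s "$qdir/answer.md" ]; do
  sleep 5
done
cat "$qdir/answer.md"
```

The -s test covers both "file doesn't exist yet" and "file created but still empty", so a partially written answer file doesn't unblock the tool prematurely.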

What do you think?

For now, I've found this jq command works well to filter out only the assistant messages while also flagging log output from the iloom process (in the case of a rebase or finish command). This should tremendously reduce the bloat in the context window when monitoring.

il spin | jq -crRC '. as $raw | try (fromjson | select(.type == "assistant").message.content[] | select(.text).text) catch "iloom: "+ $raw'
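Broken out with comments, the same filter looks like this (minus the color flag; the two sample input lines are made up to exercise both branches):

```shell
# Feed one assistant JSON event and one raw iloom log line through the filter.
printf '%s\n' \
  '{"type":"assistant","message":{"content":[{"text":"Running tests..."}]}}' \
  'Rebasing onto main' |
jq -crR '. as $raw
  | try (fromjson                      # parse each input line as JSON...
      | select(.type == "assistant")   # keep only assistant events
      | .message.content[]
      | select(.text).text)            # emit just the plain-text blocks
    catch ("iloom: " + $raw)'          # non-JSON lines are iloom log output
```

The first line parses and prints its text; the second fails fromjson, so the catch branch prefixes the raw line with iloom:.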

NoahCardoza and others added 13 commits February 24, 2026 23:40
Refs iloom-ai#626

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…ning, and streaming

- Add initialization.md reference with full settings schema and manual setup guide
- Replace il init recommendation with direct settings file creation for AI agents
- Update plan/spin commands to use --print --json-stream in background mode
- Document sizeable vs small change workflows (plan→review→start→spin vs inline start)
- Mark il init as human-only, not recommended for AI agents
- Add 'GitHub Remote Configuration' section to SKILL.md explaining fork
  workflows and why agents should ask users instead of auto-configuring
- Document settings.local.json for per-developer remote preferences
- Add --no-terminal to all autonomous start patterns (SKILL.md and
  non-interactive-patterns.md)
- Add fork workflow step to initialization.md manual setup guide
- Add terminal window bypass to decision bypass map

Closes #7
Update all skill docs to reflect upstream v0.10.0 adding --json-stream
support to commit, finish, and rebase commands:

- SKILL.md: update finish/commit examples with --json-stream + background,
  expand safety rule #2 and Important note to cover all extended commands
- non-interactive-patterns.md: add 'Background Commands — Extended Operations'
  section for commit/finish/rebase, move rebase out of 'Foreground Only',
  add autonomous rebase pattern, update JSON Output table
- development-commands.md: add --json-stream flag to commit and rebase tables,
  update examples to use background mode
- core-workflow.md: add --json-stream flag to finish table, update examples

Refs #3
--json and --json-stream are mutually exclusive (per upstream iloom-ai#635).
Prefer --json-stream for these commands since it provides incremental
progress visibility. --json remains for commands that don't support
--json-stream (list, cleanup, start, etc.).

Refs #3
- Clarify issueManagement.github.remote vs mergeBehavior.remote semantics:
  issueManagement = canonical repo for issues/PRs/comments,
  mergeBehavior = where branches are pushed (cross-fork PR support)
- Add table showing common workflow patterns (fork, direct, fork+local issues)
- Document that GitHub labels must exist before use; guidance on creating
  labels when user has write/triage permissions, or listing existing ones
- Document --json and --json-stream mutual exclusivity in safety rules
- Add required prompt argument to all il plan --yolo examples
- Update planning references with prompt requirement note
Replace blanket 'always use pty:true' guidance with three categories:
- No PTY needed: list, issues, projects, recap, start --no-claude,
  cleanup, finish, commit --no-review, build, test, lint, etc.
- Background required: plan, spin, start (with Claude), summary, enhance
- Foreground PTY only: init, shell, rebase (interactive, not for agents)

Updates both SKILL.md and references/non-interactive-patterns.md.

Closes #4
@NoahCardoza NoahCardoza force-pushed the feature/626-openclaw-skill branch from 90df2ff to 48376c1 Compare February 25, 2026 07:41
@acreeger
Collaborator

@NoahCardoza what are we blocked on here - what do you need from me? Just a review/merge?

@NoahCardoza
Contributor Author

NoahCardoza commented Mar 2, 2026

I think we could merge as is, but we may want to warn that it seems to eat through tokens; I think more optimizations are in order. However, the linking to OpenClaw works and it's able to use iloom. It's just that sometimes it tries to take matters into its own hands and kills the iloom process when it's been running for a long time, especially in swarm mode, where the subprocesses are spun up and there isn't much output in the parent process...

I'll continue to tinker with it. It's up to you if we want to get it in or keep trying to optimize it before introducing it into main.

Maybe I need to simplify the instructions and just give it the happy-path flows instead of everything about iloom, when it really only needs to know how to use plan, start, spin, finish, and possibly cleanup (in that order).

@acreeger
Collaborator

acreeger commented Mar 4, 2026

Let's get it in there first and then optimize. I've already made some optimizations to swarm mode. It no longer uses sub-processes and uses a simpler workflow.

If you think the documentation could be simplified though I would do that. Thanks!!

Development

Successfully merging this pull request may close these issues.

Epic: Create OpenClaw Skill for iloom CLI

3 participants