
fix: wire Anthropic API as model provider for WASM agents#1449

Open
h22fred wants to merge 2 commits into ruvnet:main from h22fred:fix/wasm-agent-model-provider

Conversation


@h22fred h22fred commented Mar 26, 2026

Summary

  • WASM agents (wasm_agent_create / wasm_gallery_create) currently echo input back instead of performing LLM inference
  • Root cause: createWasmAgent() never calls agent.set_model_provider(), leaving the agent in echo-only test mode
  • The WasmAgent WASM module already supports set_model_provider(callback) — it was simply never wired up

Changes

  • src/ruvector/agent-wasm.ts: Added createAnthropicProvider(), which calls the Anthropic Messages API via fetch(), and wired it into createWasmAgent() when ANTHROPIC_API_KEY is set
  • __tests__/ruvector/agent-wasm.test.ts: Added test for model provider being set when API key is present

How it works

createWasmAgent()
  → new WasmAgent(config)
  → createAnthropicProvider(modelId, instructions)  // NEW
  → agent.set_model_provider(provider)              // NEW
  → prompts now go to real LLM instead of echoing

When ANTHROPIC_API_KEY is not set, the agent falls back to echo mode (backwards compatible).
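A minimal sketch of that wiring. The endpoint, headers, and request body follow the public Anthropic Messages API; the function signature, `max_tokens` value, and the commented-out `createWasmAgent()` snippet are assumptions, not the PR's exact code.

```typescript
// Sketch only: mirrors the PR's described createAnthropicProvider(),
// not its verbatim implementation.
type ModelProvider = (prompt: string) => Promise<string>;

function createAnthropicProvider(modelId: string, instructions: string): ModelProvider {
  const apiKey = process.env.ANTHROPIC_API_KEY;
  return async (prompt: string): Promise<string> => {
    const res = await fetch("https://api.anthropic.com/v1/messages", {
      method: "POST",
      headers: {
        "x-api-key": apiKey ?? "",
        "anthropic-version": "2023-06-01",
        "content-type": "application/json",
      },
      body: JSON.stringify({
        model: modelId,
        max_tokens: 1024, // assumed default
        system: instructions,
        messages: [{ role: "user", content: prompt }],
      }),
    });
    if (!res.ok) {
      throw new Error(`Anthropic API error: ${res.status}`);
    }
    const data = (await res.json()) as { content: Array<{ text: string }> };
    return data.content[0].text;
  };
}

// Inside createWasmAgent(), attach the provider only when a key is present,
// so agents without ANTHROPIC_API_KEY keep falling back to echo mode:
//
//   if (process.env.ANTHROPIC_API_KEY) {
//     agent.set_model_provider(createAnthropicProvider(config.model, config.instructions));
//   }
```

Gating on the key at wiring time (rather than failing at prompt time) is what keeps the change backwards compatible.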

Additional context

Also discovered during testing:

  • @ruvector/rvagent-wasm is listed in package.json but has ESM resolution issues (the WASM package is missing an exports field — this needs a fix in @ruvector/rvagent-wasm itself; adding an index.js symlink works around it)
  • agent_spawn + task_assign workflows don't execute — agents stay idle (separate issue)

Fixes #1448

Test plan

  • Verify existing tests pass with mocked WASM module
  • Test with ANTHROPIC_API_KEY set: wasm_agent_create → wasm_agent_prompt should return a real LLM response
  • Test without ANTHROPIC_API_KEY: should fall back to echo mode (no regression)
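The two behaviors under test can be sketched with a hand-rolled fake in place of the mocked WASM module. `FakeWasmAgent` and `wireAgent` below are stand-ins invented for illustration; only the echo-vs-provider contract they model comes from the PR.

```typescript
// Stand-in for the WASM agent: echoes input unless a provider is attached.
class FakeWasmAgent {
  private provider: ((p: string) => Promise<string>) | null = null;

  set_model_provider(cb: (p: string) => Promise<string>): void {
    this.provider = cb;
  }

  async prompt(input: string): Promise<string> {
    return this.provider ? this.provider(input) : input; // echo fallback
  }
}

// Mirrors the new wiring: attach a provider only when a key is present.
function wireAgent(agent: FakeWasmAgent, apiKey?: string): FakeWasmAgent {
  if (apiKey) {
    agent.set_model_provider(async () => "llm-response");
  }
  return agent;
}

async function main(): Promise<void> {
  // With a key: prompts go through the provider, not the echo path.
  const withKey = await wireAgent(new FakeWasmAgent(), "sk-test").prompt("hello");
  console.assert(withKey === "llm-response", "provider path should be used");

  // Without a key: echo mode is preserved (no regression).
  const noKey = await wireAgent(new FakeWasmAgent()).prompt("hello");
  console.assert(noKey === "hello", "echo fallback should be preserved");
}
main();
```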

🤖 Generated with Claude Code

WASM agents created via wasm_agent_create/wasm_gallery_create echo
input back instead of performing LLM inference because no model
provider is attached after construction.

The WasmAgent WASM module supports set_model_provider(callback) which
accepts a JS async function for LLM calls, but createWasmAgent() never
called it — leaving the agent in echo-only test mode.

This commit:
- Adds createAnthropicProvider() that calls the Anthropic Messages API
- Calls agent.set_model_provider() in createWasmAgent() when
  ANTHROPIC_API_KEY is set
- Adds test coverage for the provider wiring

Fixes ruvnet#1448

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…M execution

The MCP tool `hooks_worker-dispatch` was using setTimeout to fake
progress (0→50→100% over 1.5s) instead of actually executing workers.

The HeadlessWorkerExecutor class already exists and spawns
`claude --print` as a subprocess with process pooling, timeouts,
context building, and output parsing — but was never called from
the MCP tool layer.

This commit:
- Imports HeadlessWorkerExecutor in hooks-tools.ts
- Creates a lazy-initialized executor singleton
- Routes headless worker types (audit, optimize, testgaps, document,
  ultralearn, refactor, deepdive, predict) through the real executor
- Keeps local workers (map, consolidate, benchmark, preload) as
  immediate completion (they don't need AI)
- Falls back gracefully with a warning when Claude Code CLI is not
  available
- Returns actual output and parsedOutput in worker status
- Supports both blocking and background execution modes
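The lazy singleton and type routing described above can be sketched as follows. `HeadlessWorkerExecutor` here is a stub standing in for the PR's real class (which spawns `claude --print` with pooling and timeouts); the function names and return shape are assumptions.

```typescript
// Stub for the real HeadlessWorkerExecutor, for illustration only.
class HeadlessWorkerExecutor {
  async execute(workerType: string, task: string): Promise<string> {
    return `[${workerType}] ${task}`; // real class returns parsed CLI output
  }
}

// Worker types that need real AI execution via the headless executor.
const HEADLESS_TYPES = new Set([
  "audit", "optimize", "testgaps", "document",
  "ultralearn", "refactor", "deepdive", "predict",
]);

let executor: HeadlessWorkerExecutor | null = null;

// Lazy-initialized singleton: the executor (and its process pool) is only
// created the first time a headless worker is actually dispatched.
function getExecutor(): HeadlessWorkerExecutor {
  if (executor === null) {
    executor = new HeadlessWorkerExecutor();
  }
  return executor;
}

async function dispatchWorker(
  workerType: string,
  task: string,
): Promise<{ status: string; output: string | null }> {
  if (!HEADLESS_TYPES.has(workerType)) {
    // Local workers (map, consolidate, benchmark, preload) complete
    // immediately without AI.
    return { status: "completed", output: null };
  }
  const output = await getExecutor().execute(workerType, task);
  return { status: "completed", output };
}
```

Routing on a fixed set of worker types keeps the local workers' fast path unchanged while sending only AI-backed types through the subprocess executor.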

Fixes ruvnet#1448

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>


Development

Successfully merging this pull request may close these issues.

agent_spawn results not retrievable — no way to get agent output after completion
