Conversation
Adds integration with Claude Code as an AI model provider, enabling users to chat with Claude Code alongside other AI models in Chorus.
Additional Comments (3)
- `src-tauri/src/command.rs`, lines 789-790 (syntax): the `HOME` env var doesn't exist on Windows; use `USERPROFILE` instead
- `src-tauri/src/command.rs`, lines 841-843 (syntax): the `HOME` env var doesn't exist on Windows; use `USERPROFILE` instead
- `src/core/chorus/ModelProviders/ProviderClaudeCode.ts`, lines 68-75 (syntax): the return type of `streamResponse` should be `Promise<ModelDisabled | void>` to match the `IProvider` interface, but this implementation only returns `Promise<void>`
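For the `HOME`/`USERPROFILE` comments above, a portable lookup can fall back between the two variables. This is a dependency-free sketch, not the PR's actual patch:

```rust
use std::env;
use std::path::PathBuf;

/// Resolve the user's home directory portably: Unix-like systems set
/// `HOME`, while Windows sets `USERPROFILE`.
fn home_dir() -> Option<PathBuf> {
    env::var_os("HOME")
        .or_else(|| env::var_os("USERPROFILE"))
        .map(PathBuf::from)
}
```

Crates like `dirs` provide the same lookup; the manual fallback just keeps the sketch self-contained.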
8 files reviewed, 3 comments
Force-pushed from bab23f3 to c9c0f32
This is exciting!
Trying it out locally: it's very possible that I'm "holding it wrong", but even with a logged-in Claude Code instance I wasn't able to get the output to show up in the Chorus dev build. I do get valid stream-json output from running `claude -p --output-format stream-json --verbose --permission-mode bypassPermissions "hello"` in the same terminal window, though.
```ts
switch (modelPart) {
    case "opus":
    case "opus-4.5":
        return "opus";
    case "sonnet":
    case "sonnet-4.5":
        return "sonnet";
    case "haiku":
        return "haiku";
    default:
        // Let Claude CLI use its default
        return undefined;
}
```
Any way we can make this a bit more flexible to future-proof it against future model version releases?
Good point! I have simplified this and just pass the model part directly to the Claude CLI.
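A sketch of what the simplified pass-through could look like. The helper name is hypothetical; the actual code just forwards the string to the CLI:

```rust
/// Hypothetical sketch: forward the user-supplied model part to the
/// Claude CLI unchanged and let the CLI validate it. Returning `None`
/// lets the CLI pick its default model.
fn model_arg(model_part: &str) -> Option<String> {
    let part = model_part.trim();
    if part.is_empty() || part == "default" {
        None
    } else {
        Some(part.to_string())
    }
}
```

New model versions (say, a future `opus-5`) then work without touching any mapping table.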
```rust
/// Check if Claude Code CLI is installed and authenticated
#[tauri::command]
pub fn check_claude_code_available() -> Result<serde_json::Value, String> {
```
…el mapping

- Use `which claude` to check CLI availability instead of credentials.json
- Disable Claude Code in model picker when CLI not installed
- Surface errors from stderr, non-zero exit codes, and error results
- Simplify model name mapping to pass through directly to CLI
Very strange! This isn't the case for me. Will take a closer look at this and get back.
Use spawn_blocking to run process checks on a separate thread pool, preventing the sync command from blocking Tauri's async runtime (which caused SQL queries to hang).
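The pattern here: run the blocking process check off the async runtime's worker threads. In Tauri that is `tauri::async_runtime::spawn_blocking`; the sketch below shows the same idea with plain std threads and a channel so it stays self-contained:

```rust
use std::process::Command;
use std::sync::mpsc;
use std::thread;

/// Run a blocking CLI availability check on a separate thread and
/// hand the result back over a channel. In the actual Tauri command
/// this role is played by `spawn_blocking`; std threads are used
/// here only to keep the sketch dependency-free.
fn check_claude_cli_async() -> mpsc::Receiver<bool> {
    let (tx, rx) = mpsc::channel();
    thread::spawn(move || {
        let available = Command::new("which")
            .arg("claude")
            .output()
            .map(|o| o.status.success())
            .unwrap_or(false);
        let _ = tx.send(available);
    });
    rx
}
```

The caller can await the receiver (or poll it) instead of blocking the runtime thread that also services other async work such as SQL queries.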
@bcongdon could you give this another spin? Should be working fine now.
Looks like this has regressed to `lockfileVersion: '6.0'`. Can you update your pnpm so we continue using lockfile version 9.0 (as on main)?
```sql
-- Add Claude (via Claude Code) model
-- This uses the local Claude Code CLI and subscription instead of an API key
INSERT OR REPLACE INTO models (id, display_name, is_enabled, supported_attachment_types) VALUES
    ('claude-code::default', 'Claude (via Claude Code)', 1, '["text", "webpage"]');
```
Not sure how you plan on using this, but do you think it's worth having the different models (Sonnet, Opus) exposed here?
…output

- Remove `mapModelName` function since we only expose one model and always use CLI default
- Remove `--verbose` flag from Claude Code CLI invocation to prevent tool calls from showing in chat
Implements clean, collapsible UI for displaying Claude Code tool calls:

- Group consecutive tool calls with summary (e.g., "Used 19 tools · Read (10×), Bash (5×)")
- Individual tool blocks with JSON syntax highlighting via CodeBlock
- Shared helper for extracting summaries from tool parameters
- Base64-encoded data attributes for robust HTML parsing

Enable Claude Code tools by removing `--tools=""` restriction and adding `--verbose` flag to expose tool_use blocks in stream output.
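The grouped summary described in the commit message ("Used 19 tools · Read (10×), Bash (5×)") can be sketched as a count-and-sort over a run of consecutive tool names. The output format is assumed from the commit message; this is not the actual component code:

```rust
/// Build a summary line like "Used 3 tools · Read (2x), Bash (1x)"
/// from a run of consecutive tool-call names.
fn summarize_tools(names: &[&str]) -> String {
    // Count occurrences, preserving first-seen order for ties.
    let mut counts: Vec<(&str, usize)> = Vec::new();
    for &name in names {
        match counts.iter_mut().find(|(n, _)| *n == name) {
            Some((_, c)) => *c += 1,
            None => counts.push((name, 1)),
        }
    }
    // Most-used tools first (stable sort keeps tie order).
    counts.sort_by(|a, b| b.1.cmp(&a.1));
    let parts: Vec<String> = counts
        .iter()
        .map(|(n, c)| format!("{} ({}x)", n, c))
        .collect();
    format!("Used {} tools · {}", names.len(), parts.join(", "))
}
```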
The tool call components were parsing and processing children on every render, causing severe performance issues during streaming when many tool calls were present.

Changes:

- Add `useMemo` to ToolCallGroup to cache parsed tool calls based on children prop
- Add `useMemo` to ToolCallGroup summary calculation
- Add `useMemo` to ToolCallBlock and ToolCallItem for `extractToolSummary` calls

This prevents expensive regex matching, base64 decoding, and child traversal from running on every render, which was causing the app to freeze.
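The `useMemo` fix caches expensive work keyed on the `children` prop and recomputes only when that input changes. The same idea in a framework-free Rust sketch, with hypothetical parse logic standing in for the real tool-call extraction:

```rust
/// Caches the result of an expensive parse, recomputing only when the
/// input changes -- the same idea as React's `useMemo` keyed on the
/// `children` prop.
struct MemoizedParse {
    last_input: Option<String>,
    last_output: Vec<String>,
}

impl MemoizedParse {
    fn new() -> Self {
        Self { last_input: None, last_output: Vec::new() }
    }

    /// Hypothetical "parse": split tool-call markers out of raw text.
    /// Repeated calls with identical input return the cached result.
    fn parse(&mut self, input: &str) -> &[String] {
        if self.last_input.as_deref() != Some(input) {
            self.last_output = input
                .split("[tool]")
                .filter(|s| !s.is_empty())
                .map(str::to_string)
                .collect();
            self.last_input = Some(input.to_string());
        }
        &self.last_output
    }
}
```

During streaming, renders with an unchanged prefix then skip the parse entirely instead of re-running regexes and base64 decoding every time.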
This is a great, wonderful idea! We'll then need to add Codex CLI too; that way we can use our regular subscription quotas in Chorus (similar to what Conductor does!)
@rmichelena - Thank you
Good point! On quota exceeded handling - wouldn't that error just bubble up to the chat normally? Curious what special handling Conductor does that we'd want to replicate.
@bcongdon Thanks for flagging this! Fixed now - tool calls are no longer rendered plainly.
Thanks for continuing to iterate on this! I haven't been able to determine why, but I seem to be getting a hanging issue again when I try to use this. Repro steps:
High-frequency Tauri event emission caused the app to freeze. Now we collect all Claude CLI output and emit only the final assistant message, avoiding the freeze entirely.
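The buffering approach can be sketched as: read every stream-json line from the child process's stdout, keep the last assistant payload, and emit a single event at the end. Substring matching here is a simplified stand-in for real JSON parsing:

```rust
use std::io::{BufRead, BufReader, Read};

/// Collect all stream-json lines from the CLI's stdout and return
/// only the final assistant message, instead of emitting one Tauri
/// event per line (which froze the UI at high frequency).
fn collect_final_message<R: Read>(stdout: R) -> Option<String> {
    let mut last_assistant = None;
    for line in BufReader::new(stdout).lines().flatten() {
        // Simplified check: the real code would parse each line as
        // JSON and keep the last `"type":"assistant"` payload.
        if line.contains("\"type\":\"assistant\"") {
            last_assistant = Some(line);
        }
    }
    last_assistant
}
```

This trades incremental streaming for stability: the frontend receives one event when the CLI exits rather than a burst of per-line events.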
Thanks for the repro details, that helped a lot. I've narrowed this down to an issue specifically around Tauri event emission. What I was able to confirm:
Claude Code is the only provider that goes through Rust + Tauri events due to spawning a local CLI. Other providers stream via Fetch/SSE directly in JavaScript and don't hit this path, which explains why the issue only appears here and not with the other providers.

Current behavior after the change: all Claude CLI output is collected and only the final assistant message is emitted, which avoids the freeze.
Attempts at real-time streaming (emitting events per line of CLI output) continued to reproduce hangs, even with a delay workaround.





Adds Claude Code as an AI model provider to Chorus.
Summary
Test Plan