Conversation
Adds an LLM plugin that generates text using Groq's high-speed inference API. It uses the OpenAI-compatible API and supports the Llama 3.3/3.1, Mixtral, and Gemma 2 models.

- Plugin itself (manifest.json + node.ts)
- Global settings support (API key and model)
- Groq section added to the settings screen
- Japanese/English localization

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
📝 Walkthrough

This PR introduces comprehensive Groq LLM provider support by adding a new plugin with manifest and node implementation, UI configuration components across the web application, server-side settings mapping, and English and Japanese localizations.
Sequence Diagram

```mermaid
sequenceDiagram
    participant Client as Client/UI
    participant Node as GroqLLMNode
    participant OpenAI as OpenAI Client
    participant GroqAPI as Groq API
    Client->>Node: setup(config with apiKey)
    Note over Node: Initialize OpenAI client<br/>with Groq base URL
    Node->>OpenAI: Create client instance
    Client->>Node: execute(inputs with prompt)
    Note over Node: Construct full prompt<br/>from sections or raw input
    Node->>Node: Append systemPrompt<br/>and character context
    Node->>OpenAI: createChatCompletion()
    OpenAI->>GroqAPI: POST /chat/completions
    GroqAPI-->>OpenAI: response with message
    OpenAI-->>Node: completion result
    Note over Node: Extract text response
    Node->>Node: Emit response.generated event
    Node-->>Client: Return { response: text }
    Client->>Node: teardown()
    Note over Node: Clear client reference
```
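The setup → execute → teardown lifecycle in the diagram can be sketched as follows. This is a minimal illustration only: the class name, the `makeClient` factory, and the demo-mode string are hypothetical stand-ins, not the plugin's actual API; the real node wires in an OpenAI-compatible client pointed at Groq's base URL.

```typescript
// Hypothetical sketch of the GroqLLMNode lifecycle; names are illustrative.
type ChatClient = { chat(prompt: string): Promise<string> };

class GroqLLMNodeSketch {
  private client: ChatClient | null = null;
  private systemPrompt = "";

  // setup(): store config and create the client; a missing apiKey means demo mode
  setup(
    config: { apiKey?: string; systemPrompt?: string },
    makeClient: (apiKey: string) => ChatClient,
  ): void {
    this.systemPrompt = config.systemPrompt ?? "";
    this.client = config.apiKey ? makeClient(config.apiKey) : null;
  }

  // execute(): prepend the system prompt, then call the API (or return demo text)
  async execute(inputs: { prompt: string }): Promise<{ response: string }> {
    const fullPrompt = this.systemPrompt
      ? `${this.systemPrompt}\n\n${inputs.prompt}`
      : inputs.prompt;
    if (!this.client) {
      return { response: "[demo] canned response" };
    }
    return { response: await this.client.chat(fullPrompt) };
  }

  // teardown(): clear the client reference
  teardown(): void {
    this.client = null;
  }
}
```

A stub client makes the flow easy to exercise without network access, which is also how the demo-mode branch can be unit-tested.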
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
🚥 Pre-merge checks: ✅ 4 passed | ❌ 1 failed (1 warning)
Actionable comments posted: 4
🧹 Nitpick comments (1)
apps/web/components/ui/SettingsModal.tsx (1)
308-329: Use Tailwind utilities for the new Groq card wrapper. The new section adds another inline background style in apps/web/**/*.tsx. Please move this wrapper styling to Tailwind so the new UI follows the repo convention. As per coding guidelines, "apps/web/**/*.tsx: Use Tailwind CSS for styling React components".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/web/components/ui/SettingsModal.tsx` around lines 308 - 329, The Groq card wrapper in SettingsModal.tsx currently uses an inline style background: 'rgba(0,0,0,0.2)'; replace that inline style with Tailwind utilities by moving the background into the className (e.g., change the div with className "p-3 rounded-lg" and style={{ background: 'rgba(0,0,0,0.2)' }} to include a Tailwind background like "p-3 rounded-lg bg-black/20" or an equivalent bg-[rgba(...)] utility) and remove the style prop so styling follows the apps/web/**/*.tsx Tailwind convention.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@apps/web/components/panels/NodeSettings.tsx`:
- Around line 885-918: The fallback nodeConfigs["groq-llm"] is missing the
maxTokens field declared in plugins/groq-llm/manifest.json; update the groq
entry to include a field with key "maxTokens", type "number", label "Max Tokens"
and a sensible placeholder/default that matches the manifest's type/shape, and
ensure any related TypeScript types used for this config are updated to keep the
manifest and this fallback in sync.
In `@apps/web/locales/ja.json`:
- Line 103: The Japanese locale is missing Groq config labels so the Groq config
panel falls back to English; add the keys expected by NodeSettings.tsx under the
existing "groq-llm" entry: "groqLlm.label", "groqLlm.apiKey", "groqLlm.model",
"groqLlm.systemPrompt", "groqLlm.promptBuilder", "groqLlm.temperature", and
"groqLlm.maxTokens" (translate them into Japanese consistent with the style of
the surrounding locale strings) so NodeSettings.tsx can resolve
nodeConfig.groqLlm.* without fallback.
In `@plugins/groq-llm/manifest.json`:
- Around line 43-47: The manifest currently marks the "apiKey" field as required
which conflicts with plugins/groq-llm/node.ts that supports running in demo mode
when the key is absent; update the manifest's "apiKey" entry so it's not
required (e.g., remove or set "required": false) and optionally clarify in the
"description" that leaving it blank enables demo mode to match the behavior in
node.ts.
In `@plugins/groq-llm/node.ts`:
- Around line 81-84: The demo-mode early return skips emitting the node's event
payload (so listeners miss response.generated); when this.client is falsy, build
the same response object used in the normal path containing
GroqLLMNode.DEMO_RESPONSE and a populated response.generated, emit it using the
same event/emit mechanism the real path uses, then return that response — i.e.,
replace the direct return with code that logs the message, constructs the
response with generated content, calls the normal emit method (the same one used
elsewhere in this class), and finally returns the response.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: af818d9b-2714-4ec5-86f1-f63c60119ff4
📒 Files selected for processing (8)

- apps/server-ts/src/engine/executor.ts
- apps/web/components/panels/NodeSettings.tsx
- apps/web/components/ui/SettingsModal.tsx
- apps/web/lib/constants.ts
- apps/web/locales/en.json
- apps/web/locales/ja.json
- plugins/groq-llm/manifest.json
- plugins/groq-llm/node.ts
```tsx
"groq-llm": {
  label: "LLM (Groq)",
  fields: [
    {
      key: "apiKey",
      type: "password",
      label: "API Key",
      placeholder: "gsk_...",
    },
    {
      key: "model",
      type: "select",
      label: "Model",
      options: LLM_MODEL_OPTIONS.groq,
    },
    {
      key: "systemPrompt",
      type: "textarea",
      label: "System Prompt",
      placeholder: "Enter character settings...",
    },
    {
      key: "promptSections",
      type: "prompt-builder",
      label: "Prompt Builder",
    },
    {
      key: "temperature",
      type: "number",
      label: "Temperature",
      placeholder: "0.7",
    },
  ],
},
```
Keep the Groq fallback schema in sync with the manifest.
plugins/groq-llm/manifest.json exposes maxTokens, but this fallback nodeConfigs["groq-llm"] omits it. Until the plugin manifest is available, the editor renders from this fallback, so the Groq node config is incomplete.
💡 Suggested fallback field

```diff
   {
     key: "temperature",
     type: "number",
     label: "Temperature",
     placeholder: "0.7",
   },
+  {
+    key: "maxTokens",
+    type: "number",
+    label: "Max Tokens",
+    placeholder: "1024",
+  },
 ],
},
```
```json
"anthropic-llm": "Anthropic APIでテキスト生成するLLMノード。\n【使い方】promptに質問を接続。設定でモデルとシステムプロンプトを指定。",
"google-llm": "Google AI APIでテキスト生成するLLMノード。\n【使い方】promptに質問を接続。設定でモデルとシステムプロンプトを指定。",
"ollama-llm": "ローカルOllamaでテキスト生成するLLMノード。\n【使い方】promptに質問を接続。設定でモデルを指定。※Ollama起動が必要",
"groq-llm": "Groqの高速推論APIでテキスト生成するLLMノード。\n【使い方】promptに質問を接続。設定でモデルとシステムプロンプトを指定。",
```
Japanese Groq locale coverage is still incomplete.
This adds the node description, but NodeSettings.tsx resolves Groq field labels from nodeConfig.groqLlm.*. Without those keys here, the Groq config panel falls back to English in the Japanese UI. Please add the Groq label, apiKey, model, systemPrompt, promptBuilder, temperature, and maxTokens entries as well.
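For illustration, the missing entries might look like the fragment below. The key names come from the review comment; the Japanese strings are placeholder translations, not the project's final copy, and the surrounding nesting is assumed from the `nodeConfig.groqLlm.*` lookup path.

```json
"nodeConfig": {
  "groqLlm": {
    "label": "LLM (Groq)",
    "apiKey": "APIキー",
    "model": "モデル",
    "systemPrompt": "システムプロンプト",
    "promptBuilder": "プロンプトビルダー",
    "temperature": "Temperature",
    "maxTokens": "最大トークン数"
  }
}
```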
```json
"apiKey": {
  "type": "password",
  "label": "API Key",
  "description": "Your Groq API key",
  "required": true
```
apiKey cannot stay required if demo mode is supported.
plugins/groq-llm/node.ts intentionally handles a missing key by entering demo mode, but this manifest marks the same field as required. Any manifest-driven validation/UI will block the no-key flow before the node ever runs.
💡 Suggested manifest fix

```diff
 "apiKey": {
   "type": "password",
   "label": "API Key",
-  "description": "Your Groq API key",
-  "required": true
+  "description": "Your Groq API key. Leave empty to use demo mode."
 },
```
```ts
if (!this.client) {
  await context.log("[デモモード] 定型文応答を返します", "info");
  return { response: GroqLLMNode.DEMO_RESPONSE };
}
```
Demo mode should preserve the node's event contract.
This early return skips response.generated, so listeners only work when a real API key is configured. Emit the canned response before returning so demo mode behaves like the normal path.
💡 Suggested fix

```diff
 if (!this.client) {
   await context.log("[デモモード] 定型文応答を返します", "info");
+  await context.emitEvent(
+    createEvent("response.generated", {
+      text: GroqLLMNode.DEMO_RESPONSE,
+      model: this.model,
+    }),
+  );
   return { response: GroqLLMNode.DEMO_RESPONSE };
 }
```
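The general pattern behind this fix is to route every return path through one shared emit helper, so listeners see `response.generated` whether or not a real client exists. A self-contained sketch using Node's `EventEmitter` (the `NodeSketch`/`emitResponse` names are hypothetical stand-ins for the plugin's actual `createEvent`/`emitEvent` machinery):

```typescript
import { EventEmitter } from "node:events";

const DEMO_RESPONSE = "[demo] canned response";

class NodeSketch extends EventEmitter {
  client: { chat(prompt: string): string } | null = null;

  // Shared by both paths so the event contract holds in demo mode too.
  private emitResponse(text: string): void {
    this.emit("response.generated", { text });
  }

  execute(prompt: string): { response: string } {
    if (!this.client) {
      // Demo mode: emit the same event before the early return.
      this.emitResponse(DEMO_RESPONSE);
      return { response: DEMO_RESPONSE };
    }
    const text = this.client.chat(prompt);
    this.emitResponse(text);
    return { response: text };
  }
}
```

With this shape, a test can attach a listener, run `execute` without a client, and assert the event fired exactly as it does on the real path.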
Summary

Changes
- plugins/groq-llm/ — the plugin itself (manifest.json + node.ts)
- apps/server-ts/src/engine/executor.ts — adds the global-settings mapping
- apps/web/components/ui/SettingsModal.tsx — adds a Groq section to the settings screen
- apps/web/components/panels/NodeSettings.tsx — node settings panel support
- apps/web/lib/constants.ts — adds the Groq model list
- apps/web/locales/{en,ja}.json — adds localization

Test plan
- npm test — all 116 tests pass
- npm run lint — no new errors

closes #39 (the Groq part)
🤖 Generated with Claude Code