
feat: Add Mistral AI LLM plugin #129

Open
oboroge0 wants to merge 1 commit into main from feat/mistral-llm

Conversation

oboroge0 (Owner) commented Mar 9, 2026

Summary

  • Adds a new LLM plugin that uses the Mistral AI API
  • Uses the OpenAI-compatible API and supports the Mistral Large/Medium/Small, Codestral, and Ministral models
  • Adds a Mistral AI section (API key and model selection) to the global settings screen
  • Supports demo mode (returns a canned response when no API key is set)
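The demo-mode fallback described above can be sketched as a small guard: when no API key is configured, the node short-circuits to a canned response instead of calling the API. This is a minimal illustration with hypothetical names, not code from the plugin itself:

```typescript
// Hypothetical sketch of the demo-mode check: the node falls back to a
// canned response when the Mistral API key is missing or blank.
interface MistralConfig {
  apiKey?: string;
  model: string;
}

function isDemoMode(config: MistralConfig): boolean {
  // Treat an absent or whitespace-only key as "not configured".
  return !config.apiKey || config.apiKey.trim() === "";
}

function demoResponse(): string {
  // Fixed placeholder answer shown when running without a key.
  return "[demo] Set a Mistral API key in global settings to get real responses.";
}
```

A node's execute step would then branch on `isDemoMode(config)` before constructing any API client, which also keeps the demo path testable without network access.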

Changes

  • plugins/mistral-llm/ — plugin body (manifest.json + node.ts)
  • apps/server-ts/src/engine/executor.ts — added global settings mapping
  • apps/web/components/ui/SettingsModal.tsx — added Mistral section to the settings screen
  • apps/web/components/panels/NodeSettings.tsx — node settings panel support
  • apps/web/lib/constants.ts — added Mistral model list
  • apps/web/locales/{en,ja}.json — added localization entries
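The executor.ts change maps node config fields onto global settings. A plausible shape for such a mapping, with the lookup keys and helper below being illustrative assumptions rather than the actual implementation, is:

```typescript
// Hypothetical sketch: per-node-type mapping from local config fields to
// global settings keys, applied as a fallback when the node leaves a
// field unset. Key names here are assumptions for illustration.
type SettingsMap = Record<string, Record<string, string>>;

const GLOBAL_SETTINGS_MAP: SettingsMap = {
  "mistral-llm": { apiKey: "mistral.apiKey", model: "mistral.model" },
};

function applyGlobalSettings(
  nodeType: string,
  config: Record<string, unknown>,
  globals: Record<string, unknown>
): Record<string, unknown> {
  const mapping = GLOBAL_SETTINGS_MAP[nodeType];
  if (!mapping) return config; // unmapped node types pass through untouched
  const merged = { ...config };
  for (const [field, globalKey] of Object.entries(mapping)) {
    // Only fill fields the node did not set itself.
    if (merged[field] === undefined || merged[field] === "") {
      merged[field] = globals[globalKey];
    }
  }
  return merged;
}
```

This keeps per-node overrides intact while letting a single global API key and model serve every Mistral node on the canvas.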

Test plan

  • npm test — all 116 tests pass
  • npm run lint — no new errors
  • Added a Mistral LLM node in the editor and confirmed the settings panel renders correctly
  • Set the Mistral API key and model in global settings and confirmed they are applied to the node
  • Confirmed that the Mistral API returns responses once an API key is configured

closes #39 (Mistral portion)

🤖 Generated with Claude Code

Summary by CodeRabbit

Release Notes

New Features

  • Added Mistral AI as a new LLM provider with full integration
  • Users can now configure Mistral with API key input and model selection
  • Six Mistral models available: Large, Medium, Small, Codestral, Ministral 8B/3B
  • Configure system prompt, temperature, and max tokens for Mistral LLM nodes
  • Localization support added for English and Japanese interfaces

Adds an LLM plugin that generates text via the Mistral AI API.
Uses the OpenAI-compatible API and supports the Mistral Large/Medium/Small, Codestral, and Ministral models.

- Plugin body (manifest.json + node.ts)
- Global settings support (API key and model)
- Mistral AI section added to the settings screen
- Japanese and English localization

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

coderabbitai Bot commented Mar 9, 2026

📝 Walkthrough

This PR adds Mistral AI as a new LLM provider to the system. Changes include backend configuration mappings, frontend UI components for settings, model option definitions, localization entries for English and Japanese, and a new Mistral LLM plugin with manifest and node implementation following existing provider patterns.

Changes

Cohort / File(s) Summary
Backend Configuration
apps/server-ts/src/engine/executor.ts
Added "mistral-llm" mapping in GLOBAL_SETTINGS_MAP linking apiKey and model config fields to global mistral settings.
Frontend Node Settings
apps/web/components/panels/NodeSettings.tsx
Extended nodeConfigs with "mistral-llm" entry including fields for API key (password), model (select dropdown), system prompt (textarea), prompt builder, and temperature (number). Updated GLOBAL_SETTINGS_MAP with mistral-llm mapping.
Settings UI Panel
apps/web/components/ui/SettingsModal.tsx
Added Mistral AI configuration panel with API key input and model selection dropdowns. Added spacing adjustment (mb-4) to Ollama panel.
Model Definitions
apps/web/lib/constants.ts
Added mistral key to LLM_MODEL_OPTIONS with six model options: Mistral Large, Medium, Small, Codestral, Ministral 8B, and Ministral 3B.
Localizations
apps/web/locales/en.json, apps/web/locales/ja.json
Added mistral-llm node descriptions in English and Japanese locales describing the node functionality and usage requirements.
Plugin Manifest
plugins/mistral-llm/manifest.json
Defined complete plugin manifest with metadata, UI configuration (label, icon, color), node I/O specification (prompt input, response output), event emissions (response.generated), and config schema with apiKey, model, systemPrompt, promptSections, temperature, and maxTokens fields.
Plugin Implementation
plugins/mistral-llm/node.ts
Implemented MistralLLMNode class extending BaseNode with setup/execute/teardown lifecycle. Handles OpenAI-compatible Mistral API client initialization, demo-mode fallback, prompt construction from sections, system prompt augmentation with character context, API invocation, response emission, and comprehensive error handling.
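The walkthrough above describes node.ts assembling a prompt from sections and augmenting the system prompt with character context before calling Mistral's OpenAI-compatible API. A minimal sketch of that request assembly follows; only the config field names (systemPrompt, promptSections, temperature, maxTokens) come from the manifest, while the helper and its exact shape are assumptions:

```typescript
// Hypothetical sketch of building an OpenAI-compatible chat request for
// Mistral: merge system prompt with character context, join the prompt
// sections into one user message, and pass through sampling settings.
interface ChatMessage {
  role: "system" | "user";
  content: string;
}

function buildChatRequest(
  config: { model: string; systemPrompt?: string; temperature?: number; maxTokens?: number },
  promptSections: string[],
  characterContext?: string
): { model: string; messages: ChatMessage[]; temperature?: number; max_tokens?: number } {
  const messages: ChatMessage[] = [];
  // Augment the system prompt with character context when present.
  const system = [config.systemPrompt, characterContext].filter(Boolean).join("\n\n");
  if (system) messages.push({ role: "system", content: system });
  // Collapse the configured prompt sections into a single user turn.
  messages.push({ role: "user", content: promptSections.join("\n") });
  return {
    model: config.model,
    messages,
    temperature: config.temperature,
    max_tokens: config.maxTokens,
  };
}
```

With an OpenAI-style client pointed at Mistral's endpoint, this object could be passed to a `chat.completions.create`-style call; the exact client setup is not shown in this PR summary.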

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Possibly related PRs

  • PR #117: Mistral LLM node integrates with shared SDK LLM error handler (handleLLMError) and follows plugin initialization patterns from this release PR.
  • PR #93: Both PRs modify NodeSettings.tsx integration and plugin manifest handling for dynamic field derivation from plugin configurations.
  • PR #95: Main PR extends GLOBAL_SETTINGS_MAP and model constants introduced in that PR with new Mistral entries.

Poem

🐰 A whisper through the fields of code,
Mistral winds now find their road,
With prompts and keys, a node takes flight,
Six models strong, the LLM's delight!
The config spreads from east to west,
New patterns prove the system's best.

🚥 Pre-merge checks | ✅ 4 | ❌ 1

❌ Failed checks (1 warning)
  • Docstring Coverage ⚠️ Warning — Docstring coverage is 0.00%, below the required threshold of 80.00%. Resolution: write docstrings for the functions that are missing them.

✅ Passed checks (4 passed)
  • Description Check ✅ — Check skipped; CodeRabbit's high-level summary is enabled.
  • Title Check ✅ — The title clearly summarizes the main change: adding a Mistral AI LLM plugin. It is concise and specific about the primary feature introduced.
  • Linked Issues Check ✅ — The PR implements Mistral AI support as outlined in issue #39: it adds manifest.json, the node.ts implementation, model selection in the UI, global settings support, and localization [#39].
  • Out of Scope Changes Check ✅ — All changes are within the scope of adding Mistral AI support. No unrelated modifications detected.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


@coderabbitai coderabbitai Bot left a comment


🧹 Nitpick comments (2)
apps/web/locales/ja.json (1)

103-103: Node description looks good, but nodeConfig.mistralLlm section may be missing.

The mistral-llm description correctly follows the pattern of other LLM nodes. However, other LLM providers (e.g., openaiLlm, anthropicLlm, googleLlm, ollamaLlm) have corresponding nodeConfig.* sections (lines 180-212) for field label translations. Consider adding a mistralLlm section for consistency:

"mistralLlm": {
  "label": "LLM (Mistral)",
  "apiKey": "APIキー",
  "model": "モデル",
  "systemPrompt": "システムプロンプト",
  "temperature": "温度"
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/web/locales/ja.json` at line 103, Add a corresponding nodeConfig entry
for the Mistral LLM to mirror other providers: create a "mistralLlm" object
under nodeConfig with keys "label", "apiKey", "model", "systemPrompt", and
"temperature" (use the provided Japanese translations: "LLM (Mistral)", "APIキー",
"モデル", "システムプロンプト", "温度") so the "mistral-llm" node description has consistent
field label translations like openaiLlm/anthropicLlm/googleLlm/ollamaLlm.
plugins/mistral-llm/node.ts (1)

105-129: Consider adding a defensive check for empty choices array.

The access to response.choices[0] (line 115) could throw if the API returns an empty choices array (edge case, but possible with certain error conditions).

🛡️ Optional defensive check
-      const result = response.choices[0].message.content ?? "";
+      const result = response.choices[0]?.message?.content ?? "";
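The suggested guard can be packaged as a small helper. This sketch assumes an OpenAI-style response shape and shows both the array-emptiness check and the optional-chaining fallback; the helper name is hypothetical:

```typescript
// Hypothetical sketch of the defensive extraction suggested above: guard
// against a missing or empty choices array before reading message content.
interface ChatResponse {
  choices?: Array<{ message?: { content?: string | null } }>;
}

function extractContent(response: ChatResponse): string {
  if (!Array.isArray(response.choices) || response.choices.length === 0) {
    // Empty or absent choices: return a safe fallback instead of throwing.
    return "";
  }
  return response.choices[0]?.message?.content ?? "";
}
```

The caller can then treat an empty string as the "no content" case and route it through the same error handling as other failures.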
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@plugins/mistral-llm/node.ts` around lines 105 - 129, Add a defensive check
before accessing response.choices[0] in the method that calls
this.client.chat.completions.create: verify that response.choices is an array
and has length > 0; if not, log an error via context.log (or call handleLLMError
with the original response/error) and return a safe fallback (e.g., empty string
or an error response) instead of indexing into choices. Update the block that
computes result (currently using response.choices[0].message.content) to first
guard on Array.isArray(response.choices) && response.choices.length > 0 and then
extract content, otherwise handle the empty case consistently with
handleLLMError and emit/return the same shaped response object.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 67bfd94a-d74d-4c6d-bfff-254c7d708ecc

📥 Commits

Reviewing files that changed from the base of the PR and between b45d01e and ddfdf10.

📒 Files selected for processing (8)
  • apps/server-ts/src/engine/executor.ts
  • apps/web/components/panels/NodeSettings.tsx
  • apps/web/components/ui/SettingsModal.tsx
  • apps/web/lib/constants.ts
  • apps/web/locales/en.json
  • apps/web/locales/ja.json
  • plugins/mistral-llm/manifest.json
  • plugins/mistral-llm/node.ts



Development

Successfully merging this pull request may close these issues.

feat: Add new LLM providers (Groq, Mistral, etc.)

1 participant