
feat(ai): add MiniMax as third LLM provider#111

Open
octo-patch wants to merge 1 commit into IliasHad:main from octo-patch:feature/add-minimax-provider

Conversation

@octo-patch

Summary

Adds MiniMax as a third cloud LLM provider alongside Google Gemini and Ollama. MiniMax offers an OpenAI-compatible API, making integration clean via the existing openai SDK.

  • New provider: packages/ai/src/services/minimax.ts — full AIModel interface implementation with think-tag stripping and response_format: json_object for structured outputs
  • Router update: modelRouter.ts now selects MiniMax when USE_MINIMAX=true and MINIMAX_API_KEY is set
  • Environment config: MINIMAX_API_KEY, USE_MINIMAX, MINIMAX_MODEL (defaults to MiniMax-M1)
  • Tests: 14 unit tests + 3 integration tests for MiniMax provider, plus MiniMax selection test in model router
  • Docs: Updated .env.example and README.md with MiniMax configuration
  • Test fix: Fixed vitest alias to resolve @shared from source (was pointing to non-existent dist/), fixed model router test mock path
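To illustrate the think-tag stripping and `response_format: json_object` handling described above, here is a minimal sketch. It is not the actual `packages/ai/src/services/minimax.ts` implementation, and the function names are hypothetical:

```typescript
// Reasoning models such as MiniMax-M1 may emit <think>...</think> blocks
// before the final answer; stripping them ensures structured (JSON)
// outputs parse cleanly downstream.
export function stripThinkTags(raw: string): string {
  // Remove every <think>...</think> block, including multi-line ones,
  // then trim leftover whitespace around the remaining answer.
  return raw.replace(/<think>[\s\S]*?<\/think>/g, "").trim();
}

export function parseStructuredOutput<T>(raw: string): T {
  // With response_format: { type: "json_object" }, the remaining text
  // should be a single JSON object once think tags are removed.
  return JSON.parse(stripThinkTags(raw)) as T;
}
```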

Configuration

# MiniMax Option
MINIMAX_API_KEY="your-api-key"
USE_MINIMAX="true"
MINIMAX_MODEL="MiniMax-M1"  # or MiniMax-M1-highspeed

Test Plan

  • All 14 MiniMax unit tests pass (mock-based)
  • All 3 MiniMax integration tests pass (live API)
  • All 10 existing model router tests pass (including new MiniMax selection test)
  • Provider selection priority: Ollama > MiniMax > Gemini
  • Manual test with Docker Compose setup
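The provider selection priority above (Ollama > MiniMax > Gemini) could be sketched roughly as follows. This is an illustrative reimplementation, not the actual `modelRouter.ts` code; `USE_OLLAMA` is a hypothetical stand-in for however Ollama is enabled in this repo, while `USE_MINIMAX` and `MINIMAX_API_KEY` come from the PR description:

```typescript
type Provider = "ollama" | "minimax" | "gemini";

// Selection priority: Ollama first, then MiniMax (only when both the
// feature flag and the API key are present), with Gemini as the fallback.
export function selectProvider(env: Record<string, string | undefined>): Provider {
  if (env.USE_OLLAMA === "true") return "ollama";
  if (env.USE_MINIMAX === "true" && env.MINIMAX_API_KEY) return "minimax";
  return "gemini";
}
```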

Add MiniMax AI as an alternative cloud LLM provider alongside Gemini and
Ollama via OpenAI-compatible API using the existing OpenAI SDK.

- New MiniMax provider implementing AIModel interface with think-tag
  stripping and JSON mode support
- Model router updated with MiniMax selection via USE_MINIMAX env var
- MINIMAX_API_KEY, USE_MINIMAX, MINIMAX_MODEL environment variables
- Updated .env.example and README with MiniMax configuration
- Fixed vitest config to resolve @shared from source for testing
- 14 unit tests + 3 integration tests for MiniMax provider
- Updated model router tests with MiniMax selection test case
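The vitest alias fix mentioned above might look roughly like this, a sketch assuming a conventional monorepo layout; the actual path to the shared package's source may differ:

```typescript
// vitest.config.ts — hypothetical sketch of resolving @shared from source
// instead of a non-existent dist/ build output.
import { defineConfig } from "vitest/config";
import path from "node:path";

export default defineConfig({
  resolve: {
    alias: {
      // Point @shared at the package source so tests run without a build step.
      "@shared": path.resolve(__dirname, "../shared/src"),
    },
  },
});
```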
@IliasHad
Owner

@octo-patch Thank you for your contribution, much appreciated. Would you like to share your feedback on using MiniMax vs Gemini 2.5 Pro or 3.1 Pro? Why did you choose MiniMax specifically, and do you have any feedback about contributing to this project? (I want to make sure it's easy for other people to contribute.)

PS: I'll review the code once it passes all the GitHub workflows
