
feat: add MiniMax as LLM provider for evaluation and examples#159

Merged
ycjcl868 merged 1 commit into agent-infra:main from octo-patch:feature/add-minimax-provider
Apr 10, 2026

Conversation

@octo-patch
Contributor

Summary

Adds MiniMax as an LLM provider for the evaluation framework and a new integration example.

MiniMax offers OpenAI-compatible endpoints (https://api.minimax.io/v1) with 204K-context models (M2.7, M2.7-highspeed, M2.5, M2.5-highspeed), making it a drop-in alternative for any workflow that already uses the openai SDK.

Changes

| Area | What changed |
| --- | --- |
| `evaluation/agent_loop.py` | New `OpenAIAgentLoop` that works with standard OpenAI and any compatible API. Includes MiniMax-specific temperature clamping (> 0) and `<think>` tag stripping for M2.x models. |
| `evaluation/main.py` | New `--agent openai`, `--openai-base-url`, and `--openai-model` CLI flags, so evaluations can run against MiniMax (or any OpenAI-compatible provider) without code changes. |
| `examples/minimax-integration/` | Standalone example showing MiniMax function calling with sandbox code execution (mirrors `openai-integration`). |
| `evaluation/tests/` | 19 unit tests + 4 live integration tests (skipped when `MINIMAX_API_KEY` is not set). |
| READMEs | Updated `README.md`, `examples/README.md`, and `evaluation/README.md` with MiniMax usage docs. |
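The two MiniMax-specific transforms mentioned above amount to a small amount of request/response post-processing. A hedged sketch of the idea (function names and the exact clamp floor are illustrative, not the PR's actual implementation):

```python
import re

# MiniMax rejects temperature == 0, so clamp to a small positive floor.
# The floor value here (0.01) is an assumption for illustration.
def clamp_temperature(temperature: float, floor: float = 0.01) -> float:
    return max(temperature, floor)


# M2.x models may emit <think>...</think> reasoning blocks in the content;
# strip them before handing the text back to the evaluation harness.
THINK_TAG = re.compile(r"<think>.*?</think>\s*", re.DOTALL)


def strip_think_tags(content: str) -> str:
    return THINK_TAG.sub("", content).strip()
```

Both transforms are no-ops for standard OpenAI responses, which is what lets one `OpenAIAgentLoop` serve both providers.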

Usage

```bash
# Run evaluation with MiniMax
export OPENAI_API_KEY="your_minimax_key"
uv run main.py --agent openai \
    --openai-base-url https://api.minimax.io/v1 \
    --openai-model MiniMax-M2.7

# Run the example
cd examples/minimax-integration
export MINIMAX_API_KEY="your_key"
uv run main.py
```

Test plan

  • 19 unit tests pass (python -m pytest evaluation/tests/test_openai_agent_loop.py)
  • 4 integration tests pass with live MiniMax API key
  • Existing AzureOpenAIAgentLoop behaviour unchanged (verified by unit test)
  • Run examples/minimax-integration/main.py against a live sandbox
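The key-gated skip behaviour in the test plan is the standard `pytest.mark.skipif` pattern; a minimal sketch (the decorator and test names are illustrative, not necessarily what the PR's tests use):

```python
import os

import pytest

# Skip live tests unless a real MiniMax key is present in the environment.
requires_minimax = pytest.mark.skipif(
    not os.environ.get("MINIMAX_API_KEY"),
    reason="set MINIMAX_API_KEY to run live MiniMax integration tests",
)


@requires_minimax
def test_live_chat_completion():
    # A real test would drive OpenAIAgentLoop against api.minimax.io here.
    ...
```

This keeps `python -m pytest evaluation/tests/` green in CI while still exercising the live API locally when a key is exported.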

13 files changed, 1019 additions(+), 12 deletions(-)

Add OpenAIAgentLoop to the evaluation framework, supporting standard
OpenAI and any OpenAI-compatible API (e.g. MiniMax) via configurable
base_url. This complements the existing AzureOpenAIAgentLoop with
temperature clamping for MiniMax and <think> tag stripping for M2.x
models.

- evaluation/agent_loop.py: new OpenAIAgentLoop class
- evaluation/main.py: --agent/--openai-base-url/--openai-model CLI flags
- examples/minimax-integration: standalone example with MiniMax
- 19 unit tests + 4 integration tests
- README/docs updated
ycjcl868 merged commit d6f970a into agent-infra:main on Apr 10, 2026