feat: add MiniMax LLM provider support (M2.7 default) #1964

Open

octo-patch wants to merge 2 commits into FoundationAgents:main from octo-patch:feature/add-minimax-provider

Conversation


@octo-patch octo-patch commented Mar 13, 2026

Summary

Add MiniMax as a first-class LLM provider for MetaGPT via its OpenAI-compatible API.

Changes

  • Add LLMType.MINIMAX enum value in llm_config.py
  • Register MINIMAX in OpenAILLM provider routing in openai_api.py
  • Add temperature clamping for MiniMax (the API requires temperature > 0)
  • Add token costs and context window limits for MiniMax-M1, M2.5, M2.5-highspeed, and M2.7 (default recommended)
  • Add MiniMax config example in config2.example.yaml
  • Add MiniMax usage example in README
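
The routing and temperature handling in the list above can be sketched as follows. This is an illustrative sketch only: MetaGPT's real registration lives in `llm_config.py` and `openai_api.py`, and the registry, decorator, and clamp floor here are hypothetical names, not the project's actual API.

```python
# Hypothetical sketch of OpenAI-compatible provider routing plus the
# MiniMax temperature clamp; names are illustrative, not MetaGPT's.
PROVIDER_REGISTRY: dict[str, type] = {}

def register_provider(api_type: str):
    """Map an api_type string to a provider class (illustrative)."""
    def wrap(cls):
        PROVIDER_REGISTRY[api_type] = cls
        return cls
    return wrap

@register_provider("openai")
@register_provider("minimax")  # MiniMax reuses the OpenAI-compatible client
class OpenAILLM:
    def __init__(self, model: str, temperature: float = 0.7):
        self.model = model
        # MiniMax rejects temperature == 0, so clamp to a small positive
        # floor (0.01 is an assumed value, not MetaGPT's actual constant).
        self.temperature = max(temperature, 0.01)

llm = PROVIDER_REGISTRY["minimax"]("MiniMax-M2.7", temperature=0.0)
```

Registering `"minimax"` against the same class as `"openai"` keeps the change small, which matches the PR's approach of reusing the OpenAI-compatible code path rather than adding a separate provider class.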

Models Supported

| Model | Context Window | Notes |
| --- | --- | --- |
| MiniMax-M2.7 | 204K | Default recommended; latest model |
| MiniMax-M2.5 | 204K | Previous generation |
| MiniMax-M2.5-highspeed | 204K | Fast variant |
| MiniMax-M1 | 1M | Legacy |
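
The table above could be mirrored as a lookup map like the one below. This is a sketch, not the PR's actual `token_counter` entries: the dict and function names are hypothetical, and "204K"/"1M" are taken at face value as 204,000 and 1,000,000 tokens.

```python
# Context-window figures from the models table above; names are illustrative.
MINIMAX_CONTEXT_WINDOWS = {
    "MiniMax-M2.7": 204_000,           # default recommended
    "MiniMax-M2.5": 204_000,
    "MiniMax-M2.5-highspeed": 204_000,
    "MiniMax-M1": 1_000_000,           # legacy, 1M window
}

def max_input_tokens(model: str, reserved_output: int = 4096) -> int:
    """Budget prompt tokens by reserving room for the completion."""
    return MINIMAX_CONTEXT_WINDOWS[model] - reserved_output
```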

Configuration

```yaml
llm:
  api_type: "minimax"
  model: "MiniMax-M2.7"
  base_url: "https://api.minimax.io/v1"
  api_key: "YOUR_MINIMAX_API_KEY"
```
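
A minimal validation pass over that config might look like the sketch below. The field names come from the `config2.example.yaml` snippet above; the validator itself is hypothetical and not part of MetaGPT.

```python
# Hypothetical pre-flight check for the llm config block shown above.
REQUIRED_FIELDS = ("api_type", "model", "base_url", "api_key")

def validate_llm_config(cfg: dict) -> dict:
    """Raise if any required llm config field is missing or empty."""
    missing = [f for f in REQUIRED_FIELDS if not cfg.get(f)]
    if missing:
        raise ValueError(f"missing llm config fields: {missing}")
    return cfg

cfg = validate_llm_config({
    "api_type": "minimax",
    "model": "MiniMax-M2.7",
    "base_url": "https://api.minimax.io/v1",
    "api_key": "YOUR_MINIMAX_API_KEY",
})
```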

Testing

  • Verified MiniMax-M2.7 model responds correctly via API
  • Token counter and context window values validated programmatically
  • LLMType enum verified

Add MiniMax as a new LLM provider using its OpenAI-compatible API.
MiniMax offers models such as MiniMax-M2.5 with a 204K context window.

Changes:
- Add MINIMAX to LLMType enum
- Register MiniMax provider in OpenAILLM (OpenAI-compatible)
- Handle MiniMax temperature constraint (must be > 0)
- Add token costs and context lengths for MiniMax models
- Add MiniMax configuration example in config2.example.yaml
- Update README with MiniMax usage example

Add MiniMax-M2.7 to token costs and context window definitions.
Update config examples and README to recommend M2.7 as the default model.
@octo-patch octo-patch changed the title feat: add MiniMax LLM provider support feat: add MiniMax LLM provider support (M2.7 default) Mar 18, 2026