What is MindBridge?
MindBridge unifies your LLM workflow by connecting to any model, letting you switch flexibly between providers, play to each one's strengths, and avoid vendor lock-in.
Core Features (3 tools)
- getSecondOpinion: Get responses from various LLM providers
- listProviders: List all configured LLM providers and their available models
- listReasoningModels: List all available models that support reasoning capabilities
README
MindBridge MCP Server ⚡ The AI Router for Big Brain Moves
MindBridge is your AI command hub — a Model Context Protocol (MCP) server built to unify, organize, and supercharge your LLM workflows.
Forget vendor lock-in. Forget juggling a dozen APIs.
MindBridge connects your apps to any model, from OpenAI and Anthropic to Ollama and DeepSeek — and lets them talk to each other like a team of expert consultants.
Need raw speed? Grab a cheap model.
Need complex reasoning? Route it to a specialist.
Want a second opinion? MindBridge has that built in.
This isn't just model aggregation. It's model orchestration.
Core Features 🔥
| Feature | What it does |
|---|---|
| Multi-LLM Support | Instantly switch between OpenAI, Anthropic, Google, DeepSeek, OpenRouter, Ollama (local models), and OpenAI-compatible APIs. |
| Reasoning Engine Aware | Smart routing to models built for deep reasoning like Claude, GPT-4o, DeepSeek Reasoner, etc. |
| getSecondOpinion Tool | Ask multiple models the same question to compare responses side-by-side (see the example below this table). |
| OpenAI-Compatible API Layer | Drop MindBridge into any tool expecting OpenAI endpoints (Azure, Together.ai, Groq, etc.). |
| Auto-Detects Providers | Just add your keys. MindBridge handles setup & discovery automagically. |
| Flexible as Hell | Configure everything via env vars, MCP config, or JSON — it's your call. |
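To compare responses side-by-side, send the same prompt through getSecondOpinion once per provider and compare the two answers. A quick sketch follows; the prompt is only an illustration, and the model names are the same ones used in the configuration examples later in this README:

```json
// First opinion from OpenAI
{
  "provider": "openai",
  "model": "gpt-4o",
  "prompt": "Should we shard this Postgres database now or wait?"
}

// Second opinion on the same prompt from Anthropic
{
  "provider": "anthropic",
  "model": "claude-3-5-sonnet-20241022",
  "prompt": "Should we shard this Postgres database now or wait?"
}
```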
Why MindBridge?
"Every LLM is good at something. MindBridge makes them work together."
Perfect for:
- Agent builders
- Multi-model workflows
- AI orchestration engines
- Reasoning-heavy tasks
- Building smarter AI dev environments
- LLM-powered backends
- Anyone tired of vendor walled gardens
Installation 🛠️
Option 1: Install from npm (Recommended)
```bash
# Install globally
npm install -g @pinkpixel/mindbridge

# use with npx
npx @pinkpixel/mindbridge
```
Installing via Smithery
To install mindbridge-mcp for Claude Desktop automatically via Smithery:
```bash
npx -y @smithery/cli install @pinkpixel-dev/mindbridge-mcp --client claude
```
Option 2: Install from source
- Clone the repository:

  ```bash
  git clone https://github.com/pinkpixel-dev/mindbridge.git
  cd mindbridge
  ```

- Install dependencies:

  ```bash
  chmod +x install.sh
  ./install.sh
  ```

- Configure environment variables:

  ```bash
  cp .env.example .env
  ```

  Edit `.env` and add your API keys for the providers you want to use.
Configuration ⚙️
Environment Variables
The server supports the following environment variables:
- `OPENAI_API_KEY`: Your OpenAI API key
- `ANTHROPIC_API_KEY`: Your Anthropic API key
- `DEEPSEEK_API_KEY`: Your DeepSeek API key
- `GOOGLE_API_KEY`: Your Google AI API key
- `OPENROUTER_API_KEY`: Your OpenRouter API key
- `OLLAMA_BASE_URL`: Ollama instance URL (default: http://localhost:11434)
- `OPENAI_COMPATIBLE_API_KEY`: (Optional) API key for OpenAI-compatible services
- `OPENAI_COMPATIBLE_API_BASE_URL`: Base URL for OpenAI-compatible services
- `OPENAI_COMPATIBLE_API_MODELS`: Comma-separated list of available models
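For example, a minimal `.env` that enables OpenAI plus a local Ollama instance might look like the sketch below; the key value is a placeholder, and you only add the variables for the providers you actually use:

```bash
# .env — enable only the providers you need
OPENAI_API_KEY=sk-your-key-here
OLLAMA_BASE_URL=http://localhost:11434
```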
MCP Configuration
For use with MCP-compatible IDEs like Cursor or Windsurf, you can use the following configuration in your mcp.json file:
```json
{
  "mcpServers": {
    "mindbridge": {
      "command": "npx",
      "args": [
        "-y",
        "@pinkpixel/mindbridge"
      ],
      "env": {
        "OPENAI_API_KEY": "OPENAI_API_KEY_HERE",
        "ANTHROPIC_API_KEY": "ANTHROPIC_API_KEY_HERE",
        "GOOGLE_API_KEY": "GOOGLE_API_KEY_HERE",
        "DEEPSEEK_API_KEY": "DEEPSEEK_API_KEY_HERE",
        "OPENROUTER_API_KEY": "OPENROUTER_API_KEY_HERE"
      },
      "provider_config": {
        "openai": {
          "default_model": "gpt-4o"
        },
        "anthropic": {
          "default_model": "claude-3-5-sonnet-20241022"
        },
        "google": {
          "default_model": "gemini-2.0-flash"
        },
        "deepseek": {
          "default_model": "deepseek-chat"
        },
        "openrouter": {
          "default_model": "openai/gpt-4o"
        },
        "ollama": {
          "base_url": "http://localhost:11434",
          "default_model": "llama3"
        },
        "openai_compatible": {
          "api_key": "API_KEY_HERE_OR_REMOVE_IF_NOT_NEEDED",
          "base_url": "FULL_API_URL_HERE",
          "available_models": ["MODEL1", "MODEL2"],
          "default_model": "MODEL1"
        }
      },
      "default_params": {
        "temperature": 0.7,
        "reasoning_effort": "medium"
      },
      "alwaysAllow": [
        "getSecondOpinion",
        "listProviders",
        "listReasoningModels"
      ]
    }
  }
}
```
Replace the API keys with your actual keys. For the OpenAI-compatible configuration, you can remove the api_key field if the service doesn't require authentication.
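If you only run local models, a pared-down configuration along these lines should be enough. This is a sketch built from the Ollama fields above; since MindBridge auto-detects configured providers, providers you don't use can presumably be left out entirely:

```json
{
  "mcpServers": {
    "mindbridge": {
      "command": "npx",
      "args": ["-y", "@pinkpixel/mindbridge"],
      "provider_config": {
        "ollama": {
          "base_url": "http://localhost:11434",
          "default_model": "llama3"
        }
      }
    }
  }
}
```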
Usage 💫
Starting the Server
Development mode with auto-reload:

```bash
npm run dev
```

Production mode:

```bash
npm run build
npm start
```

When installed globally:

```bash
mindbridge
```
Available Tools
- `getSecondOpinion`

  ```typescript
  {
    provider: string;        // LLM provider name
    model: string;           // Model identifier
    prompt: string;          // Your question or prompt
    systemPrompt?: string;   // Optional system instructions
    temperature?: number;    // Response randomness (0-1)
    maxTokens?: number;      // Maximum response length
    reasoning_effort?: 'low' | 'medium' | 'high';  // For reasoning models
  }
  ```

- `listProviders`
  - Lists all configured providers and their available models
  - No parameters required

- `listReasoningModels`
  - Lists models optimized for reasoning tasks
  - No parameters required
Example Usage 📝
```json
// Get an opinion from GPT-4o
{
  "provider": "openai",
  "model": "gpt-4o",
  "prompt": "What are the key considerations for database sharding?",
  "temperature": 0.7,
  "maxTokens": 1000
}

// Get a reasoned response from OpenAI's o1 model
{
  "provider": "openai",
  "model": "o1",
  "prompt": "Explain the mathematical principles behind database indexing",
  "reasoning_effort": "high",
  "maxTokens": 4000
}

// Get a reasoned response from DeepSeek
{
  "provider": "deepseek",
  "model": "deepseek-reasoner",
  "prompt": "What are the tradeoffs between microservices and monoliths?",
  "reasoning_effort": "high",
  "maxTokens": 2000
}

// Use an OpenAI-compatible provider
{
  "provider": "openaiCompatible",
  "model": "YOUR_MODEL_NAME",
  "prompt": "Explain the concept of eventual consistency in distributed systems",
  "temperature": 0.5,
  "maxTokens": 1500
}
```
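Outside an MCP-enabled IDE, the same tools can be called programmatically. The sketch below uses the official MCP TypeScript SDK (`@modelcontextprotocol/sdk`) over stdio to spawn MindBridge and invoke `listProviders` and `getSecondOpinion`; it is illustrative only, and the exact call shapes may differ between SDK versions:

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Spawn MindBridge over stdio, just as an MCP-enabled IDE would.
const transport = new StdioClientTransport({
  command: "npx",
  args: ["-y", "@pinkpixel/mindbridge"],
  env: { OPENAI_API_KEY: process.env.OPENAI_API_KEY ?? "" },
});

const client = new Client({ name: "mindbridge-demo", version: "1.0.0" });
await client.connect(transport);

// Discover what is configured, then ask for a second opinion.
const providers = await client.callTool({ name: "listProviders", arguments: {} });
console.log(providers);

const opinion = await client.callTool({
  name: "getSecondOpinion",
  arguments: {
    provider: "openai",
    model: "gpt-4o",
    prompt: "What are the key considerations for database sharding?",
    temperature: 0.7,
    maxTokens: 1000,
  },
});
console.log(opinion);

await client.close();
```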
Development 🔧
- `npm run lint`: Run ESLint
- `npm run format`: Format code with Prettier
- `npm run clean`: Clean build artifacts
- `npm run build`: Build the project
Contributing
PRs welcome! Help us make AI workflows less dumb.
License
MIT — do whatever, just don't be evil.
Made with ❤️ by Pink Pixel
FAQ
What is MindBridge?
MindBridge unifies your LLM workflow by connecting to any model, letting you switch flexibly between providers, play to each one's strengths, and avoid vendor lock-in.
What tools does MindBridge provide?
It provides 3 tools: getSecondOpinion, listProviders, and listReasoningModels.
Related Skills
Claude API
by anthropics
For development scenarios integrating the Claude API, Anthropic SDK, or Agent SDK: it detects the project language automatically and provides matching examples and default configurations so you can build LLM applications quickly.
✎ If you want to bring Claude capabilities into an app or agent, claude-api is quick to pick up, compatible with the Anthropic and Agent SDKs, and offers a clear, low-friction integration path.
Prompt Engineering Expert
by alirezarezvani
Covers prompt optimization, few-shot design, structured output, RAG evaluation, and agent workflow orchestration; suited to analyzing token costs, evaluating LLM output quality, and building production-ready AI agent systems.
✎ Ties prompt optimization, LLM evaluation, RAG, and agent design into one methodology; good for anyone who wants to systematically improve their AI development efficiency.
Agent Workflow Design
by alirezarezvani
For production-grade multi-agent orchestration: lays out five workflow designs (sequential, parallel, hierarchical, event-driven, consensus) and covers handoffs, state management, fault-tolerant retries, context budgeting, and cost optimization; suited to building complex AI collaboration systems.
✎ Helps you unify multi-agent workflow design, orchestration, and automation so complex workflows land more reliably; a fit for teams that want strong control.
Related MCP Servers
Sequential Thinking
Editor's Pick, by Anthropic
Sequential Thinking is a reference server that lets AI work through complex problems with dynamic chains of thought.
✎ This server demonstrates how to get Claude to reason step by step, much like a person would, and is useful for developers studying chain-of-thought implementations in MCP. Note that it is only a reference example; don't expect to drop it straight into production.
Knowledge Graph Memory
Editor's Pick, by Anthropic
Memory is a persistent memory system built on a local knowledge graph that lets AI retain long-term context.
✎ Fills the "can't remember" gap for AI and agents by persisting long-term context in a local knowledge graph, making ongoing conversations smarter while keeping your data under your control.
PraisonAI
Editor's Pick, by mervinpraison
PraisonAI is a low-code AI agent framework with support for self-reflection and multiple LLMs.
✎ If you need to quickly stand up a team of AI agents running 24/7 on complex tasks (such as automated research or code generation), PraisonAI's low-code design and multi-platform integrations (e.g., Telegram) make it very fast to get started. As an unofficial project, though, its ecosystem may be less mature than mainstream frameworks like LangChain; it suits developers willing to experiment.