MCP Rubber Duck
An MCP (Model Context Protocol) server that acts as a bridge to query multiple LLMs -- both OpenAI-compatible HTTP APIs and CLI coding agents. Just like rubber duck debugging, explain your problems to various AI "ducks" and get different perspectives!
<p align="center"> <img src="assets/mcp-rubber-duck.jpg" alt="MCP Rubber Duck - AI ducks helping debug code" width="600"> </p>

Features
- Universal OpenAI Compatibility -- Works with any OpenAI-compatible API endpoint
- CLI Agent Support -- Use CLI coding agents (Claude Code, Codex, Gemini CLI, Grok, Aider) as ducks
- Multiple Ducks -- Configure and query multiple LLM providers simultaneously
- Conversation Management -- Maintain context across multiple messages
- Duck Council -- Get responses from all your configured LLMs at once
- Consensus Voting -- Multi-duck voting with reasoning and confidence scores
- LLM-as-Judge -- Have ducks evaluate and rank each other's responses
- Iterative Refinement -- Two ducks collaboratively improve responses
- Structured Debates -- Oxford, Socratic, and adversarial debate formats
- MCP Prompts -- 8 reusable prompt templates for multi-LLM workflows
- Vision Input -- Send images alongside prompts to vision-capable models (docs)
- Automatic Failover -- Falls back to other providers if primary fails
- Health Monitoring -- Real-time health checks for all providers
- Usage Tracking -- Track requests, tokens, and estimated costs per provider
- MCP Bridge -- Connect ducks to other MCP servers for extended functionality (docs)
- Guardrails -- Pluggable safety layer with rate limiting, token limits, pattern blocking, and PII redaction (docs)
- Granular Security -- Per-server approval controls with session-based approvals
- Interactive UIs -- Rich HTML panels for compare, vote, debate, and usage tools (via MCP Apps)
- Tool Annotations -- MCP-compliant hints for tool behavior (read-only, destructive, etc.)
- Structured Output -- `outputSchema` on tools returning structured JSON for client-side validation (Cursor, VS Code/Copilot)
Supported Providers
HTTP Providers (OpenAI-compatible API)
Any provider with an OpenAI-compatible API endpoint, including:
- OpenAI (GPT-5.1, o3, o4-mini)
- Google Gemini (Gemini 3, Gemini 2.5 Pro/Flash)
- Anthropic (via OpenAI-compatible endpoints)
- Groq (Llama 4, Llama 3.3)
- Together AI (Llama 4, Qwen, and more)
- Perplexity (Online models with web search)
- Anyscale, Azure OpenAI, Ollama, LM Studio, Custom
CLI Providers (Coding Agents)
Command-line coding agents that run as local processes:
- Claude Code (`claude`)
- Codex (`codex`)
- Gemini CLI (`gemini`)
- Grok CLI (`grok`)
- Aider (`aider`)
- Custom
See CLI Providers for full setup and configuration.
Quick Start
# Install globally
npm install -g mcp-rubber-duck
# Or use npx directly in Claude Desktop config
npx mcp-rubber-duck
Using Claude Desktop? Jump to Claude Desktop Configuration. Using Cursor, VS Code, Windsurf, or another tool? See the Setup Guide.
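For Claude Desktop, an entry in `claude_desktop_config.json` typically looks like the sketch below. The server name `rubber-duck` and the API key value are placeholders; see the Claude Desktop Configuration docs for the authoritative version.

```json
{
  "mcpServers": {
    "rubber-duck": {
      "command": "npx",
      "args": ["mcp-rubber-duck"],
      "env": {
        "OPENAI_API_KEY": "your-openai-key",
        "MCP_SERVER": "true"
      }
    }
  }
}
```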
Installation
Prerequisites
- Node.js 20 or higher
- npm or yarn
- At least one API key for an HTTP provider, or a CLI coding agent installed locally
Install from NPM
npm install -g mcp-rubber-duck
Install from Source
git clone https://github.com/nesquikm/mcp-rubber-duck.git
cd mcp-rubber-duck
npm install
npm run build
npm start
Configuration
Create a .env file or config/config.json. Key environment variables:
| Variable | Description |
|---|---|
| `OPENAI_API_KEY` | OpenAI API key |
| `GEMINI_API_KEY` | Google Gemini API key |
| `GROQ_API_KEY` | Groq API key |
| `DEFAULT_PROVIDER` | Default provider (e.g., `openai`) |
| `DEFAULT_TEMPERATURE` | Default temperature (e.g., `0.7`) |
| `LOG_LEVEL` | `debug`, `info`, `warn`, `error` |
| `MCP_SERVER` | Set to `true` for MCP server mode |
| `MCP_BRIDGE_ENABLED` | Enable MCP Bridge (ducks access external MCP servers) |
| `CUSTOM_{NAME}_*` | Custom HTTP providers |
| `CLI_{AGENT}_ENABLED` | Enable CLI agents (`CLAUDE`, `CODEX`, `GEMINI`, `GROK`, `AIDER`) |
Full reference: Configuration docs
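The variables above combine into a minimal `.env`; all values below are placeholders:

```
# Example .env (placeholder values)
OPENAI_API_KEY=your-openai-key
GEMINI_API_KEY=your-gemini-key
DEFAULT_PROVIDER=openai
DEFAULT_TEMPERATURE=0.7
LOG_LEVEL=info
```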
Interactive UIs (MCP Apps)
Four tools -- compare_ducks, duck_vote, duck_debate, and get_usage_stats -- can render rich interactive HTML panels inside supported MCP clients via MCP Apps. Once this MCP server is configured in a supporting client, the UIs appear automatically -- no additional setup is required. Clients without MCP Apps support still receive the same plain text output (no functionality is lost). See the MCP Apps repo for an up-to-date list of supported clients.
Compare Ducks
Compare multiple model responses side-by-side, with latency indicators, token counts, model badges, and error states.
<p align="center"> <img src="assets/ext-apps-compare.png" alt="Compare Ducks interactive UI" width="600"> </p>

Duck Vote
Have multiple ducks vote on options, displayed as a visual vote tally with bar charts, consensus badge, winner card, confidence bars, and collapsible reasoning.
<p align="center"> <img src="assets/ext-apps-vote.png" alt="Duck Vote interactive UI" width="600"> </p>

Duck Debate
Structured multi-round debate between ducks, shown as a round-by-round view with format badge, participant list, collapsible rounds, and synthesis section.
<p align="center"> <img src="assets/ext-apps-debate.png" alt="Duck Debate interactive UI" width="600"> </p>

Usage Stats
Usage analytics with summary cards, provider breakdown with expandable rows, token distribution bars, and estimated costs.
<p align="center"> <img src="assets/ext-apps-usage-stats.png" alt="Usage Stats interactive UI" width="600"> </p>

Available Tools
| Tool | Description |
|---|---|
| `ask_duck` | Ask a single question to a specific LLM provider |
| `chat_with_duck` | Conversation with context maintained across messages |
| `clear_conversations` | Clear all conversation history |
| `list_ducks` | List configured providers and health status |
| `list_models` | List available models for providers |
| `compare_ducks` | Ask the same question to multiple providers simultaneously |
| `duck_council` | Get responses from all configured ducks |
| `get_usage_stats` | Usage statistics and estimated costs |
| `duck_vote` | Multi-duck voting with reasoning and confidence |
| `duck_judge` | Have one duck evaluate and rank others' responses |
| `duck_iterate` | Iteratively refine a response between two ducks |
| `duck_debate` | Structured multi-round debate between ducks |
| `mcp_status` | MCP Bridge status and connected servers |
| `get_pending_approvals` | Pending MCP tool approval requests |
| `approve_mcp_request` | Approve or deny a duck's MCP tool request |
Full reference with input schemas: Tools docs
Available Prompts
| Prompt | Purpose | Required Arguments |
|---|---|---|
| `perspectives` | Multi-angle analysis with assigned lenses | `problem`, `perspectives` |
| `assumptions` | Surface hidden assumptions in plans | `plan` |
| `blindspots` | Hunt for overlooked risks and gaps | `proposal` |
| `tradeoffs` | Structured option comparison | `options`, `criteria` |
| `red_team` | Security/risk analysis from multiple angles | `target` |
| `reframe` | Problem reframing at different levels | `problem` |
| `architecture` | Design review across concerns | `design`, `workloads`, `priorities` |
| `diverge_converge` | Divergent exploration then convergence | `challenge` |
Full reference with examples: Prompts docs
Development
npm run dev # Development with watch mode
npm test # Run all tests
npm run lint # ESLint
npm run typecheck # Type check without emit
Documentation
| Topic | Link |
|---|---|
| Setup guide (all tools) | docs/setup.md |
| Full configuration reference | docs/configuration.md |
| Claude Desktop setup | docs/claude-desktop.md |
| All tools with schemas | docs/tools.md |
| Prompt templates | docs/prompts.md |
| CLI coding agents | docs/cli-providers.md |
| MCP Bridge | docs/mcp-bridge.md |
| Guardrails | docs/guardrails.md |
| Docker deployment | docs/docker.md |
| Provider-specific setup | docs/provider-setup.md |
| Usage examples | docs/usage-examples.md |
| Architecture | docs/architecture.md |
| Roadmap | docs/roadmap.md |
Troubleshooting
Provider Not Working
- Check API key is correctly set
- Verify endpoint URL is correct
- Run health check: `list_ducks({ check_health: true })`
- Check logs for detailed error messages
Connection Issues
- For local providers (Ollama, LM Studio), ensure they're running
- Check firewall settings for local endpoints
- Verify network connectivity to cloud providers
Rate Limiting
- Configure failover to alternate providers
- Adjust `max_retries` and `timeout` settings
- See Guardrails for rate limiting configuration
Contributing
```
   __
 <(o )___
  ( ._> /
   `---'   Quack! Ready to debug!
```
We love contributions! Whether you're fixing bugs, adding features, or teaching our ducks new tricks, we'd love to have you join the flock.
Check out our Contributing Guide to get started.
Quick start for contributors:
- Fork the repository
- Create a feature branch
- Follow our conventional commit guidelines
- Add tests for new functionality
- Submit a pull request
License
MIT License - see LICENSE file for details
Acknowledgments
- Inspired by the rubber duck debugging method
- Built on the Model Context Protocol (MCP)
- Uses OpenAI SDK for HTTP provider compatibility
- Supports CLI coding agents (Claude Code, Codex, Gemini CLI, Grok, Aider)
Changelog
See CHANGELOG.md for a detailed history of changes and releases.
Registry & Directory
- NPM Package: npmjs.com/package/mcp-rubber-duck
- Docker Images: ghcr.io/nesquikm/mcp-rubber-duck
- MCP Registry: Official MCP server `io.github.nesquikm/rubber-duck`
- Glama Directory: glama.ai/mcp/servers/@nesquikm/mcp-rubber-duck
- Awesome MCP Servers: Listed in the community directory
Support
- Report issues: https://github.com/nesquikm/mcp-rubber-duck/issues
- Documentation: https://github.com/nesquikm/mcp-rubber-duck/wiki
- Discussions: https://github.com/nesquikm/mcp-rubber-duck/discussions
Happy Debugging with your AI Duck Panel!