io.github.tjp2021/mcp-thinkgate
Coding & Debugging · by tjp2021
Auto-classifies prompt complexity and routes to the right Claude thinking mode. No API key needed.
ThinkGate
Automatic reasoning mode selection for Claude agents.
The problem
You built an AI agent. It handles everything — status checks, quick lookups, complex architecture questions, deep debugging sessions. But under the hood it runs every single message through the same model with the same thinking settings.
That means you're burning extended thinking tokens on "what time is it in Tokyo?" and getting shallow answers on "help me design the entire auth system."
You could manually tag requests — ULTRATHINK: before the hard ones. But you forget. Your users definitely won't do it. And if you're building agents for other people, you can't train every end user to manage thinking modes.
ThinkGate fixes this at the infrastructure layer. It sits between the incoming message and your model call, classifies the complexity in ~200ms, and returns exactly which model and thinking depth to use. Automatically. Every time.
Who this is for
- Agent builders running Claude on a mix of simple and complex tasks who are tired of one-size-fits-all model settings
- Teams running 24/7 agents (WhatsApp bots, Slack assistants, Telegram agents) where message complexity varies wildly and cost/latency actually matters
- Anyone who's ever typed ULTRATHINK manually and thought: this should just happen on its own
How it works
Incoming message
↓
Haiku call (~200ms, ~$0.0001)
"How complex is this?"
↓
fast → no extended thinking
think → medium effort
ultrathink → max effort
↓
Claude runs with the right settings
A cheap, fast Haiku call reads your prompt and decides which tier it needs. Then your main Claude call runs with the right effort level. You pay almost nothing for the classification, and save real money (and latency) on the 60%+ of messages that don't need extended reasoning.
The classifier is the IP here — not which model runs it. Three tiers. A system prompt trained on the boundary between "this needs thinking" and "this doesn't." Works out of the box.
Tiers
| Tier | Claude effort | When |
|---|---|---|
| fast | none | Factual, conversational, simple edits |
| think | medium | Architecture, debugging, multi-step analysis |
| ultrathink | max | System design, proofs, open-ended complexity |
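The table above maps directly onto Claude's extended-thinking settings. A minimal sketch of that mapping, assuming the Anthropic API's `thinking` parameter shape; the `budget_tokens` values are illustrative assumptions, not values ThinkGate prescribes:

```typescript
// Map a ThinkGate tier to Claude extended-thinking settings.
// NOTE: the budget_tokens values are illustrative assumptions.
type Tier = 'fast' | 'think' | 'ultrathink';

type ThinkingConfig =
  | { type: 'disabled' }
  | { type: 'enabled'; budget_tokens: number };

function thinkingConfig(tier: Tier): ThinkingConfig {
  switch (tier) {
    case 'fast':
      return { type: 'disabled' };                      // no extended thinking
    case 'think':
      return { type: 'enabled', budget_tokens: 8000 };  // medium effort
    case 'ultrathink':
      return { type: 'enabled', budget_tokens: 32000 }; // max effort
  }
}
```

Keeping the mapping in one small function means the rest of your agent code only ever deals with a tier name, never raw token budgets.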
Use as an MCP tool (Claude Desktop / Claude Code)
Add to ~/.claude/settings.json (Claude Code) or ~/Library/Application Support/Claude/claude_desktop_config.json (Claude Desktop):
{
  "mcpServers": {
    "thinkgate": {
      "command": "npx",
      "args": ["-y", "mcp-thinkgate"],
      "env": {
        "ANTHROPIC_API_KEY": "your-api-key-here"
      }
    }
  }
}
Restart Claude. Now you can ask it to classify before it answers:
"Before responding, classify the complexity of this task: design a rate limiter for a public API"
Tier: think
Effort: medium
Suggested model: claude-sonnet-4-6
Confidence: 92%
Why: Requires structured design reasoning and trade-off analysis, but has well-defined scope.
Use as a library (agent frameworks)
Install:
npm install mcp-thinkgate
Import and use:
import { classifyPrompt, setLogLevel } from 'mcp-thinkgate';
// Optional: silence logs (default level is 'info', writes to stderr)
setLogLevel('error');
const result = await classifyPrompt(userMessage, process.env.ANTHROPIC_API_KEY!);
// result.tier → 'fast' | 'think' | 'ultrathink'
// result.effort → 'none' | 'medium' | 'max'
// result.confidence → 0.0 - 1.0
// result.reasoning → one sentence explanation
// Works without an API key too (rule-based fallback):
const quickResult = await classifyPrompt(userMessage);
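One way to consume the result in an agent loop, sketched here with a hypothetical helper. The `ClassifyResult` shape mirrors the fields listed above; the confidence threshold and the helper name are assumptions, not part of the library:

```typescript
// Hypothetical helper: trust the classifier only above a confidence floor,
// otherwise fall back to the cheapest setting. Field names follow the
// documented result shape; the 0.6 default threshold is an assumption.
type Effort = 'none' | 'medium' | 'max';

interface ClassifyResult {
  tier: 'fast' | 'think' | 'ultrathink';
  effort: Effort;
  confidence: number; // 0.0 - 1.0
  reasoning: string;  // one-sentence explanation
}

function pickEffort(result: ClassifyResult, minConfidence = 0.6): Effort {
  return result.confidence >= minConfidence ? result.effort : 'none';
}
```

Gating on confidence keeps a shaky classification from triggering an expensive max-effort run; a low-confidence message just gets the fast path.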
Reference implementation: TinyClaw
TinyClaw is an open-source multi-agent framework for Claude. ThinkGate is wired into its invokeAgent() function — every message is automatically classified before the Claude CLI runs, and --effort is set accordingly.
Three lines added. Zero config required. Every agent in every team automatically gets the right thinking depth.
See the integration at src/lib/invoke.ts.
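The wiring described above boils down to translating the classified effort into CLI flags before spawning Claude. A minimal sketch, assuming the --effort flag mentioned in the text; the helper name and the base flag are hypothetical:

```typescript
// Hypothetical sketch of the TinyClaw-style wiring: turn a classified effort
// level into extra Claude CLI flags. 'none' means no extended thinking, so no
// flag is added at all. '--print' is an illustrative base flag.
function buildClaudeArgs(message: string, effort: 'none' | 'medium' | 'max'): string[] {
  const args = ['--print', message];
  if (effort !== 'none') {
    args.push('--effort', effort); // set thinking depth per classification
  }
  return args;
}
```

You would then hand these args to your process spawner, e.g. `spawn('claude', buildClaudeArgs(msg, result.effort))`, so every invocation picks up the right depth without per-call configuration.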
Requirements
- Node.js 20+
- Anthropic API key (optional — falls back to rule-based classification)
Local development
git clone https://github.com/tjp2021/mcp-thinkgate
cd mcp-thinkgate
npm install
npm test
npm run build
Contributing
See CONTRIBUTING.md for dev setup, commands, and PR process.
Security
See SECURITY.md for vulnerability reporting.
License
MIT — see LICENSE
Related Skills
Web Builder
by anthropics
For complex claude.ai HTML artifact development: quickly scaffolds a React + Tailwind CSS + shadcn/ui project and bundles it into a single-file HTML, suited to pages that need state management, routing, or multi-component interaction.
✎ Makes building complex web artifacts in claude.ai painless. Multi-component state and routing come together easily, and the React, Tailwind, and shadcn/ui combination is fast and produces polished results.
Frontend Design
by anthropics
For components, pages, posters, and web app development: generates production-ready frontend code and high-quality UI with a distinct visual direction, suited to landing pages, dashboards, or polishing existing interfaces while avoiding the generic AI aesthetic.
✎ If you want pages that can ship and still feel designed, use Frontend Design: it produces everything from components to full sites, and, unusually, steers clear of the generic AI look.
Web App Testing
by anthropics
Writes automated Playwright tests for local web apps: can start dev servers, verify frontend interactions, diagnose UI issues, and capture screenshots and browser logs, suited to debugging dynamic pages and regression checks.
✎ One-stop verification of a local web app's frontend with Playwright; while debugging UI you can inspect logs and screenshots alongside, which makes diagnosis faster.
Related MCP Servers
GitHub
Editor's Pick · by GitHub
GitHub is the official MCP reference server, letting Claude read and write your repositories and Issues directly.
✎ This reference server solves the problem of giving AI safe access to GitHub data, suited to teams automating code review or Issue management. Note that it is only a reference implementation; harden it yourself before using it in production.
Context7 Documentation Lookup
Editor's Pick · by Context7
Context7 is an assistant that pulls up-to-date documentation and code examples in real time, so you can stop relying on stale material.
✎ It solves the staleness problem developers hit when looking up docs, and is especially useful for getting started with a new library or tracking updates. Relying on external sources can occasionally introduce data lag, though, so pair it with the official documentation.
tldraw
by tldraw
tldraw is an MCP server that lets AI assistants draw and collaborate directly on an infinite canvas.
✎ This addresses the pain point of AI being limited to text output with no visual collaboration: imagine Claude drawing a flowchart or whiteboarding with you. Best suited to developers who need quick prototyping or brainstorming. For now it is only a basic connector; you have to build the canvas app yourself to unlock its full potential.