io.github.backspacevenkat/perspectives
AI & Agents · by polydev-ai
Query multiple AI models in parallel, such as GPT-4, Claude, Gemini, and Grok, to get richer, more diverse perspectives.
README
Polydev - Multi-Model AI Perspectives
Get unstuck faster. Query GPT 5.2, Claude Opus 4.5, Gemini 3, and Grok 4.1 simultaneously — one API call, four expert opinions.
Why Polydev?
Stop copy-pasting between ChatGPT, Claude, and Gemini. Get all their perspectives in your IDE with one request.
| Metric | Result |
|---|---|
| SWE-bench Verified | 74.6% Resolve@2 |
| Cost vs Claude Opus | 62% lower |
| Response time | 10-40 seconds |
"Different models have different blind spots. Combining their perspectives eliminates yours."
Supported Models
| Model | Provider | Strengths |
|---|---|---|
| GPT 5.2 | OpenAI | Reasoning, code generation |
| Claude Opus 4.5 | Anthropic | Analysis, nuanced thinking |
| Gemini 3 Pro | Google | Multimodal, large context |
| Grok 4.1 | xAI | Real-time knowledge, directness |
Quick Start
1. Get your free API token
polydev.ai/dashboard/mcp-tokens
| Tier | Messages/Month | Price |
|---|---|---|
| Free | 1,000 | $0 |
| Pro | 10,000 | $19/mo |
2. Install in your IDE
Claude Code
claude mcp add polydev -- npx -y polydev-ai@latest
Then set your token:
export POLYDEV_USER_TOKEN="pd_your_token_here"
Or add to ~/.claude.json:
{
"mcpServers": {
"polydev": {
"command": "npx",
"args": ["-y", "polydev-ai@latest"],
"env": {
"POLYDEV_USER_TOKEN": "pd_your_token_here"
}
}
}
}
Cursor
Add to ~/.cursor/mcp.json:
{
"mcpServers": {
"polydev": {
"command": "npx",
"args": ["-y", "polydev-ai@latest"],
"env": {
"POLYDEV_USER_TOKEN": "pd_your_token_here"
}
}
}
}
Windsurf
Add to your MCP configuration:
{
"mcpServers": {
"polydev": {
"command": "npx",
"args": ["-y", "polydev-ai@latest"],
"env": {
"POLYDEV_USER_TOKEN": "pd_your_token_here"
}
}
}
}
Cline (VS Code)
- Open Cline settings (gear icon)
- Go to "MCP Servers" → "Configure"
- Add the same JSON config as above
OpenAI Codex CLI
Add to ~/.codex/config.toml:
[mcp_servers.polydev]
command = "npx"
args = ["-y", "polydev-ai@latest"]
[mcp_servers.polydev.env]
POLYDEV_USER_TOKEN = "pd_your_token_here"
[mcp_servers.polydev.timeouts]
tool_timeout = 180
session_timeout = 600
Usage
Natural Language
Just mention "polydev" or "perspectives" in your prompt:
"Use polydev to debug this infinite loop"
"Get perspectives on: Should I use Redis or PostgreSQL for caching?"
"Use polydev to review this API for security issues"
MCP Tool
Call the get_perspectives tool directly:
{
"tool": "get_perspectives",
"arguments": {
"prompt": "How should I optimize this database query?",
"user_token": "pd_your_token_here"
}
}
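Under the hood, an MCP client wraps that call in a JSON-RPC 2.0 `tools/call` request. A minimal sketch of the message that goes over the wire (the token value is a placeholder, and `build_tool_call` is an illustrative helper, not part of the polydev-ai package):

```python
import json

def build_tool_call(request_id: int, name: str, arguments: dict) -> dict:
    """Build a JSON-RPC 2.0 `tools/call` request as defined by the MCP spec."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    }

req = build_tool_call(1, "get_perspectives", {
    "prompt": "How should I optimize this database query?",
    "user_token": "pd_your_token_here",
})
print(json.dumps(req, indent=2))
```

Your IDE's MCP client constructs and sends this for you; the sketch is only to show what the `get_perspectives` invocation looks like at the protocol level.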
Example Response
🤖 Multi-Model Analysis
┌─ GPT 5.2 ────────────────────────────────────────
│ The N+1 query pattern is causing performance issues.
│ Consider using eager loading or batch queries...
└──────────────────────────────────────────────────
┌─ Claude Opus 4.5 ────────────────────────────────
│ Looking at the execution plan, the table scan on
│ `users` suggests a missing index on `email`...
└──────────────────────────────────────────────────
┌─ Gemini 3 ───────────────────────────────────────
│ The query could benefit from denormalization for
│ this read-heavy access pattern...
└──────────────────────────────────────────────────
┌─ Grok 4.1 ───────────────────────────────────────
│ Just add an index. The real problem is you're
│ querying in a loop - fix that first.
└──────────────────────────────────────────────────
✅ Consensus: Add index on users.email, fix N+1 query
💡 Recommendation: Use eager loading with proper indexing
Research
Our approach achieves 74.6% on SWE-bench Verified (Resolve@2), matching Claude Opus at 62% lower cost.
| Approach | Resolution Rate | Cost/Instance |
|---|---|---|
| Claude Haiku (baseline) | 64.6% | $0.18 |
| + Polydev consultation | 66.6% | $0.24 |
| Resolve@2 (best of both) | 74.6% | $0.37 |
| Claude Opus (reference) | 74.4% | $0.97 |
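The 62% figure follows directly from the cost column above; a quick sanity check:

```python
# Per-instance costs from the table: Resolve@2 vs. the Claude Opus reference.
polydev_cost, opus_cost = 0.37, 0.97

savings = 1 - polydev_cost / opus_cost  # fraction saved relative to Opus
print(f"{savings:.0%}")  # prints "62%"
```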
Available Tools
| Tool | Description |
|---|---|
| get_perspectives | Query multiple AI models simultaneously |
| get_cli_status | Check status of local CLI tools |
| force_cli_detection | Re-detect installed CLI tools |
| send_cli_prompt | Send prompts to local CLIs with fallback |
Links
- Website: polydev.ai
- Dashboard: polydev.ai/dashboard
- npm: npmjs.com/package/polydev-ai
- Research: SWE-bench Paper
License
MIT License - see LICENSE for details.
<p align="center"> <b>Built by <a href="https://polydev.ai">Polydev AI</a></b><br> <i>Multi-model consultation for better code</i> </p>
FAQ
What is io.github.backspacevenkat/perspectives?
Query multiple AI models in parallel, such as GPT-4, Claude, Gemini, and Grok, to get richer, more diverse perspectives.
Related Skills
Claude API
by anthropics
For projects integrating the Claude API, Anthropic SDK, or Agent SDK: detects the project language and provides matching examples and default configuration so you can build LLM applications quickly.
✎ If you want to wire Claude into an app or agent, claude-api gets you started fast, stays compatible with the Anthropic and Agent SDKs, and keeps the integration path clear and painless.
Agent Workflow Design
by alirezarezvani
For production-grade multi-agent orchestration: covers five workflow designs (sequential, parallel, hierarchical, event-driven, and consensus) along with handoffs, state management, fault-tolerant retries, context budgeting, and cost optimization. Suited to building complex AI collaboration systems.
✎ Unifies multi-agent workflow design, orchestration, and automation so complex pipelines land more reliably; a good fit for teams that want tight control.
Prompt Engineering Expert
by alirezarezvani
Covers prompt optimization, few-shot design, structured output, RAG evaluation, and agent workflow orchestration. Useful for analyzing token costs, evaluating LLM output quality, and building practical AI agent systems.
✎ Connects prompt optimization, LLM evaluation, RAG, and agent design into one methodology; suited to anyone who wants to systematically improve their AI development workflow.
Related MCP Servers
Knowledge Graph Memory
Editor's pick · by Anthropic
Memory is a persistent memory system built on a local knowledge graph that lets AI retain long-term context.
✎ Fills the "can't remember" gap for AI and agents: long-term context accumulates in a local knowledge graph, making multi-session conversations smarter while keeping the data under your control.
Sequential Thinking
Editor's pick · by Anthropic
Sequential Thinking is a reference server that lets AI work through complex problems with dynamic chains of thought.
✎ This server demonstrates how to have Claude reason step by step, and is a good way for developers to study a chain-of-thought implementation in MCP. Note that it is only a reference example; don't expect to use it in production as-is.
PraisonAI
Editor's pick · by mervinpraison
PraisonAI is a low-code AI agent framework with self-reflection and multi-LLM support.
✎ If you need to quickly assemble a 24/7 AI agent team for complex tasks (such as automated research or code generation), PraisonAI's low-code design and multi-platform integrations (e.g., Telegram) make it very quick to pick up. As an unofficial project, though, its ecosystem is less mature than mainstream frameworks like LangChain; best for developers willing to experiment.