io.github.QuantuLabs/hivemind
Hivemind
Multi-model AI consensus platform that queries GPT-5.2, Claude Opus 4.5, and Gemini 3 Pro simultaneously to deliver synthesized, high-confidence responses.
MCP Server for Claude Code
Use Hivemind directly in Claude Code to get perspectives from GPT-5.2 and Gemini 3 Pro. Claude acts as the orchestrator and synthesizes the responses.
Requirements
- Node.js >= 18
- Claude Code CLI installed
- At least one API key: OpenAI or Google AI
Note: No Anthropic API key needed - Claude is already your host!
Installation
npm install -g @quantulabs/hivemind
claude mcp add hivemind -- hivemind
Configuration
You need at least one API key, but both are recommended for better consensus:
Option 1: Paste directly (recommended)
/hive-config sk-proj-xxx... # OpenAI key
/hive-config AIzaSy... # Google key
Option 2: Config file
Create ~/.config/hivemind/.env:
OPENAI_API_KEY=sk-...
GOOGLE_API_KEY=AIza...
# Optional: Override default models
OPENAI_MODEL=gpt-5.1
GOOGLE_MODEL=gemini-2.5-flash
<details>
<summary>Using with other MCP clients (non-Claude Code)</summary>

A .env.example template is included in the package.
For standalone MCP usage, you can also add an Anthropic key to include Claude in the consensus:
ANTHROPIC_API_KEY=sk-ant-...
Disable Claude Code mode via /hive-config > Settings > Claude Code Mode.
</details>
Usage
/hive "Why is my WebSocket connection dropping?"
Claude orchestrates the consensus from GPT-5.2 and Gemini 3 Pro responses.
Available Tools
| Tool | Description |
|---|---|
| hivemind | Query models and get synthesized consensus |
| configure_keys | Set API keys (stored securely) |
| check_status | Check configuration and active providers |
| configure_hive | Toggle grounding search and settings |
| check_stats | View token usage and cost statistics |
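For non-Claude Code clients, these tools are invoked through the standard MCP tools/call request. The argument name below (prompt) is an assumption for illustration; check the server's tool schema (via tools/list) for the actual parameter names:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "hivemind",
    "arguments": {
      "prompt": "Why is my WebSocket connection dropping?"
    }
  }
}
```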
Claude Code Commands
- /hive <question> - Orchestrate multi-model consensus with Claude as the synthesizer
- /hive-config - Configure API keys and settings
- /hivestats - View usage statistics
Automatic Hivemind Fallback
Copy CLAUDE.md.example to your project's .claude/CLAUDE.md to enable automatic Hivemind consultation when Claude is stuck (after 3+ failed attempts).
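The shipped CLAUDE.md.example defines the exact behavior; as a rough illustration (this wording is hypothetical, not the actual file contents), the instruction takes a form like:

```markdown
<!-- .claude/CLAUDE.md (illustrative sketch, not the shipped template) -->
When you have attempted the same problem 3 or more times without success,
call the hivemind MCP tool with a summary of the problem and the approaches
already tried, then incorporate the consensus answer before continuing.
```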
Prompt Caching
All providers use optimized caching for cost reduction on follow-up queries:
| Provider | Type | Savings | Min Tokens |
|---|---|---|---|
| OpenAI | Automatic | 50% | 1024 |
| Gemini 2.5+ | Implicit | 90% | - |
| Anthropic | Explicit | 90% | 1024 |
Web Interface
A full-featured web app with solo mode, hivemind mode, and conversation history.
Quick Start
# Clone the repository
git clone https://github.com/QuantuLabs/hivemind.git
cd hivemind
# Install dependencies (requires Bun >= 1.0)
bun install
# Start development server
bun dev
Open http://localhost:3000, click the settings icon, and enter your API keys.
Features
- Multi-Model Consensus: Query 3 leading AI models simultaneously
- Deliberation Algorithm: Up to 3 rounds of refinement to reach consensus
- Solo Mode: Chat with individual models (GPT, Claude, Gemini)
- Hivemind Mode: Get synthesized responses from all models
- Conversation History: Persistent chat sessions
- Dark/Light Theme: Full theme support
- Secure Storage: API keys encrypted with AES-GCM in browser
Security
- API keys are encrypted using AES-GCM with PBKDF2 key derivation
- Keys are stored locally in browser localStorage (never sent to servers)
- Session persistence uses sessionStorage (cleared on browser close)
How Consensus Works
- Initial Query: All 3 models receive the same question
- Analysis: An orchestrator analyzes responses for agreements/divergences
- Refinement: If no consensus, models see other perspectives and refine (up to 3 rounds)
- Synthesis: Final response synthesizes agreed points and addresses divergences
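The loop above can be sketched as follows. All names here (queryModel, hasConsensus, deliberate) are illustrative, not the project's actual API, and the agreement check is a naive stand-in for the orchestrator's LLM-based analysis:

```typescript
// Hypothetical sketch of the query → analyze → refine loop described above.

type ModelResponse = { model: string; answer: string };

// Stub provider call; the real system would call OpenAI/Anthropic/Google here.
async function queryModel(model: string, prompt: string): Promise<ModelResponse> {
  return { model, answer: `(${model} answer to: ${prompt})` };
}

// Naive agreement check: treats identical answers as consensus. The real
// orchestrator compares responses semantically, not byte-for-byte.
function hasConsensus(responses: ModelResponse[]): boolean {
  return new Set(responses.map((r) => r.answer)).size === 1;
}

async function deliberate(
  prompt: string,
  models: string[],
  maxRounds = 3
): Promise<ModelResponse[]> {
  // Round 1: every model gets the same question.
  let responses = await Promise.all(models.map((m) => queryModel(m, prompt)));
  for (let round = 1; round < maxRounds && !hasConsensus(responses); round++) {
    // Refinement: each model sees the other perspectives and answers again.
    const context = responses.map((r) => `${r.model}: ${r.answer}`).join("\n");
    responses = await Promise.all(
      models.map((m) => queryModel(m, `${prompt}\nOther views:\n${context}`))
    );
  }
  // A final synthesis step would merge agreed points and flag divergences.
  return responses;
}
```

The cap of 3 rounds bounds cost and latency: a question the models never converge on still terminates, and the synthesis step is responsible for surfacing the remaining disagreement.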
Supported Models
OpenAI
- GPT-5.2 (default)
- GPT-5.1, GPT-5, GPT-5 Mini, GPT-5 Nano
- O4 Mini
Anthropic
- Claude Opus 4.5 (default)
- Claude Sonnet 4.5, Claude Opus 4, Claude Sonnet 4
Google
- Gemini 3 Pro (default)
- Gemini 3 Flash, Gemini 2.5 Pro/Flash/Flash Lite, Gemini 2.0 Flash
Project Structure
hivemind/
├── apps/
│ └── web/ # Next.js 14 frontend
├── packages/
│ ├── core/ # Shared consensus logic & providers
│ └── mcp/ # Model Context Protocol server
└── .claude/ # Claude Code integration
Development
# Run all tests
bun test
# Run tests with coverage
bun test:coverage
# Build all packages
bun build
# Lint code
bun lint
License
MIT
Developed by QuantuLabs