PraisonAI
AI & Agents · Editor's Pick · by mervinpraison
PraisonAI is a low-code AI agent framework with self-reflection and multi-LLM support.
If you need to quickly assemble a team of AI agents that runs 24/7 to handle complex tasks (such as automated research or code generation), PraisonAI's low-code design and multi-platform integrations (such as Telegram) make it fast to get started with. As an unofficial project, however, its ecosystem may be less mature than mainstream frameworks like LangChain, so it best suits developers willing to try something new.
What is PraisonAI?
PraisonAI is a low-code AI agent framework with self-reflection and multi-LLM support.
README
PraisonAI 🦞
<a href="https://trendshift.io/repositories/9130" target="_blank"><img src="https://trendshift.io/api/badge/repositories/9130" alt="MervinPraison%2FPraisonAI | Trendshift" style="width: 250px; height: 55px;" width="250" height="55"/></a>
PraisonAI 🦞 — Automate and solve complex challenges with AI agent teams that plan, research, code, and deliver results to Telegram, Discord, and WhatsApp — running 24/7. A low-code, production-ready multi-agent framework with handoffs, guardrails, memory, RAG, and 100+ LLM providers, built around simplicity, customisation, and effective human-agent collaboration.
<div align="center"> <br> <a href="https://x.com/elonmusk/status/1893870468249141688" target="_blank"> <img src="https://img.shields.io/badge/Highlighted_by_Elon_Musk-000000?style=for-the-badge&logo=x&logoColor=white" alt="Highlighted by Elon Musk" /> </a> <p><em>"Grok 3 customer support" — <a href="https://x.com/elonmusk/status/1893870468249141688">Elon Musk quoting PraisonAI's tutorial</a></em></p> <br> </div>

<p align="center"> <img src=".github/images/dashboard.png" alt="PraisonAI Dashboard" width="800" /> </p>

<p align="center"> <img src=".github/images/agentflow.gif" alt="PraisonAI AgentFlow" width="800" /> </p>

```
██████╗ ██████╗ █████╗ ██╗███████╗ ██████╗ ███╗ ██╗ █████╗ ██╗
██╔══██╗██╔══██╗██╔══██╗██║██╔════╝██╔═══██╗████╗ ██║ ██╔══██╗██║
██████╔╝██████╔╝███████║██║███████╗██║ ██║██╔██╗ ██║ ███████║██║
██╔═══╝ ██╔══██╗██╔══██║██║╚════██║██║ ██║██║╚██╗██║ ██╔══██║██║
██║ ██║ ██║██║ ██║██║███████║╚██████╔╝██║ ╚████║ ██║ ██║██║
╚═╝ ╚═╝ ╚═╝╚═╝ ╚═╝╚═╝╚══════╝ ╚═════╝ ╚═╝ ╚═══╝ ╚═╝ ╚═╝╚═╝
```
```bash
pip install praisonai
export TAVILY_API_KEY=xxxxx
```
Quick Paths:
- 🆕 New here? → Quick Start (1 minute to first agent)
- 📦 Installing? → Installation
- 🐍 Python SDK? → Python Examples
- 📄 YAML/No-Code? → YAML Examples
- 🎯 CLI user? → CLI Quick Reference
- 🤝 Contributing? → Contributing
⚡ Performance
PraisonAI is built for speed, with agent instantiation in under 4μs. This reduces overhead, improves responsiveness, and helps multi-agent systems scale efficiently in real-world production workloads.
| Performance Metric | PraisonAI |
|---|---|
| Avg Instantiation Time | 3.77 μs |
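As a rough illustration of how such a figure can be measured (a sketch using a stand-in class, since the actual number depends on `praisonaiagents` internals, your Python version, and your hardware):

```python
import timeit

# Stand-in for an agent object; swap in praisonaiagents.Agent
# to measure the real framework on your machine.
class StubAgent:
    def __init__(self, instructions: str):
        self.instructions = instructions

N = 100_000
total = timeit.timeit(lambda: StubAgent("You are a helpful AI assistant"), number=N)
print(f"avg instantiation: {total / N * 1e6:.2f} us")
```

Results vary by environment; the table above reports the project's own benchmark.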
🎯 Use Cases
AI agents solving real-world problems across industries:
| Use Case | Description |
|---|---|
| 🔍 Research & Analysis | Conduct deep research, gather information, and generate insights from multiple sources automatically |
| 💻 Code Generation | Write, debug, and refactor code with AI agents that understand your codebase and requirements |
| ✍️ Content Creation | Generate blog posts, documentation, marketing copy, and technical writing with multi-agent teams |
| 📊 Data Pipelines | Extract, transform, and analyze data from APIs, databases, and web sources automatically |
| 🤖 Customer Support | Deploy 24/7 support bots on Telegram, Discord, Slack with memory and knowledge-backed responses |
| ⚙️ Workflow Automation | Automate multi-step business processes with agents that hand off tasks, verify results, and self-correct |
Supported Providers
PraisonAI supports 100+ LLM providers through seamless integration:
<p align="center"> <img src="https://img.shields.io/badge/OpenAI-412991?style=flat&logo=openai&logoColor=white" alt="OpenAI" /> <img src="https://img.shields.io/badge/Anthropic-191919?style=flat&logo=anthropic&logoColor=white" alt="Anthropic" /> <img src="https://img.shields.io/badge/Google_Gemini-4285F4?style=flat&logo=google&logoColor=white" alt="Google Gemini" /> <img src="https://img.shields.io/badge/DeepSeek-566AB2?style=flat" alt="DeepSeek" /> <img src="https://img.shields.io/badge/Azure-0078D4?style=flat&logo=microsoftazure&logoColor=white" alt="Azure" /> <img src="https://img.shields.io/badge/Ollama-000000?style=flat" alt="Ollama" /> <img src="https://img.shields.io/badge/Groq-F05237?style=flat" alt="Groq" /> <img src="https://img.shields.io/badge/Mistral-FF7000?style=flat" alt="Mistral" /> <img src="https://img.shields.io/badge/Cerebras-F05A28?style=flat" alt="Cerebras" /> <img src="https://img.shields.io/badge/Cohere-39594D?style=flat" alt="Cohere" /> <img src="https://img.shields.io/badge/OpenRouter-6467F2?style=flat" alt="OpenRouter" /> <img src="https://img.shields.io/badge/Perplexity-20808D?style=flat" alt="Perplexity" /> <img src="https://img.shields.io/badge/Fireworks-FF6B35?style=flat" alt="Fireworks" /> <img src="https://img.shields.io/badge/AWS_Bedrock-FF9900?style=flat&logo=amazonaws&logoColor=white" alt="AWS Bedrock" /> <img src="https://img.shields.io/badge/xAI_Grok-000000?style=flat" alt="xAI Grok" /> <img src="https://img.shields.io/badge/Vertex_AI-4285F4?style=flat&logo=googlecloud&logoColor=white" alt="Vertex AI" /> <img src="https://img.shields.io/badge/HuggingFace-FFD21E?style=flat&logo=huggingface&logoColor=black" alt="HuggingFace" /> <img src="https://img.shields.io/badge/Together_AI-000000?style=flat" alt="Together AI" /> <img src="https://img.shields.io/badge/Databricks-FF3621?style=flat&logo=databricks&logoColor=white" alt="Databricks" /> <img src="https://img.shields.io/badge/Replicate-262626?style=flat" alt="Replicate" /> <img 
src="https://img.shields.io/badge/Cloudflare-F38020?style=flat&logo=cloudflare&logoColor=white" alt="Cloudflare" /> </p>

**All 24 providers with examples:**

| Provider | Example |
|---|---|
| OpenAI | Example |
| Anthropic | Example |
| Google Gemini | Example |
| Ollama | Example |
| Groq | Example |
| DeepSeek | Example |
| xAI Grok | Example |
| Mistral | Example |
| Cohere | Example |
| Perplexity | Example |
| Fireworks | Example |
| Together AI | Example |
| OpenRouter | Example |
| HuggingFace | Example |
| Azure OpenAI | Example |
| AWS Bedrock | Example |
| Google Vertex | Example |
| Databricks | Example |
| Cloudflare | Example |
| AI21 | Example |
| Replicate | Example |
| SageMaker | Example |
| Moonshot | Example |
| vLLM | Example |
🌟 Why PraisonAI?
| Feature | How | |
|---|---|---|
| 🔌 | MCP Protocol — stdio, HTTP, WebSocket, SSE | tools=MCP("npx ...") |
| 🧠 | Planning Mode — plan → execute → reason | planning=True |
| 🔍 | Deep Research — multi-step autonomous research | Docs |
| 🤖 | External Agents — orchestrate Claude Code, Gemini CLI, Codex | Docs |
| 🔄 | Agent Handoffs — seamless conversation passing | handoff=True |
| 🛡️ | Guardrails — input/output validation | Docs |
| 🌐 | Web Search + Fetch — native browsing | web_search=True |
| 🪞 | Self Reflection — agent reviews its own output | Docs |
| 🔀 | Workflow Patterns — route, parallel, loop, repeat | Docs |
| 🧠 | Memory (zero deps) — works out of the box | memory=True |
| Feature | How | |
|---|---|---|
| 💡 | Prompt Caching — reduce latency + cost | prompt_caching=True |
| 💾 | Sessions + Auto-Save — persistent state across restarts | auto_save="my-project" |
| 💭 | Thinking Budgets — control reasoning depth | thinking_budget=1024 |
| 📚 | RAG + Quality-Based RAG — auto quality scoring retrieval | Docs |
| 📊 | Model Router — auto-routes to cheapest capable model | Docs |
| 🧊 | Shadow Git Checkpoints — auto-rollback on failure | Docs |
| 📡 | A2A Protocol — agent-to-agent interop | Docs |
| 📏 | Context Compaction — never hit token limits | Docs |
| 📡 | Telemetry — OpenTelemetry traces, spans, metrics | Docs |
| 📜 | Policy Engine — declarative agent behavior control | Docs |
| 🔄 | Background Tasks — fire-and-forget agents | Docs |
| 🔁 | Doom Loop Detection — auto-recovery from stuck agents | Docs |
| 🕸️ | Graph Memory — Neo4j-style relationship tracking | Docs |
| 🏖️ | Sandbox Execution — isolated code execution | Docs |
| 🖥️ | Bot Gateway — multi-agent routing across channels | Docs |
🚀 Quick Start
Get started with PraisonAI in under 1 minute:
```bash
# Install
pip install praisonaiagents

# Set API key
export OPENAI_API_KEY=your_key_here

# Create a simple agent
python -c "from praisonaiagents import Agent; Agent(instructions='You are a helpful AI assistant').start('Write a haiku about AI')"
```
Next Steps: Single Agent Example | Multi Agents | Full Docs
📦 Installation
Python SDK
A lightweight package dedicated to coding:

```bash
pip install praisonaiagents
```

For the full framework with CLI support:

```bash
pip install praisonai
```

🦞 PraisonAI Claw — full UI with bots, memory, knowledge, and gateway:

```bash
pip install "praisonai[claw]"
praisonai claw
```

🔗 PraisonAI Flow — Langflow visual flow builder:

```bash
pip install "praisonai[flow]"
praisonai flow
```

🤖 PraisonAI UI — clean chat interface:

```bash
pip install "praisonai[ui]"
praisonai ui
```
JavaScript SDK
```bash
npm install praisonai
```
📘 Using Python Code
1. Single Agent
```python
from praisonaiagents import Agent

agent = Agent(instructions="You are a helpful AI assistant")
agent.start("Write a movie script about a robot on Mars")
```
2. Multi Agents
```python
from praisonaiagents import Agent, Agents

research_agent = Agent(instructions="Research about AI")
summarise_agent = Agent(instructions="Summarise research agent's findings")
agents = Agents(agents=[research_agent, summarise_agent])
agents.start()
```
3. MCP (Model Context Protocol)
```python
from praisonaiagents import Agent, MCP

# stdio - Local NPX/Python servers
agent = Agent(tools=MCP("npx @modelcontextprotocol/server-memory"))

# Streamable HTTP - Production servers
agent = Agent(tools=MCP("https://api.example.com/mcp"))

# WebSocket - Real-time bidirectional
agent = Agent(tools=MCP("wss://api.example.com/mcp", auth_token="token"))

# With environment variables
agent = Agent(
    tools=MCP(
        command="npx",
        args=["-y", "@modelcontextprotocol/server-brave-search"],
        env={"BRAVE_API_KEY": "your-key"}
    )
)
```
📖 Full MCP docs — stdio, HTTP, WebSocket, SSE transports
4. Custom Tools
```python
from praisonaiagents import Agent, tool

@tool
def search(query: str) -> str:
    """Search the web for information."""
    return f"Results for: {query}"

@tool
def calculate(expression: str) -> float:
    """Evaluate a math expression."""
    return eval(expression)  # note: eval runs arbitrary code; restrict input in production

agent = Agent(
    instructions="You are a helpful assistant",
    tools=[search, calculate]
)
agent.start("Search for AI news and calculate 15*4")
```
📖 Full tools docs — BaseTool, tool packages, 100+ built-in tools
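The `eval`-based calculator above will execute any Python expression it is handed, not just arithmetic. A safer sketch using the standard-library `ast` module (the `safe_calculate` helper is hypothetical, not part of PraisonAI):

```python
import ast
import operator

# Whitelist of permitted arithmetic operations.
_OPS = {
    ast.Add: operator.add, ast.Sub: operator.sub,
    ast.Mult: operator.mul, ast.Div: operator.truediv,
    ast.Pow: operator.pow, ast.USub: operator.neg,
}

def safe_calculate(expression: str) -> float:
    """Evaluate a basic arithmetic expression without eval()."""
    def _eval(node):
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.left), _eval(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.operand))
        raise ValueError(f"unsupported expression: {expression!r}")
    return _eval(ast.parse(expression, mode="eval").body)

print(safe_calculate("15*4"))  # → 60
```

Anything outside the whitelist (function calls, attribute access, imports) raises `ValueError` instead of executing.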
5. Persistence (Databases)
```python
from praisonaiagents import Agent, db

agent = Agent(
    name="Assistant",
    db=db(database_url="postgresql://localhost/mydb"),
    session_id="my-session"
)
agent.chat("Hello!")  # Auto-persists messages, runs, traces
```
📖 Full persistence docs — PostgreSQL, MySQL, SQLite, MongoDB, Redis, and 20+ more
6. PraisonAI Claw 🦞 (Dashboard UI)
Connect your AI agents to Telegram, Discord, Slack, WhatsApp and more — all from a single command.
```bash
pip install "praisonai[claw]"
praisonai claw
```
Open http://localhost:8082 — the dashboard comes with 13 built-in pages: Chat, Agents, Memory, Knowledge, Channels, Guardrails, Cron, and more. Add messaging channels directly from the UI.
📖 Full Claw docs — platform tokens, CLI options, Docker, and YAML agent mode
7. Langflow Integration 🔗 (Visual Flow Builder)
Build multi-agent workflows visually with drag-and-drop components in Langflow.
```bash
pip install "praisonai[flow]"
praisonai flow
```
Open http://localhost:7861 — use the Agent and Agent Team components to create sequential or parallel workflows. Connect Chat Input → Agent Team → Chat Output for instant multi-agent pipelines.
📖 Full Flow docs — visual agent building, component reference, and deployment
8. PraisonAI UI 🤖 (Clean Chat)
Lightweight chat interface for your AI agents.
```bash
pip install "praisonai[ui]"
praisonai ui
```
📄 Using YAML (No Code)
Example 1: Two Agents Working Together
Create agents.yaml:
```yaml
framework: praisonai
topic: "Write a blog post about AI"
agents:
  researcher:
    role: Research Analyst
    goal: Research AI trends and gather information
    instructions: "Find accurate information about AI trends"
  writer:
    role: Content Writer
    goal: Write engaging blog posts
    instructions: "Write clear, engaging content based on research"
```
Run with:

```bash
praisonai agents.yaml
```

The agents automatically work together sequentially.
Example 2: Agent with Custom Tool
Create two files in the same folder:
agents.yaml:
```yaml
framework: praisonai
topic: "Calculate the sum of 25 and 15"
agents:
  calculator_agent:
    role: Calculator
    goal: Perform calculations
    instructions: "Use the add_numbers tool to help with calculations"
    tools:
      - add_numbers
```
tools.py:
```python
def add_numbers(a: float, b: float) -> float:
    """
    Add two numbers together.

    Args:
        a: First number
        b: Second number

    Returns:
        The sum of a and b
    """
    return a + b
```
Run with:

```bash
praisonai agents.yaml
```
💡 Tips:
- Use the function name (e.g., `add_numbers`) in the tools list, not the file name
- Tools in `tools.py` are automatically discovered
- The function's docstring helps the AI understand how to use it
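The docstring matters because tool-calling frameworks typically turn a function's signature and docstring into the schema the model sees when deciding how to call it. A minimal sketch of that idea using `inspect` (the `tool_schema` helper is hypothetical, not PraisonAI's actual discovery code):

```python
import inspect

def add_numbers(a: float, b: float) -> float:
    """Add two numbers together."""
    return a + b

# Sketch of tool discovery: the name, parameter annotations, and
# docstring become the description the LLM uses to pick arguments.
def tool_schema(fn):
    sig = inspect.signature(fn)
    return {
        "name": fn.__name__,
        "description": inspect.getdoc(fn),
        "parameters": {
            name: getattr(p.annotation, "__name__", "any")
            for name, p in sig.parameters.items()
        },
    }

print(tool_schema(add_numbers))
# {'name': 'add_numbers', 'description': 'Add two numbers together.',
#  'parameters': {'a': 'float', 'b': 'float'}}
```

A function with no docstring would leave the model guessing at the tool's purpose, which is why the tips above recommend writing one.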
🎯 CLI Quick Reference
| Category | Commands |
|---|---|
| Execution | praisonai, --auto, --interactive, --chat |
| Research | research, --query-rewrite, --deep-research |
| Planning | --planning, --planning-tools, --planning-reasoning |
| Workflows | workflow run, workflow list, workflow auto |
| Memory | memory show, memory add, memory search, memory clear |
| Knowledge | knowledge add, knowledge query, knowledge list |
| Sessions | session list, session resume, session delete |
| Tools | tools list, tools info, tools search |
| MCP | mcp list, mcp create, mcp enable |
| Development | commit, docs, checkpoint, hooks |
| Scheduling | schedule start, schedule list, schedule stop |
✨ Key Features
**🤖 Core Agents**

| Feature | Code | Docs |
|---|---|---|
| Single Agent | Example | 📖 |
| Multi Agents | Example | 📖 |
| Auto Agents | Example | 📖 |
| Self Reflection AI Agents | Example | 📖 |
| Reasoning AI Agents | Example | 📖 |
| Multi Modal AI Agents | Example | 📖 |
| Feature | Code | Docs |
|---|---|---|
| Simple Workflow | Example | 📖 |
| Workflow with Agents | Example | 📖 |
| Agentic Routing (route()) | Example | 📖 |
| Parallel Execution (parallel()) | Example | 📖 |
| Loop over List/CSV (loop()) | Example | 📖 |
| Evaluator-Optimizer (repeat()) | Example | 📖 |
| Conditional Steps | Example | 📖 |
| Workflow Branching | Example | 📖 |
| Workflow Early Stop | Example | 📖 |
| Workflow Checkpoints | Example | 📖 |
| Feature | Code | Docs |
|---|---|---|
| Code Interpreter Agents | Example | 📖 |
| AI Code Editing Tools | Example | 📖 |
| External Agents (All) | Example | 📖 |
| Claude Code CLI | Example | 📖 |
| Gemini CLI | Example | 📖 |
| Codex CLI | Example | 📖 |
| Cursor CLI | Example | 📖 |
| Feature | Code | Docs |
|---|---|---|
| Memory (Short & Long Term) | Example | 📖 |
| File-Based Memory | Example | 📖 |
| Claude Memory Tool | Example | 📖 |
| Add Custom Knowledge | Example | 📖 |
| RAG Agents | Example | 📖 |
| Chat with PDF Agents | Example | 📖 |
| Data Readers (PDF, DOCX, etc.) | CLI | 📖 |
| Vector Store Selection | CLI | 📖 |
| Retrieval Strategies | CLI | 📖 |
| Rerankers | CLI | 📖 |
| Index Types (Vector/Keyword/Hybrid) | CLI | 📖 |
| Query Engines (Sub-Question, etc.) | CLI | 📖 |
| Feature | Code | Docs |
|---|---|---|
| Deep Research Agents | Example | 📖 |
| Query Rewriter Agent | Example | 📖 |
| Native Web Search | Example | 📖 |
| Built-in Search Tools | Example | 📖 |
| Unified Web Search | Example | 📖 |
| Web Fetch (Anthropic) | Example | 📖 |
| Feature | Code | Docs |
|---|---|---|
| Planning Mode | Example | 📖 |
| Planning Tools | Example | 📖 |
| Planning Reasoning | Example | 📖 |
| Prompt Chaining | Example | 📖 |
| Evaluator Optimiser | Example | 📖 |
| Orchestrator Workers | Example | 📖 |
| Feature | Code | Docs |
|---|---|---|
| Data Analyst Agent | Example | 📖 |
| Finance Agent | Example | 📖 |
| Shopping Agent | Example | 📖 |
| Recommendation Agent | Example | 📖 |
| Wikipedia Agent | Example | 📖 |
| Programming Agent | Example | 📖 |
| Math Agents | Example | 📖 |
| Markdown Agent | Example | 📖 |
| Prompt Expander Agent | Example | 📖 |
| Feature | Code | Docs |
|---|---|---|
| Image Generation Agent | Example | 📖 |
| Image to Text Agent | Example | 📖 |
| Video Agent | Example | 📖 |
| Camera Integration | Example | 📖 |
| Feature | Code | Docs |
|---|---|---|
| MCP Transports | Example | 📖 |
| WebSocket MCP | Example | 📖 |
| MCP Security | Example | 📖 |
| MCP Resumability | Example | 📖 |
| MCP Config Management | Docs | 📖 |
| LangChain Integrated Agents | Example | 📖 |
| Feature | Code | Docs |
|---|---|---|
| Guardrails | Example | 📖 |
| Human Approval | Example | 📖 |
| Rules & Instructions | Docs | 📖 |
| Feature | Code | Docs |
|---|---|---|
| Async & Parallel Processing | Example | 📖 |
| Parallelisation | Example | 📖 |
| Repetitive Agents | Example | 📖 |
| Agent Handoffs | Example | 📖 |
| Stateful Agents | Example | 📖 |
| Autonomous Workflow | Example | 📖 |
| Structured Output Agents | Example | 📖 |
| Model Router | Example | 📖 |
| Prompt Caching | Example | 📖 |
| Fast Context | Example | 📖 |
| Feature | Code | Docs |
|---|---|---|
| 100+ Custom Tools | Example | 📖 |
| YAML Configuration | Example | 📖 |
| 100+ LLM Support | Example | 📖 |
| Callback Agents | Example | 📖 |
| Hooks | Example | 📖 |
| Middleware System | Example | 📖 |
| Configurable Model | Example | 📖 |
| Rate Limiter | Example | 📖 |
| Injected Tool State | Example | 📖 |
| Shadow Git Checkpoints | Example | 📖 |
| Background Tasks | Example | 📖 |
| Policy Engine | Example | 📖 |
| Thinking Budgets | Example | 📖 |
| Output Styles | Example | 📖 |
| Context Compaction | Example | 📖 |
| Feature | Code | Docs |
|---|---|---|
| Sessions Management | Example | 📖 |
| Auto-Save Sessions | Docs | 📖 |
| History in Context | Docs | 📖 |
| Telemetry | Example | 📖 |
| Project Docs (.praison/docs/) | Docs | 📖 |
| AI Commit Messages | Docs | 📖 |
| @Mentions in Prompts | Docs | 📖 |
| Feature | Code | Docs |
|---|---|---|
| Slash Commands | Example | 📖 |
| Autonomy Modes | Example | 📖 |
| Cost Tracking | Example | 📖 |
| Repository Map | Example | 📖 |
| Interactive TUI | Example | 📖 |
| Git Integration | Example | 📖 |
| Sandbox Execution | Example | 📖 |
| CLI Compare | Example | 📖 |
| Profile/Benchmark | Docs | 📖 |
| Auto Mode | Docs | 📖 |
| Init | Docs | 📖 |
| File Input | Docs | 📖 |
| Final Agent | Docs | 📖 |
| Max Tokens | Docs | 📖 |
| Feature | Code | Docs |
|---|---|---|
| Accuracy Evaluation | Example | 📖 |
| Performance Evaluation | Example | 📖 |
| Reliability Evaluation | Example | 📖 |
| Criteria Evaluation | Example | 📖 |
| Feature | Code | Docs |
|---|---|---|
| Skills Management | Example | 📖 |
| Custom Skills | Example | 📖 |
| Feature | Code | Docs |
|---|---|---|
| Agent Scheduler | Example | 📖 |
💻 Using JavaScript Code
```bash
npm install praisonai
export OPENAI_API_KEY=xxxxxxxxxxxxxxxxxxxxxx
```

```javascript
const { Agent } = require('praisonai');

const agent = new Agent({ instructions: 'You are a helpful AI assistant' });
agent.start('Write a movie script about a robot on Mars');
```
⭐ Star History
🎓 Video Tutorials
Learn PraisonAI through our comprehensive video series:
👥 Contributing
We welcome contributions! Fork the repo, create a branch, and submit a PR → Contributing Guide.
❓ FAQ & Troubleshooting
<details> <summary><strong>ModuleNotFoundError: No module named 'praisonaiagents'</strong></summary>

Install the package:

```bash
pip install praisonaiagents
```

Ensure your API key is set:

```bash
export OPENAI_API_KEY=your_key_here
```

For other providers, see Models docs.
</details>

<details> <summary><strong>How do I use a local model (Ollama)?</strong></summary>

```bash
# Start Ollama server first
ollama serve

# Set environment variable
export OPENAI_BASE_URL=http://localhost:11434/v1
```
See Models docs for more details.
</details>

<details> <summary><strong>How do I persist conversations to a database?</strong></summary>

Use the db parameter:

```python
from praisonaiagents import Agent, db

agent = Agent(
    name="Assistant",
    db=db(database_url="postgresql://localhost/mydb"),
    session_id="my-session"
)
```
See Persistence docs for supported databases.
</details>

<details> <summary><strong>How do I enable agent memory?</strong></summary>

```python
from praisonaiagents import Agent

agent = Agent(
    name="Assistant",
    memory=True,  # Enables file-based memory (no extra deps!)
    user_id="user123"
)
```
See Memory docs for more options.
</details>

<details> <summary><strong>How do I run multiple agents together?</strong></summary>

```python
from praisonaiagents import Agent, Agents

agent1 = Agent(instructions="Research topics")
agent2 = Agent(instructions="Summarize findings")
agents = Agents(agents=[agent1, agent2])
agents.start()
```
See Agents docs for more examples.
</details>

<details> <summary><strong>How do I use MCP tools?</strong></summary>

```python
from praisonaiagents import Agent, MCP

agent = Agent(
    tools=MCP("npx @modelcontextprotocol/server-memory")
)
```
See MCP docs for all transport options.
</details>

Getting Help
<div align="center"> <p><strong>Made with ❤️ by the PraisonAI Team</strong></p> <p> <a href="https://docs.praison.ai">📚 Documentation</a> • <a href="https://github.com/MervinPraison/PraisonAI">GitHub</a> • <a href="https://youtube.com/@MervinPraison">▶️ YouTube</a> • <a href="https://x.com/MervinPraison">𝕏 X</a> • <a href="https://linkedin.com/in/mervinpraison">💼 LinkedIn</a> </p> </div>
FAQ
What is PraisonAI?
An AI agents framework with self-reflection and MCP support.
Related Skills
Claude API
by anthropics
For projects integrating the Claude API, Anthropic SDK, or Agent SDK: automatically detects the project language and provides matching examples and default configuration to build LLM applications quickly.
✎ To bring Claude capabilities into an app or agent, claude-api is quick to start with and compatible with the Anthropic and Agent SDKs, with a clear, low-friction integration path.
Prompt Engineering Expert
by alirezarezvani
Covers prompt optimisation, few-shot design, structured output, RAG evaluation, and agent workflow orchestration; suited to analysing token costs, evaluating LLM output quality, and building practical AI agent systems.
✎ Ties prompt optimisation, LLM evaluation, RAG, and agent design into one methodology, for anyone who wants to systematically improve their AI development efficiency.
Agent Workflow Design
by alirezarezvani
For production-grade multi-agent orchestration: covers five workflow patterns (sequential, parallel, hierarchical, event-driven, consensus) along with handoffs, state management, fault-tolerant retries, context budgeting, and cost optimisation; suited to building complex AI collaboration systems.
✎ Unifies multi-agent workflow design, orchestration, and automation so complex workflows land more reliably, for teams that want strong control.
Related MCP Servers
Sequential Thinking
Editor's Pick · by Anthropic
Sequential Thinking is a reference server that lets AI solve complex problems through dynamic chains of thought.
✎ This server demonstrates how to have Claude reason step by step, much as a human would; good for developers learning chain-of-thought implementations in MCP. Note that it is only a reference example, not something to drop straight into production.
Knowledge Graph Memory
Editor's Pick · by Anthropic
Memory is a persistent memory system built on a local knowledge graph that lets AI retain long-term context.
✎ Fills the "can't remember" gap for AI and agents by persisting long-term context in a local knowledge graph; conversations get smarter over time and the data stays under your control.
ai.klavis/strata
Editor's Pick · by klavis-ai
Strata is an MCP server that lets AI agents dynamically manage thousands of tool connectors.
✎ Solves the context-window blowup that comes with giving an AI too many tools; good for developers building complex workflows. As an emerging solution, though, its ecosystem maturity still needs time to prove itself.