Agent Creation
pydantic-ai-agent-creation
by anderskev
Create PydanticAI agents with type-safe dependencies, structured outputs, and proper configuration. Use when building AI agents, creating chat systems, or integrating LLMs with Pydantic validation.
Installation

```shell
claude skill add --url github.com/openclaw/skills/tree/main/skills/anderskev/pydantic-ai-agent-creation
```

Documentation
Creating PydanticAI Agents
Quick Start
```python
from pydantic_ai import Agent

# Minimal agent (text output)
agent = Agent('openai:gpt-4o')
result = agent.run_sync('Hello!')
print(result.output)  # str
```
Model Selection
Model strings follow the `provider:model-name` format:

```python
# OpenAI
agent = Agent('openai:gpt-4o')
agent = Agent('openai:gpt-4o-mini')

# Anthropic
agent = Agent('anthropic:claude-sonnet-4-5')
agent = Agent('anthropic:claude-haiku-4-5')

# Google
agent = Agent('google-gla:gemini-2.0-flash')
agent = Agent('google-vertex:gemini-2.0-flash')

# Others: groq:, mistral:, cohere:, bedrock:, etc.
```
Structured Outputs
Use Pydantic models for validated, typed responses:
```python
from pydantic import BaseModel
from pydantic_ai import Agent

class CityInfo(BaseModel):
    city: str
    country: str
    population: int

agent = Agent('openai:gpt-4o', output_type=CityInfo)
result = agent.run_sync('Tell me about Paris')
print(result.output.city)        # "Paris"
print(result.output.population)  # int, validated
```
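The validation here is plain Pydantic under the hood: the model's JSON response is parsed into `CityInfo`, and a parse failure counts against the agent's retry budget. A minimal sketch of the guarantee this gives, using Pydantic alone (no agent or API key needed):

```python
from pydantic import BaseModel, ValidationError

class CityInfo(BaseModel):
    city: str
    country: str
    population: int

# Pydantic's lax mode coerces numeric strings, so '2102650' becomes an int
info = CityInfo(city='Paris', country='France', population='2102650')
print(info.population)  # 2102650

# Non-numeric payloads raise ValidationError; in an agent run this is
# what triggers a validation retry
try:
    CityInfo(city='Paris', country='France', population='unknown')
except ValidationError:
    print('validation failed')
```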
Agent Configuration
```python
from pydantic_ai import Agent
from pydantic_ai.settings import ModelSettings

# MyOutput and MyDeps are placeholder types defined elsewhere
agent = Agent(
    'openai:gpt-4o',
    output_type=MyOutput,             # Structured output type
    deps_type=MyDeps,                 # Dependency injection type
    instructions='You are helpful.',  # Static instructions
    retries=2,                        # Retry attempts for validation
    name='my-agent',                  # For logging/tracing
    model_settings=ModelSettings(     # Provider settings
        temperature=0.7,
        max_tokens=1000,
    ),
    end_strategy='early',             # How to handle tool calls with results
)
```
Running Agents
Three execution methods:
```python
# Async (preferred)
result = await agent.run('prompt', deps=my_deps)

# Sync (convenience)
result = agent.run_sync('prompt', deps=my_deps)

# Streaming
async with agent.run_stream('prompt') as response:
    async for chunk in response.stream_output():
        print(chunk, end='')
```
Instructions vs System Prompts
```python
from pydantic_ai import Agent, RunContext

# Instructions: concatenated, for agent behavior
agent = Agent(
    'openai:gpt-4o',
    instructions='You are a helpful assistant. Be concise.',
)

# Dynamic instructions via decorator (MyDeps is a placeholder deps type)
@agent.instructions
def add_context(ctx: RunContext[MyDeps]) -> str:
    return f"User ID: {ctx.deps.user_id}"

# System prompts: static, for model context
agent = Agent(
    'openai:gpt-4o',
    system_prompt=['You are an expert.', 'Always cite sources.'],
)
```
Common Patterns
Parameterized Agent (Type-Safe)
```python
from dataclasses import dataclass
from pydantic_ai import Agent, RunContext

@dataclass
class Deps:
    api_key: str
    user_id: int

agent: Agent[Deps, str] = Agent(
    'openai:gpt-4o',
    deps_type=Deps,
)

# deps is now required and type-checked
result = agent.run_sync('Hello', deps=Deps(api_key='...', user_id=123))
```
No Dependencies (Satisfy Type Checker)
```python
# Option 1: Explicit type annotation
agent: Agent[None, str] = Agent('openai:gpt-4o')

# Option 2: Pass deps=None
result = agent.run_sync('Hello', deps=None)
```
Decision Framework
| Scenario | Configuration |
|---|---|
| Simple text responses | `Agent(model)` |
| Structured data extraction | `Agent(model, output_type=MyModel)` |
| Need external services | Add `deps_type=MyDeps` |
| Validation retries needed | Increase `retries=3` |
| Debugging/monitoring | Set `instrument=True` |
Related Skills
Claude API
by anthropics
For developers integrating the Claude API, Anthropic SDK, or Agent SDK: automatically detects the project language and provides matching examples and default configuration to quickly build LLM applications.
✎ To wire Claude capabilities into an app or agent, claude-api gets you started fast, is compatible with the Anthropic and Agent SDKs, and offers a clear, low-friction integration path.
Prompt Engineering Expert
by alirezarezvani
Covers prompt optimization, few-shot design, structured output, RAG evaluation, and agent workflow orchestration; suited to analyzing token costs, assessing LLM output quality, and building production-ready AI agent systems.
✎ Ties prompt optimization, LLM evaluation, RAG, and agent design into one methodology; good for anyone looking to systematically improve their AI development workflow.
Agent Workflow Design
by alirezarezvani
Aimed at production-grade multi-agent orchestration: covers five workflow designs (sequential, parallel, hierarchical, event-driven, consensus) along with handoffs, state management, fault-tolerant retries, context budgeting, and cost optimization. Suited to building complex AI collaboration systems.
✎ Unifies multi-agent workflow design, orchestration, and automation so complex workflows land more reliably; good for teams that want tight control.
Related MCP Servers
Sequential Thinking
Editor's Pick · by Anthropic
Sequential Thinking is a reference server that lets AI solve complex problems through dynamic chains of thought.
✎ This server demonstrates how to make Claude reason step by step like a human; good for developers learning MCP's chain-of-thought implementation. Note that it is only a reference example, not meant for production use.
Knowledge Graph Memory
Editor's Pick · by Anthropic
Memory is a persistent memory system built on a local knowledge graph that lets AI retain long-term context.
✎ Fills the "can't remember" gap for AI and agents: long-term context accumulates in a local knowledge graph, making multi-turn conversations smarter while keeping data under your control.
PraisonAI
Editor's Pick · by mervinpraison
PraisonAI is a low-code AI agent framework with self-reflection and multi-LLM support.
✎ If you need to quickly assemble an AI agent team that runs 24/7 on complex tasks (such as automated research or code generation), PraisonAI's low-code design and multi-platform integrations (e.g. Telegram) make it very quick to pick up. As an unofficial project, though, its ecosystem is less mature than mainstream frameworks like LangChain; best suited to developers willing to experiment.