Avoiding Agent Pitfalls

pydantic-ai-common-pitfalls

by anderskev

Avoid common mistakes and debug issues in PydanticAI agents. Use when encountering errors, unexpected behavior, or when reviewing agent implementations.

3.7k · AI & Agents · Unscanned · March 23, 2026

Install

claude skill add --url github.com/openclaw/skills/tree/main/skills/anderskev/pydantic-ai-common-pitfalls

Documentation

PydanticAI Common Pitfalls and Debugging

Tool Decorator Errors

Wrong: RunContext in tool_plain

python
# ERROR: RunContext not allowed in tool_plain
@agent.tool_plain
async def bad_tool(ctx: RunContext[MyDeps]) -> str:
    return "oops"
# UserError: RunContext annotations can only be used with tools that take context

Fix: Use @agent.tool if you need context:

python
@agent.tool
async def good_tool(ctx: RunContext[MyDeps]) -> str:
    return "works"

Wrong: Missing RunContext in tool

python
# ERROR: First param must be RunContext
@agent.tool
def bad_tool(user_id: int) -> str:
    return "oops"
# UserError: First parameter of tools that take context must be annotated with RunContext[...]

Fix: Add RunContext as first parameter:

python
@agent.tool
def good_tool(ctx: RunContext[MyDeps], user_id: int) -> str:
    return "works"

Wrong: RunContext not first

python
# ERROR: RunContext must be first parameter
@agent.tool
def bad_tool(user_id: int, ctx: RunContext[MyDeps]) -> str:
    return "oops"

Fix: RunContext must always be the first parameter.
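A related, easy-to-miss point: PydanticAI builds the tool schema from the function signature and its docstring, extracting per-parameter descriptions from common docstring styles (Google, NumPy, Sphinx). A minimal sketch (ctx.deps.db.get_order is a placeholder):

python
@agent.tool
async def lookup_order(ctx: RunContext[MyDeps], order_id: int) -> str:
    """Look up an order by its ID.

    Args:
        order_id: Numeric ID of the order to fetch.
    """
    return await ctx.deps.db.get_order(order_id)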

Valid Patterns (Not Errors)

Raw Function Tool Registration

The following pattern IS valid and supported by pydantic-ai:

python
from pydantic_ai import Agent, RunContext

async def search_db(ctx: RunContext[MyDeps], query: str) -> list[dict]:
    """Search the database."""
    return await ctx.deps.db.search(query)

async def get_user(ctx: RunContext[MyDeps], user_id: int) -> dict:
    """Get user by ID."""
    return await ctx.deps.db.get_user(user_id)

# Valid: Pass raw functions to Agent(tools=[...])
agent = Agent(
    'openai:gpt-4o',
    deps_type=MyDeps,
    tools=[search_db, get_user]  # RunContext detected from signature
)

Why this works: PydanticAI inspects function signatures. If the first parameter is RunContext[T], it's treated as a context-aware tool. No decorator required.

Reference: https://ai.pydantic.dev/agents/#registering-tools-via-the-tools-argument

Do NOT flag code that passes functions with RunContext signatures to Agent(tools=[...]). This is equivalent to using @agent.tool and is explicitly documented.
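Functions without a RunContext first parameter can be mixed into the same tools list; PydanticAI classifies each one by its signature. A minimal sketch reusing the functions above (add_numbers is a made-up plain tool):

python
def add_numbers(a: int, b: int) -> int:
    """Add two numbers."""
    return a + b

# Context-aware and plain functions can share one tools list;
# each is classified by inspecting its signature
agent = Agent(
    'openai:gpt-4o',
    deps_type=MyDeps,
    tools=[search_db, get_user, add_numbers],
)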

Dependency Type Mismatches

Wrong: Missing deps at runtime

python
agent = Agent('openai:gpt-4o', deps_type=MyDeps)

# ERROR: deps required but not provided
result = agent.run_sync('Hello')  # Missing deps!

Fix: Always provide deps when deps_type is set:

python
result = agent.run_sync('Hello', deps=MyDeps(...))
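End to end, a minimal runnable sketch (api_key is a stand-in dependency):

python
from dataclasses import dataclass

from pydantic_ai import Agent

@dataclass
class MyDeps:
    api_key: str

agent = Agent('openai:gpt-4o', deps_type=MyDeps)
result = agent.run_sync('Hello', deps=MyDeps(api_key='...'))
print(result.output)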

Wrong: Wrong deps type

python
@dataclass
class AppDeps:
    db: Database

@dataclass
class WrongDeps:
    api: ApiClient

agent = Agent('openai:gpt-4o', deps_type=AppDeps)

# Type error: WrongDeps != AppDeps
result = agent.run_sync('Hello', deps=WrongDeps(...))
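Fix: pass an instance of the declared deps_type. PydanticAI relies on the type annotations here, so the mismatch is caught by static type checkers (mypy, pyright) and otherwise tends to surface later as an AttributeError inside a tool:

python
# Matches deps_type=AppDeps (Database() is a placeholder)
result = agent.run_sync('Hello', deps=AppDeps(db=Database()))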

Output Type Issues

Pydantic validation fails

python
class Response(BaseModel):
    count: int
    items: list[str]

agent = Agent('openai:gpt-4o', output_type=Response)
result = agent.run_sync('List items')
# May fail if LLM returns wrong structure

Fix: Increase retries or improve prompt:

python
agent = Agent(
    'openai:gpt-4o',
    output_type=Response,
    retries=3,  # More attempts
    instructions='Return JSON with count (int) and items (list of strings).'
)
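If prompting alone is not enough, an output validator can reject a structurally valid but wrong answer and feed the error back to the model. A sketch using @agent.output_validator and ModelRetry (older releases call this result_validator; verify against your installed version):

python
from pydantic_ai import ModelRetry

@agent.output_validator
def check_response(output: Response) -> Response:
    # The ModelRetry message is sent back to the model for another attempt
    if output.count != len(output.items):
        raise ModelRetry('count must equal the number of items')
    return output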

Complex nested types

python
# May cause schema issues with some models
class Complex(BaseModel):
    nested: dict[str, list[tuple[int, str]]]

Fix: Simplify or use intermediate models:

python
class Item(BaseModel):
    id: int
    name: str

class Simple(BaseModel):
    items: list[Item]

Async vs Sync Mistakes

Wrong: Calling async in sync context

python
# ERROR: Can't await in sync function
def handler():
    result = await agent.run('Hello')  # SyntaxError!

Fix: Use run_sync or make handler async:

python
def handler():
    result = agent.run_sync('Hello')

# Or
async def handler():
    result = await agent.run('Hello')
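If the program's entry point is synchronous but you want the async API, run the coroutine with asyncio.run:

python
import asyncio

async def handler():
    result = await agent.run('Hello')
    return result.output

if __name__ == '__main__':
    print(asyncio.run(handler()))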

Wrong: Blocking in async tools

python
@agent.tool
async def slow_tool(ctx: RunContext[Deps]) -> str:
    time.sleep(5)  # WRONG: Blocks event loop!
    return "done"

Fix: Use async I/O:

python
@agent.tool
async def slow_tool(ctx: RunContext[Deps]) -> str:
    await asyncio.sleep(5)  # Correct
    return "done"

Model Configuration Errors

Missing API key

python
# ERROR: OPENAI_API_KEY not set
agent = Agent('openai:gpt-4o')
result = agent.run_sync('Hello')
# ModelAPIError: Authentication failed

Fix: Set environment variable or use defer_model_check:

python
# For testing: defer the provider check, then swap in TestModel
from pydantic_ai.models.test import TestModel

agent = Agent('openai:gpt-4o', defer_model_check=True)
with agent.override(model=TestModel()):
    result = agent.run_sync('Hello')
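In test suites it also helps to block accidental real requests globally; pydantic-ai exposes a module-level flag for this (models.ALLOW_MODEL_REQUESTS; verify against your installed version):

python
from pydantic_ai import models

models.ALLOW_MODEL_REQUESTS = False  # any real API call now raises

with agent.override(model=TestModel()):
    result = agent.run_sync('Hello')  # served by TestModel, no network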

Invalid model string

python
# ERROR: Unknown provider
agent = Agent('unknown:model')
# ValueError: Unknown model provider

Fix: Use valid provider:model format.
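A few known-good examples of the provider:model format (exact model names depend on your pydantic-ai version and installed extras):

python
Agent('openai:gpt-4o')
Agent('anthropic:claude-3-5-sonnet-latest')
Agent('google-gla:gemini-1.5-flash')
Agent('groq:llama-3.3-70b-versatile')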

Streaming Issues

Wrong: Using result before stream completes

python
async with agent.run_stream('Hello') as response:
    # DON'T treat partial state as final: mid-stream, usage() and
    # all_messages() reflect only what has arrived so far
    output = await response.get_output()  # waits for the stream to complete

print(output)  # Complete result, safe to use after the context manager

Note: StreamedRunResult has no plain .output attribute; use await response.get_output() (get_data() in older releases) or one of the stream_*() iterators.

Wrong: Not iterating stream

python
async with agent.run_stream('Hello') as response:
    pass  # Never consumed!

# Stream was never read - output may be incomplete

Fix: Always consume the stream:

python
async with agent.run_stream('Hello') as response:
    async for chunk in response.stream_output():
        print(chunk, end='')

Tool Return Issues

Wrong: Returning non-serializable

python
@agent.tool_plain
def bad_return() -> object:
    return CustomObject()  # Can't serialize!

Fix: Return serializable types (str, dict, Pydantic model):

python
@agent.tool_plain
def good_return() -> dict:
    return {"key": "value"}

Debugging Tips

Enable tracing

python
import logfire
logfire.configure()
logfire.instrument_pydantic_ai()

# Or per-agent
agent = Agent('openai:gpt-4o', instrument=True)
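To instrument every agent in a process at once there is also a class-level switch (Agent.instrument_all() in current releases; verify against your version):

python
from pydantic_ai import Agent

Agent.instrument_all()  # turns on instrumentation for all agents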

Capture messages

python
from pydantic_ai import capture_run_messages

with capture_run_messages() as messages:
    result = agent.run_sync('Hello')

for msg in messages:
    print(type(msg).__name__, msg)
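This is most useful for post-mortems, since the captured messages survive a failed run. The documented idiom wraps the call and prints the history when retries are exhausted (UnexpectedModelBehavior is the exception pydantic-ai raises then):

python
from pydantic_ai import UnexpectedModelBehavior, capture_run_messages

with capture_run_messages() as messages:
    try:
        result = agent.run_sync('Hello')
    except UnexpectedModelBehavior as e:
        print('run failed:', e)
        for msg in messages:
            print(type(msg).__name__, msg)
        raise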

Check model responses

python
result = agent.run_sync('Hello')
print(result.all_messages())  # Full message history
print(result.all_messages()[-1])  # Last model response
print(result.usage())  # Token usage
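To see which tools the model actually called, filter the message parts (ModelResponse and ToolCallPart live in pydantic_ai.messages):

python
from pydantic_ai.messages import ModelResponse, ToolCallPart

for message in result.all_messages():
    if isinstance(message, ModelResponse):
        for part in message.parts:
            if isinstance(part, ToolCallPart):
                print(part.tool_name, part.args)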

Common Error Messages

| Error | Cause | Fix |
| --- | --- | --- |
| First parameter... RunContext | @agent.tool missing ctx | Add ctx: RunContext[...] |
| RunContext... only... context | @agent.tool_plain has ctx | Remove ctx or use @agent.tool |
| Unknown model provider | Invalid model string | Use a valid provider:model string |
| ModelAPIError | API auth/quota problem | Check API key and rate limits |
| RetryPromptPart in messages | Output validation failed | Check output_type, increase retries |
