Fact Checker

Universal

fact-checker

by daymade

Verifies factual statements in documents, such as technical specifications, statistics, and AI model parameters. It prioritizes official websites and authoritative sources, flags incorrect or outdated content, and generates correction suggestions after your confirmation.

If you worry that a technical document, AI spec sheet, or data write-up contains errors or omissions, use this skill to quickly verify facts and catch stale information. It combines web search with official sources to propose corrections, and waits for your confirmation before applying them.


Install

claude skill add --url github.com/daymade/claude-code-skills/tree/main/fact-checker

Documentation

Fact Checker

Verify factual claims in documents and propose corrections backed by authoritative sources.

When to use

Trigger when users request:

  • "Fact-check this document"
  • "Verify these AI model specifications"
  • "Check if this information is still accurate"
  • "Update outdated data in this file"
  • "Validate the claims in this section"

Workflow

Copy this checklist to track progress:

```
Fact-checking Progress:
- [ ] Step 1: Identify factual claims
- [ ] Step 2: Search authoritative sources
- [ ] Step 3: Compare claims against sources
- [ ] Step 4: Generate correction report
- [ ] Step 5: Apply corrections with user approval
```

Step 1: Identify factual claims

Scan the document for verifiable statements:

Target claim types:

  • Technical specifications (context windows, pricing, features)
  • Version numbers and release dates
  • Statistical data and metrics
  • API capabilities and limitations
  • Benchmark scores and performance data

Skip subjective content:

  • Opinions and recommendations
  • Explanatory prose
  • Tutorial instructions
  • Architectural discussions
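The claim/skip distinction above can be approximated mechanically. The sketch below is a heuristic first pass, not part of the skill itself: it flags sentences containing verifiable-looking tokens (version numbers, token counts, percentages, years), and the patterns are illustrative assumptions.

```python
import re

# Heuristic patterns for "verifiable-looking" tokens. These are
# illustrative assumptions; tune them for your documents.
CLAIM_PATTERNS = [
    r"\b\d+(\.\d+)?[KMB]\b",    # 200K, 1M, 2B (token counts, sizes)
    r"\bv?\d+\.\d+(\.\d+)?\b",  # version numbers like 4.5 or v1.2.3
    r"\b\d+(\.\d+)?%",          # percentages / benchmark scores
    r"\b(19|20)\d{2}\b",        # years
]

def find_candidate_claims(text: str) -> list[str]:
    """Return sentences that match at least one verifiable-fact pattern."""
    sentences = re.split(r"(?<=[.!?])\s+", text)
    return [s for s in sentences
            if any(re.search(p, s) for p in CLAIM_PATTERNS)]
```

A human (or model) still has to review the candidates; the regex pass only narrows the search space and will miss prose-only claims.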

Step 2: Search authoritative sources

For each claim, search official sources:

AI models:

  • Official announcement pages (anthropic.com/news, openai.com/index, blog.google)
  • API documentation (platform.claude.com/docs, platform.openai.com/docs)
  • Developer guides and release notes

Technical libraries:

  • Official documentation sites
  • GitHub repositories (releases, README)
  • Package registries (npm, PyPI, crates.io)

General claims:

  • Academic papers and research
  • Government statistics
  • Industry standards bodies

Search strategy:

  • Use model names + specification (e.g., "Claude Opus 4.5 context window")
  • Include current year for recent information
  • Verify from multiple sources when possible
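The search strategy above can be sketched as a small query builder. The function name and query shapes are assumptions for illustration, not a documented API:

```python
from datetime import date

def build_queries(subject: str, spec: str) -> list[str]:
    """Combine subject + specification, adding the current year for recency."""
    year = date.today().year
    return [
        f"{subject} {spec}",                  # base query: name + specification
        f"{subject} {spec} {year}",           # anchored to the current year
        f"{subject} official documentation",  # fall back to the primary source
    ]
```

For example, `build_queries("Claude Opus 4.5", "context window")` yields the specific base query, a year-anchored variant, and an official-documentation fallback.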

Step 3: Compare claims against sources

Create a comparison table:

| Claim in Document | Source Information | Status | Authoritative Source |
|---|---|---|---|
| Claude 3.5 Sonnet: 200K tokens | Claude Sonnet 4.5: 200K tokens | ❌ Outdated model name | platform.claude.com/docs |
| GPT-4o: 128K tokens | GPT-5.2: 400K tokens | ❌ Incorrect version & spec | openai.com/index/gpt-5-2 |

Status codes:

  • ✅ Accurate - claim matches sources
  • ❌ Incorrect - claim contradicts sources
  • ⚠️ Outdated - claim was true but superseded
  • ❓ Unverifiable - no authoritative source found
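The four status codes map cleanly onto a small decision function. This is a sketch under the assumption that the claim and source values have already been normalized to comparable strings:

```python
def classify(claim, source, superseded=False):
    """Assign a fact-check status by comparing a claim to its source value."""
    if source is None:
        return "❓ Unverifiable"  # no authoritative source found
    if claim == source:
        return "✅ Accurate"      # claim matches the source
    if superseded:
        return "⚠️ Outdated"      # was true, but newer information supersedes it
    return "❌ Incorrect"         # claim contradicts the source
```

Note that distinguishing Outdated from Incorrect requires a judgment call (was the claim ever true?), which is why the flag is an explicit input rather than something inferred.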

Step 4: Generate correction report

Present findings in structured format:

```markdown
## Fact-Check Report

### Summary
- Total claims checked: X
- Accurate: Y
- Issues found: Z

### Issues Requiring Correction

#### Issue 1: Outdated AI Model Reference
**Location:** Line 77-80 in docs/file.md
**Current claim:** "Claude 3.5 Sonnet: 200K tokens"
**Correction:** "Claude Sonnet 4.5: 200K tokens"
**Source:** https://platform.claude.com/docs/en/build-with-claude/context-windows
**Rationale:** Claude 3.5 Sonnet has been superseded by Claude Sonnet 4.5 (released Sept 2025)

#### Issue 2: Incorrect Context Window
**Location:** Line 79 in docs/file.md
**Current claim:** "GPT-4o: 128K tokens"
**Correction:** "GPT-5.2: 400K tokens"
**Source:** https://openai.com/index/introducing-gpt-5-2/
**Rationale:** 128K was output limit; context window is 400K. Model also updated to GPT-5.2
```
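The report structure is regular enough to render programmatically. A minimal sketch, assuming a hypothetical issue dict with `title`, `claim`, `fix`, and `source` keys chosen for illustration:

```python
def render_report(checked: int, issues: list[dict]) -> str:
    """Render a fact-check report in the structured format shown above."""
    lines = [
        "## Fact-Check Report", "",
        "### Summary",
        f"- Total claims checked: {checked}",
        f"- Accurate: {checked - len(issues)}",
        f"- Issues found: {len(issues)}",
    ]
    for i, issue in enumerate(issues, 1):
        lines += [
            "",
            f"#### Issue {i}: {issue['title']}",
            f"**Current claim:** \"{issue['claim']}\"",
            f"**Correction:** \"{issue['fix']}\"",
            f"**Source:** {issue['source']}",
        ]
    return "\n".join(lines)
```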

Step 5: Apply corrections with user approval

Before making changes:

  1. Show the correction report to the user
  2. Wait for explicit approval: "Should I apply these corrections?"
  3. Only proceed after confirmation

When applying corrections:

```python
# Use Edit tool to update document
# Example:
Edit(
    file_path="docs/03-写作规范/AI辅助写书方法论.md",
    old_string="- Claude 3.5 Sonnet: 200K tokens(约 15 万汉字)",
    new_string="- Claude Sonnet 4.5: 200K tokens(约 15 万汉字)"
)
```

After corrections:

  1. Verify all edits were applied successfully
  2. Note the correction summary (e.g., "Updated 4 claims in section 2.1")
  3. Remind user to commit changes

Search best practices

Query construction

Good queries (specific, current):

  • "Claude Opus 4.5 context window 2026"
  • "GPT-5.2 official release announcement"
  • "Gemini 3 Pro token limit specifications"

Poor queries (vague, generic):

  • "Claude context"
  • "AI models"
  • "Latest version"

Source evaluation

Prefer official sources:

  1. Product official pages (highest authority)
  2. API documentation
  3. Official blog announcements
  4. GitHub releases (for open source)

Use with caution:

  • Third-party aggregators (llm-stats.com, etc.) - verify against official sources
  • Blog posts and articles - cross-reference claims
  • Social media - only for announcements, verify elsewhere

Avoid:

  • Outdated documentation
  • Unofficial wikis without citations
  • Speculation and rumors

Handling ambiguity

When sources conflict:

  1. Prioritize most recent official documentation
  2. Note the discrepancy in the report
  3. Present both sources to the user
  4. Recommend contacting vendor if critical

When no source found:

  1. Mark as ❓ Unverifiable
  2. Suggest alternative phrasing: "According to [Source] as of [Date]..."
  3. Recommend adding qualification: "approximately", "reported as"

Special considerations

Time-sensitive information

Always include temporal context:

Good corrections:

  • "截至 2026 年 1 月" (As of January 2026)
  • "Claude Sonnet 4.5 (released September 2025)"

Poor corrections:

  • "Latest version" (becomes outdated)
  • "Current model" (ambiguous timeframe)

Numerical precision

Match precision to source:

Source says: "approximately 1 million tokens"
Write: "1M tokens (approximately)"

Source says: "200,000 token context window"
Write: "200K tokens" (exact)
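This precision-matching rule can be sketched as a small formatter, with the caller stating whether the source hedged the figure. The function name is an assumption for illustration:

```python
def format_tokens(count: int, approximate: bool = False) -> str:
    """Abbreviate a token count, keeping the source's approximate qualifier."""
    if count >= 1_000_000:
        figure = f"{count // 1_000_000}M tokens"
    else:
        figure = f"{count // 1_000}K tokens"
    # Only add a qualifier when the source itself hedged the figure.
    return f"{figure} (approximately)" if approximate else figure
```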

Citation format

Include citations in corrections:

```markdown
> **Note**: Exact context windows are subject to each model's official documentation; Claude Sonnet 4.5 was the primary tool used while writing this book.
```

Link to sources when possible.

Examples

Example 1: Technical specification update

User request: "Fact-check the AI model context windows in section 2.1"

Process:

  1. Identify claims: Claude 3.5 Sonnet (200K), GPT-4o (128K), Gemini 1.5 Pro (2M)
  2. Search official docs for current models
  3. Find: Claude Sonnet 4.5, GPT-5.2, Gemini 3 Pro
  4. Generate report showing discrepancies
  5. Apply corrections after approval

Example 2: Statistical data verification

User request: "Verify the benchmark scores in chapter 5"

Process:

  1. Extract numerical claims
  2. Search for official benchmark publications
  3. Compare reported vs. source values
  4. Flag any discrepancies with source links
  5. Update with verified figures

Example 3: Version number validation

User request: "Check if these library versions are still current"

Process:

  1. List all version numbers mentioned
  2. Check package registries (npm, PyPI, etc.)
  3. Identify outdated versions
  4. Suggest updates with changelog references
  5. Update after user confirms

Quality checklist

Before completing fact-check:

  • All factual claims identified and categorized
  • Each claim verified against official sources
  • Sources are authoritative and current
  • Correction report is clear and actionable
  • Temporal context included where relevant
  • User approval obtained before changes
  • All edits verified successful
  • Summary provided to user

Limitations

This skill cannot:

  • Verify subjective opinions or judgments
  • Access paywalled or restricted sources
  • Determine "truth" in disputed claims
  • Predict future specifications or features

For such cases:

  • Note the limitation in the report
  • Suggest qualification language
  • Recommend user research or expert consultation
