io.github.Alberto-Codes/docvet
Coding & Debugging · by alberto-codes
A docstring quality review tool for Python that checks enrichment, freshness, coverage, and missing docstrings.
README
docvet
Better docstrings, better AI.
Why docvet?
ruff checks how your docstrings look. interrogate checks if they exist (but is unmaintained). docvet checks if they're right — and now covers presence too. Existing tools cover style; docvet delivers the layers they miss:
| Layer | Check | ruff | interrogate | pydoclint | docvet |
|---|---|---|---|---|---|
| 1. Presence | "Does a docstring exist?" | -- | Yes (unmaintained) | -- | Yes |
| 2. Style | "Is it formatted correctly?" | Yes | -- | -- | -- |
| 3. Completeness | "Does it have all required sections?" | -- | -- | Partial | Yes |
| 4. Accuracy | "Does it match the current code?" | -- | -- | -- | Yes |
| 5. Rendering | "Will mkdocs render it correctly?" | -- | -- | -- | Yes |
| 6. Visibility | "Will mkdocs even see the file?" | -- | -- | -- | Yes |
pydoclint covers 3 structural categories (Args, Returns, Raises). docvet's enrichment alone has 20 rules, including Raises, Yields, Receives, Warns, Attributes, Examples, cross-references, parameter agreement, and more. Add presence (coverage metrics + threshold enforcement), freshness (git diff/blame staleness detection), griffe rendering compatibility, and mkdocs coverage: 31 rules across 5 checks, in territory no other tool touches.
Quickstart | GitHub Action | Pre-commit | Configuration | AI Agent Integration | Docs
What It Checks
Presence (existence) -- 2 rules:
`missing-docstring`, `overload-has-docstring`
Enrichment (completeness) -- 20 rules:
`missing-raises`, `missing-returns`, `missing-yields`, `missing-receives`, `missing-warns`, `missing-deprecation`, `missing-param-in-docstring`, `extra-param-in-docstring`, `missing-other-parameters`, `missing-attributes`, `undocumented-init-params`, `missing-typed-attributes`, `missing-examples`, `missing-cross-references`, `extra-raises-in-docstring`, `extra-yields-in-docstring`, `extra-returns-in-docstring`, `missing-return-type`, `trivial-docstring`, `prefer-fenced-code-blocks`
Freshness (accuracy) -- 5 rules:
`stale-signature`, `stale-body`, `stale-import`, `stale-drift`, `stale-age`
Griffe (rendering) -- 3 rules:
`griffe-unknown-param`, `griffe-missing-type`, `griffe-format-warning`
Coverage (visibility) -- 1 rule:
`missing-init`
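As an illustration of what the enrichment layer targets, here is a hypothetical function (not taken from docvet's codebase) of the kind the `missing-raises` rule would flag:

```python
def parse_config(path: str) -> dict:
    """Parse a configuration file path.

    Args:
        path: Path to the config file.

    Returns:
        A mapping describing the parsed configuration.
    """
    # The function raises ValueError, but the docstring has no Raises
    # section -- the mismatch that `missing-raises` reports. Adding a
    # "Raises: ValueError: ..." section resolves the finding.
    if not path.endswith(".toml"):
        raise ValueError(f"unsupported config format: {path}")
    return {"path": path}
```

Style linters pass this function; only a check that compares the docstring against the function body catches the undocumented exception.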
Quickstart
```shell
pip install docvet && docvet check --all
```
For optional griffe rendering checks:
```shell
pip install docvet[griffe]
```
Example output:
```text
src/mypackage/helpers.py:1: missing-docstring Module has no docstring [required]
src/mypackage/utils.py:42: missing-raises Function 'parse_config' raises ValueError but has no Raises section [required]
src/mypackage/models.py:15: stale-signature Function 'process' signature changed but docstring not updated [required]
src/mypackage/api.py:1: missing-init Package directory missing __init__.py (invisible to mkdocs) [required]
```
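The example output above follows a `file:line: rule message [severity]` pattern, which makes it easy to post-process in CI. As a sketch (assuming that exact line format, which is not a documented API), findings can be tallied by rule:

```python
import re
from collections import Counter

# Matches lines like:
#   src/mypackage/utils.py:42: missing-raises Function ... [required]
FINDING_RE = re.compile(
    r"^(?P<file>[^:]+):(?P<line>\d+): (?P<rule>[\w-]+) (?P<msg>.*?) \[(?P<severity>\w+)\]$"
)

def count_rules(output: str) -> Counter:
    """Count how often each rule fires in captured docvet CLI output."""
    counts: Counter = Counter()
    for line in output.splitlines():
        match = FINDING_RE.match(line)
        if match:
            counts[match.group("rule")] += 1
    return counts
```

A summary like this is handy for tracking which rules dominate before deciding what to list in `fail-on`.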
Configuration
Configure via [tool.docvet] in your pyproject.toml. All checks run and print findings. Checks listed in fail-on cause a non-zero exit code; unlisted checks are treated as warnings.
```toml
[tool.docvet]
exclude = ["tests", "scripts"]
fail-on = ["griffe", "coverage"]

[tool.docvet.freshness]
drift-threshold = 30
age-threshold = 90
```
Pre-commit
Add to your .pre-commit-config.yaml:
```yaml
repos:
  - repo: https://github.com/Alberto-Codes/docvet
    rev: v1.2.0
    hooks:
      - id: docvet
```
For griffe rendering checks, add the optional dependency:
```yaml
repos:
  - repo: https://github.com/Alberto-Codes/docvet
    rev: v1.2.0
    hooks:
      - id: docvet
        additional_dependencies: [griffe]
```
GitHub Action
Add docvet to your GitHub Actions workflow — findings appear as inline annotations on your PR:
```yaml
- uses: Alberto-Codes/docvet@v1
```
Select specific checks or pin a version:
```yaml
- uses: Alberto-Codes/docvet@v1
  with:
    checks: 'enrichment,freshness'
    docvet-version: '1.9.0'
    python-version: '3.13'
```
For griffe rendering checks, install griffe before running docvet:
```yaml
- uses: actions/setup-python@v6
  with:
    python-version: '3.12'
- run: pip install griffe
- uses: Alberto-Codes/docvet@v1
```
AI Agent Integration
For tool-specific integration snippets, see the full AI Agent Integration guide.
Add docvet to your AI coding workflow. Drop this into your CLAUDE.md, .cursorrules, or agent configuration:
```markdown
## Docstring Quality

After modifying Python functions, classes, or modules, run `docvet check` and fix all findings before committing.
```
Recommended pyproject.toml configuration:
```toml
[tool.docvet]
fail-on = ["enrichment", "freshness", "coverage", "griffe"]
```
Subcommand Quick Reference
| Command | Description |
|---|---|
| `docvet check` | Run all enabled checks (default: git diff files) |
| `docvet check --all` | Run all checks on the entire codebase |
| `docvet check --staged` | Run all checks on staged files only |
| `docvet presence` | Check for missing docstrings, with coverage metrics |
| `docvet enrichment` | Check for missing docstring sections |
| `docvet freshness` | Detect stale docstrings via git |
| `docvet freshness --mode drift` | Sweep for long-stale docstrings via git blame |
| `docvet coverage` | Find files invisible to mkdocs |
| `docvet griffe` | Check mkdocs rendering compatibility |
| `docvet fix` | Scaffold missing docstring sections |
| `docvet fix --dry-run` | Preview scaffolding changes without writing files |
| `docvet config` | Show the effective configuration with source annotations |
| `docvet lsp` | Start an LSP server for real-time editor diagnostics |
| `docvet mcp` | Start an MCP server for AI agent integration |
Better Docstrings, Better AI
AI coding agents rely on docstrings as context when generating and modifying code. Yet agents often modify code while leaving docstrings stale, and research shows that stale or incorrect documentation is actively harmful, worse than no docs at all:
- Incorrect docs degrade LLM task success by 22.6 percentage points
- Comment density improves code generation by 40-54%
- Misleading comments reduce LLM fault localization accuracy to 24.55%
- Performance drops substantially without docstrings
As the 2025 DORA report puts it: "AI doesn't fix a team; it amplifies what's already there." The only signal correlating with AI productivity is code quality.
docvet's freshness checking catches the accuracy gap that stale docs create, and its enrichment rules ensure the docstring sections that agents use as context are complete. Run docvet check in your CI, pre-commit hooks, or agent toolchain.
Badge
Add a badge to your project to show your docs are vetted:
[](https://github.com/Alberto-Codes/docvet)
Used By
Are you using docvet? Open a pull request to add your project here.
License
MIT -- see LICENSE for details.
mcp-name: io.github.Alberto-Codes/docvet