review-verification-protocol
by anderskev
Mandatory verification steps for all code reviews to reduce false positives. Load this skill before reporting ANY code review findings.
Installation
claude skill add --url https://github.com/openclaw/skills
Review Verification Protocol
This protocol MUST be followed before reporting any code review finding. Skipping these steps leads to false positives that waste developer time and erode trust in reviews.
Pre-Report Verification Checklist
Before flagging ANY issue, verify:
- I read the actual code - Not just the diff context, but the full function/class
- I searched for usages - Before claiming "unused", searched all references
- I checked surrounding code - The issue may be handled elsewhere (guards, earlier checks)
- I verified syntax against current docs - Framework syntax evolves (Tailwind v4, TS 5.x, React 19)
- I distinguished "wrong" from "different style" - Both approaches may be valid
- I considered intentional design - Checked comments, CLAUDE.md, architectural context
Verification by Issue Type
"Unused Variable/Function"
Before flagging, you MUST:
- Search for ALL references in the codebase (grep/find)
- Check if it's exported and used by external consumers
- Check if it's used via reflection, decorators, or dynamic dispatch
- Verify it's not a callback passed to a framework
Common false positives:
- State setters in React (may trigger re-renders even if value appears unused)
- Variables used in templates/JSX
- Exports used by consuming packages
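The reference search above can be sketched as a small script. The `count_references` helper and the mini-repo below are hypothetical, and a real review should also account for dynamic access (reflection, `getattr`, string-based dispatch) that plain text search misses:

```python
import re
import tempfile
from pathlib import Path

def count_references(root: Path, symbol: str) -> int:
    """Count whole-word occurrences of `symbol` across all Python files."""
    pattern = re.compile(rf"\b{re.escape(symbol)}\b")
    return sum(
        len(pattern.findall(p.read_text(encoding="utf-8", errors="ignore")))
        for p in root.rglob("*.py")
    )

# Hypothetical mini-repo: a definition plus an import and a call site.
with tempfile.TemporaryDirectory() as tmp:
    root = Path(tmp)
    (root / "a.py").write_text("def load_user():\n    pass\n")
    (root / "b.py").write_text("from a import load_user\nload_user()\n")
    print(count_references(root, "load_user"))  # 3 occurrences -- not unused
```

If the count only covers the definition site itself, the "unused" claim is at least plausible; any higher count demands reading each reference before flagging.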
"Missing Validation/Error Handling"
Before flagging, you MUST:
- Check if validation exists at a higher level (caller, middleware, route handler)
- Check if the framework provides validation (Pydantic, Zod, TypeScript)
- Verify the "missing" check isn't present in a different form
Common false positives:
- Framework already validates (FastAPI + Pydantic, React Hook Form)
- Parent component validates before passing props
- Error boundary catches at higher level
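As a sketch of "validation exists at a higher level" (all names hypothetical): frameworks like Pydantic validate at the model boundary, so a handler body with no explicit checks can be entirely correct:

```python
from dataclasses import dataclass

@dataclass
class CreateUser:
    email: str

    def __post_init__(self):
        # Validation at the model boundary -- the role Pydantic/Zod play.
        if "@" not in self.email:
            raise ValueError(f"invalid email: {self.email!r}")

def create_user(payload: CreateUser) -> str:
    # No validation here: the model already guaranteed a well-formed email.
    return payload.email.lower()
```

Flagging `create_user` for "missing validation" would be a false positive; the check simply lives one layer up.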
"Type Assertion/Unsafe Cast"
Before flagging, you MUST:
- Confirm it's actually an assertion, not an annotation
- Check if the type is narrowed by runtime checks before the point
- Verify if framework guarantees the type (loader data, form data)
Valid patterns often flagged incorrectly:
```python
# Type annotation, NOT a cast
data: UserData = await load_user()

# Type narrowing with isinstance
if isinstance(data, User):
    data.name  # mypy knows this is User
```
"Potential Memory Leak/Race Condition"
Before flagging, you MUST:
- Verify cleanup function is actually missing (not just in a different location)
- Check if AbortController signal is checked after awaits
- Confirm the component can actually unmount during the async operation
Common false positives:
- Cleanup exists in useEffect return
- Signal is checked (code reviewer missed it)
- Operation completes before unmount is possible
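A Python analogue of the "signal is checked" case above (names hypothetical): the cancellation check can sit after the await, where a quick read of the diff easily misses it:

```python
import asyncio

async def fetch_and_store(stop: asyncio.Event) -> str:
    await asyncio.sleep(0)   # stands in for the long-running fetch
    if stop.is_set():        # checked AFTER the await -- easy to overlook
        return "aborted"
    return "stored"

async def main() -> None:
    stop = asyncio.Event()
    print(await fetch_and_store(stop))  # "stored"
    stop.set()
    print(await fetch_and_store(stop))  # "aborted"

asyncio.run(main())
```

Before flagging a "missing" abort check, search the whole function body, not just the lines around the await.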
"Performance Issue"
Before flagging, you MUST:
- Confirm the code runs frequently enough to matter (render vs click handler)
- Verify the optimization would have measurable impact
- Check if the framework already optimizes this (React compiler, memoization)
Do NOT flag:
- Functions created in click handlers (runs once per click)
- Array methods on small arrays (< 100 items)
- Object creation in event handlers
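Before flagging a small-array operation, a quick measurement settles the question; the sizes below are illustrative:

```python
import timeit

small = list(range(100))

# Per-call cost of an array method on a 100-item list.
per_call = timeit.timeit(lambda: [x * 2 for x in small], number=10_000) / 10_000
print(f"{per_call * 1e6:.2f} microseconds per call")

# Typically a handful of microseconds -- far below anything user-visible,
# so it is not worth flagging unless it sits in a genuinely hot loop.
```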
Severity Calibration
Critical (Block Merge)
ONLY use for:
- Security vulnerabilities (injection, auth bypass, data exposure)
- Data corruption bugs
- Crash-causing bugs in happy path
- Breaking changes to public APIs
Major (Should Fix)
Use for:
- Logic bugs that affect functionality
- Missing error handling that causes poor UX
- Performance issues with measurable impact
- Accessibility violations
Minor (Consider Fixing)
Use for:
- Code clarity improvements
- Documentation gaps
- Inconsistent style (within reason)
- Non-critical test coverage gaps
Informational (No Action Required)
Use for:
- Improvements that require adding new dependencies or modules
- Suggestions for net-new code that didn't exist in the codebase before (new modules, test suites, abstractions)
- Architectural ideas for future consideration
- Test infrastructure suggestions (new mock libraries, behaviour extraction)
- Optimizations without measurable impact in the current context
These are NOT review blockers. They should be noted for the author's awareness but must not appear in the actionable issue count. The Verdict should ignore informational items entirely.
Do NOT Flag At All
- Style preferences where both approaches are valid
- Optimizations with no measurable benefit
- Test code not meeting production standards (intentionally simpler)
- Library/framework internal code (shadcn components, generated code)
- Hypothetical issues that require unlikely conditions
Valid Patterns (Do NOT Flag)
Python
| Pattern | Why It's Valid |
|---|---|
| `dict.get(key, [])` | Returns a default for missing keys, not error suppression |
| `Optional[T]` return type | Standard way to express nullable in Python typing |
| `assert` in test code | pytest uses assertions, not try/except |
| Type annotation on a variable | Not a cast, just a hint for type checkers |
| `typing.cast()` with prior validation | Valid after a runtime check confirms the type |
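The `typing.cast()` row above can be illustrated with a hypothetical sketch: the cast is justified precisely because the runtime checks precede it:

```python
from typing import cast

def first_name(value: object) -> str:
    # Runtime checks narrow what `value` can be...
    if not isinstance(value, dict) or not isinstance(value.get("name"), str):
        raise TypeError("expected a mapping with a string 'name' key")
    # ...so the cast only tells the type checker what was already proved.
    return cast(str, value["name"])
```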
FastAPI
| Pattern | Why It's Valid |
|---|---|
| `Depends()` without explicit type | FastAPI infers the dependency type from the function signature |
| `async def` endpoint without `await` | May use sync DB calls or simple returns |
| Response model different from DB model | Separation of concerns between API and persistence |
| `BackgroundTasks` parameter | Valid for fire-and-forget operations |
| Direct `request.state` access | Standard pattern for middleware-injected data |
Testing
| Pattern | Why It's Valid |
|---|---|
| `assert` without message | pytest rewrites assertions to show detailed diffs |
| `@pytest.fixture` without explicit scope | Default function scope is correct for most fixtures |
| `monkeypatch` over `unittest.mock` | Simpler API, pytest-native |
| Fixture returning mutable state | Each test gets a fresh fixture invocation by default |
General
| Pattern | Why It's Valid |
|---|---|
| `+?` lazy quantifier in regex | Prevents over-matching, correct for many patterns |
| Direct string concatenation | Simpler than template literals for simple cases |
| Multiple returns in a function | Can improve readability |
| Comments explaining "why" | Better than no comments |
Context-Sensitive Rules
Type Annotations
Flag missing type annotation ONLY IF ALL of these are true:
- Function is public API (not prefixed with
_) - Types are not obvious from context (e.g.,
x = 5is clearlyint) - Not a test function or fixture
- Codebase has existing typing conventions
Exception Handling
Flag bare except ONLY IF:
- Not in a top-level error boundary / middleware
- The caught exception is actually swallowed (not logged/re-raised)
- Specific exception types are known and available
- Not in cleanup/teardown code where any error should be caught
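Per the conditions above, a broad catch at a top-level boundary that logs and re-raises is not "swallowing" the error and should not be flagged. A hedged sketch with hypothetical names:

```python
import logging

log = logging.getLogger(__name__)

def run_task(task):
    """Top-level boundary: the broad catch is fine because nothing is lost."""
    try:
        return task()
    except Exception:
        log.exception("task failed")  # logged...
        raise                         # ...and re-raised, not suppressed
```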
Error Handling
Flag missing try/except ONLY IF:
- No middleware or error handler catches this at a higher level
- The framework doesn't already handle the error at its own layer (e.g., FastAPI exception handlers)
- The error would cause a crash, not just a failed operation
- User needs specific feedback for this error type
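The first condition above is easiest to check when error handling is centralized. A hypothetical decorator sketch of that "middleware" pattern:

```python
def with_error_handling(handler):
    """Centralized 'middleware': converts errors into responses, so the
    wrapped handlers legitimately omit their own try/except."""
    def wrapped(request: dict) -> dict:
        try:
            return handler(request)
        except KeyError as exc:
            return {"status": 400, "error": f"missing field: {exc}"}
    return wrapped

@with_error_handling
def get_user(request: dict) -> dict:
    # No try/except here -- the wrapper above owns this failure mode.
    return {"status": 200, "user_id": request["user_id"]}
```

Flagging `get_user` for a missing try/except would be a false positive; the handler relies on the wrapper by design.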
Before Submitting Review
Final verification:
- Re-read each finding and ask: "Did I verify this is actually an issue?"
- For each finding, can you point to the specific line that proves the issue exists?
- Would a domain expert agree this is a problem, or is it a style preference?
- Does fixing this provide real value, or is it busywork?
- Format every finding as: `[FILE:LINE] ISSUE_TITLE`
- For each finding, ask: "Does this fix existing code, or does it request entirely new code that didn't exist before?" If the latter, downgrade to Informational.
- If this is a re-review: ONLY verify previous fixes. Do not introduce new findings.
If uncertain about any finding, either:
- Remove it from the review
- Mark it as a question rather than an issue
- Verify by reading more code context