Pattern Analysis

Universal

ln-641-pattern-analyzer

by levnikolaevich

A focused implementation review of a single architectural pattern: it scans the codebase to locate implementations, scores them on compliance, completeness, quality, and implementation, and reports gaps, issue severities, and fix-effort estimates. Suitable for the ln-640 coordinator or on-demand targeted analysis.


Installation

claude skill add --url github.com/levnikolaevich/claude-code-skills/tree/master/ln-641-pattern-analyzer

Documentation

Paths: File paths (shared/, references/, ../ln-*) are relative to skills repo root. If not found at CWD, locate this SKILL.md directory and go up one level for repo root.

Pattern Analyzer

L3 Worker that analyzes a single architectural pattern against best practices and calculates 4 scores.

Purpose & Scope

  • Analyze ONE pattern per invocation (receives pattern name, locations, best practices from coordinator)
  • Find all implementations in codebase (Glob/Grep)
  • Validate implementation exists and works
  • Calculate 4 scores: compliance, completeness, quality, implementation
  • Identify gaps and issues with severity and effort estimates
  • Return structured analysis result to coordinator

Out of Scope (owned by ln-624-code-quality-auditor):

  • Cyclomatic complexity thresholds (>10, >20)
  • Method/class length thresholds (>50, >100, >500 lines)
  • Quality Score focuses on pattern-specific quality (SOLID within pattern, pattern-level smells), not generic code metrics

Input (from ln-640 coordinator)

```
- pattern: string          # Pattern name (e.g., "Job Processing")
- locations: string[]      # Known file paths/directories
- bestPractices: object    # Best practices from MCP Ref/Context7/WebSearch
- output_dir: string       # e.g., "docs/project/.audit/ln-640/{YYYY-MM-DD}"
```

Note: All patterns arrive pre-verified (passed ln-640 Phase 1d applicability gate with >= 2 structural components confirmed).

Workflow

Phase 1: Find Implementations

MANDATORY READ: Load ../ln-640-pattern-evolution-auditor/references/pattern_library.md — use "Pattern Detection (Grep)" table for detection keywords per pattern.

```
IF pattern.source == "adaptive":
  # Pattern discovered by coordinator Phase 1b — evidence already provided
  files = pattern.evidence.files
  SKIP detection keyword search (already done in Phase 1b)
ELSE:
  # Baseline pattern — use library detection keywords
  files = Glob(locations)
  additional = Grep("{pattern_keywords}", "**/*.{ts,js,py,rb,cs,java}")
  files = deduplicate(files + additional)
```
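The merge-and-deduplicate step in the ELSE branch can be sketched as follows (a minimal illustration; `glob_files` and `grep_files` stand in for the actual Glob/Grep tool results):

```python
def collect_pattern_files(glob_files: list[str], grep_files: list[str]) -> list[str]:
    """Merge location-based and keyword-based hits, dropping duplicates
    while preserving first-seen order (dict keys keep insertion order)."""
    return list(dict.fromkeys(glob_files + grep_files))

print(collect_pattern_files(
    ["src/jobs/worker.ts", "src/jobs/queue.ts"],
    ["src/jobs/queue.ts", "src/jobs/retry.ts"],
))  # → ['src/jobs/worker.ts', 'src/jobs/queue.ts', 'src/jobs/retry.ts']
```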

Phase 2: Read and Analyze Code

```
FOR EACH file IN files (limit: 10 key files):
  Read(file)
  Extract: components, patterns, error handling, logging, tests
```

Phase 3: Calculate 4 Scores

MANDATORY READ: Load ../ln-640-pattern-evolution-auditor/references/scoring_rules.md — follow Detection column for each criterion.

| Score | Source in scoring_rules.md | Max |
|-------|----------------------------|-----|
| Compliance | "Compliance Score" section — industry standard, naming, conventions, anti-patterns | 100 |
| Completeness | "Completeness Score" section — required components table (per pattern), error handling, tests | 100 |
| Quality | "Quality Score" section — method length, complexity, code smells, SOLID | 100 |
| Implementation | "Implementation Score" section — compiles, production usage, integration, monitoring | 100 |

Scoring process for each criterion:

  1. Run the Detection Grep/Glob from scoring_rules.md
  2. If matches found → add points per criterion
  3. If anti-pattern/smell detected → subtract per deduction table
  4. Document evidence: file path + line for each score justification
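The four-step loop above can be sketched as follows. This is a hypothetical shape only: the criterion names, point values, and deduction amounts here are invented for illustration; the real criteria and deduction tables live in scoring_rules.md.

```python
def score_criteria(criteria, deductions, matches) -> int:
    """Add points for each criterion whose Detection pattern matched,
    subtract per the deduction table, clamp the result to 0..100."""
    total = sum(pts for name, pts in criteria if matches.get(name))
    total -= sum(pts for name, pts in deductions if matches.get(name))
    return max(0, min(100, total))

compliance = score_criteria(
    criteria=[("naming_convention", 40), ("industry_standard", 60)],  # hypothetical
    deductions=[("anti_pattern_detected", 20)],                       # hypothetical
    matches={"naming_convention": True, "industry_standard": True,
             "anti_pattern_detected": True},
)
print(compliance)  # → 80
```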

Phase 4: Identify Issues and Gaps

```
FOR EACH bestPractice NOT implemented:
  issues.append({
    severity: "HIGH" | "MEDIUM" | "LOW",
    category: "compliance" | "completeness" | "quality" | "implementation",
    issue: description,
    suggestion: how to fix,
    effort: "S" | "M" | "L"
  })

gaps = {
  missingComponents: required components not found in code,
  inconsistencies: conflicting or incomplete implementations
}
```
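One way to make the issue record shape concrete is a typed dataclass (a sketch only; field names mirror the pseudocode, and the sample issue text is invented for illustration):

```python
from dataclasses import dataclass, asdict
from typing import Literal

@dataclass
class Issue:
    severity: Literal["HIGH", "MEDIUM", "LOW"]
    category: Literal["compliance", "completeness", "quality", "implementation"]
    issue: str
    suggestion: str
    effort: Literal["S", "M", "L"]

# Hypothetical finding for a Job Processing pattern
issues = [Issue("HIGH", "completeness",
                "No dead-letter handling for failed jobs",
                "Add a dead-letter queue and retry policy", "M")]
print(asdict(issues[0])["severity"])  # → HIGH
```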

Phase 5: Calculate Score

MANDATORY READ: Load shared/references/audit_scoring.md for unified scoring formula.

Primary score uses penalty formula (same as all workers):

```
penalty = (critical × 2.0) + (high × 1.0) + (medium × 0.5) + (low × 0.2)
score = max(0, 10 - penalty)
```
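As a worked example, the penalty formula in executable form (a minimal sketch; the weights match the formula above):

```python
def penalty_score(critical: int, high: int, medium: int, low: int) -> float:
    """Unified worker scoring: weighted penalty subtracted from 10, floored at 0."""
    penalty = critical * 2.0 + high * 1.0 + medium * 0.5 + low * 0.2
    return max(0.0, 10.0 - penalty)

print(penalty_score(critical=0, high=1, medium=2, low=0))  # → 8.0
print(penalty_score(critical=5, high=5, low=0, medium=0))  # → 0.0 (floored)
```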

Diagnostic sub-scores (0-100 each) are calculated separately and reported in AUDIT-META for diagnostic purposes only:

  • compliance, completeness, quality, implementation

Phase 6: Write Report

MANDATORY READ: Load shared/templates/audit_worker_report_template.md for file format (ln-640 section: extended AUDIT-META + DATA-EXTENDED).

```
# Build pattern name slug: "Job Processing" → "job-processing"
slug = pattern.name.lower().replace(" ", "-")

# Build markdown report in memory with:
# - AUDIT-META (extended: score [penalty-based] + diagnostic score_compliance/completeness/quality/implementation)
# - Checks table (compliance_check, completeness_check, quality_check, implementation_check)
# - Findings table (issues sorted by severity)
# - DATA-EXTENDED: {pattern, codeReferences, gaps, recommendations}

Write to {output_dir}/641-pattern-{slug}.md (atomic single Write call)
```
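The slug and report-path construction can be sketched as:

```python
def report_path(pattern_name: str, output_dir: str) -> str:
    """Lowercase the pattern name, replace spaces with hyphens,
    and build the 641-pattern-{slug}.md filename under output_dir."""
    slug = pattern_name.lower().replace(" ", "-")
    return f"{output_dir}/641-pattern-{slug}.md"

print(report_path("Job Processing", "docs/project/.audit/ln-640/2026-02-08"))
# → docs/project/.audit/ln-640/2026-02-08/641-pattern-job-processing.md
```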

Phase 7: Return Summary

```
Report written: docs/project/.audit/ln-640/{YYYY-MM-DD}/641-pattern-job-processing.md
Score: 7.9/10 (C:72 K:85 Q:68 I:90) | Issues: 3 (H:1 M:2 L:0)
```

Critical Rules

  • One pattern only: Analyze only the pattern passed by coordinator
  • Read before score: Never score without reading actual code
  • Detection-based scoring: Use Grep/Glob patterns from scoring_rules.md, not assumptions
  • Effort estimates: Always provide S/M/L for each issue
  • Code references: Always include file paths for findings

Definition of Done

  • All implementations found via Glob/Grep (using pattern_library.md keywords or adaptive evidence)
  • Key files read and analyzed
  • 4 scores calculated using scoring_rules.md Detection patterns
  • Issues identified with severity, category, suggestion, effort
  • Gaps documented (missing components, inconsistencies)
  • Recommendations provided
  • Report written to {output_dir}/641-pattern-{slug}.md (atomic single Write call)
  • Summary returned to coordinator

Reference Files

  • Worker report template: shared/templates/audit_worker_report_template.md
  • Scoring rules: ../ln-640-pattern-evolution-auditor/references/scoring_rules.md
  • Pattern library: ../ln-640-pattern-evolution-auditor/references/pattern_library.md
  • MANDATORY READ: shared/references/research_tool_fallback.md

Version: 2.0.0 Last Updated: 2026-02-08
