Product Analysis

Universal

product-analysis

by daymade

Schedules Claude Code agent teams and Codex CLI in parallel to review a product from UX, API, architecture, and documentation perspectives, then consolidates the findings into an actionable optimization plan. Well suited to product self-audits, pre-launch reviews, and information-architecture audits.

A time-saver for product reviews and pre-launch sign-off: multiple agents analyze in parallel, from user experience and information architecture to competitors, and the results are distilled into an actionable optimization plan.

855 · Data & Storage · Not scanned · March 5, 2026

Installation

claude skill add --url github.com/daymade/claude-code-skills/tree/main/product-analysis

Documentation

Product Analysis

Multi-path parallel product analysis that combines Claude Code agent teams and Codex CLI for cross-model test-time compute scaling.

Core principle: Same analysis task, multiple AI perspectives, deep synthesis.

How It Works

code
/product-analysis full
         │
         ├─ Step 0: Auto-detect available tools (codex? competitors?)
         │
    ┌────┼──────────────┐
    │    │              │
 Claude Code         Codex CLI (auto-detected)
 Task Agents         (background Bash)
 (Explore ×3-5)      (×2-3 parallel)
    │                   │
    └────────┬──────────┘
             │
      Synthesis (main context)
             │
      Structured Report

Step 0: Auto-Detect Available Tools

Before launching any agents, detect what tools are available:

bash
# Check if Codex CLI is installed
which codex 2>/dev/null && codex --version

Decision logic:

  • If codex is found: Inform the user — "Codex CLI detected (version X). Will run cross-model analysis for richer perspectives."
  • If codex is not found: Silently proceed with Claude Code agents only. Do NOT ask the user to install anything.
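The detect-and-branch logic above can be sketched in plain shell; `detect_tool` is a hypothetical helper, not part of the skill.

```shell
# Succeeds (exit 0) iff the named command exists on PATH.
detect_tool() {
  command -v "$1" >/dev/null 2>&1
}

if detect_tool codex; then
  # Surface the detection to the user, including the version string.
  echo "Codex CLI detected ($(codex --version 2>/dev/null)). Will run cross-model analysis for richer perspectives."
else
  # Fall back silently to Claude Code agents only; no install prompt.
  :
fi
```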

Also detect the project type to tailor agent prompts:

bash
# Detect project type
ls package.json 2>/dev/null    # Node.js/React
ls pyproject.toml 2>/dev/null  # Python
ls Cargo.toml 2>/dev/null      # Rust
ls go.mod 2>/dev/null          # Go
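The same checks can be folded into one helper that returns a project-type label to interpolate into agent prompts. The function name and labels are illustrative, not part of the skill.

```shell
# Map a marker file in the current directory to a project-type label.
detect_project_type() {
  if   [ -f package.json ];   then echo "node"
  elif [ -f pyproject.toml ]; then echo "python"
  elif [ -f Cargo.toml ];     then echo "rust"
  elif [ -f go.mod ];         then echo "go"
  else                             echo "unknown"
  fi
}
```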

Scope Modes

Parse [arguments] to determine analysis scope:

| Scope | What it covers | Typical agents |
|-------|----------------|----------------|
| full | UX + API + Architecture + Docs (default) | 5 Claude + Codex (if available) |
| ux | Frontend navigation, information density, user journey, empty state, onboarding | 3 Claude + Codex (if available) |
| api | Backend API coverage, endpoint health, error handling, consistency | 2 Claude + Codex (if available) |
| arch | Module structure, dependency graph, code duplication, separation of concerns | 2 Claude + Codex (if available) |
| compare X Y | Self-audit + competitive benchmarking (invokes /competitors-analysis) | 3 Claude + competitors-analysis |
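One way to sketch the scope-to-plan mapping is a shell case statement. The output format (`claude=N codex=auto`) is made up here for illustration; the skill itself only describes the mapping in prose.

```shell
# Translate a scope argument into an agent plan matching the table above.
plan_agents() {
  case "$1" in
    full)    echo "claude=5 codex=auto" ;;
    ux)      echo "claude=3 codex=auto" ;;
    api)     echo "claude=2 codex=auto" ;;
    arch)    echo "claude=2 codex=auto" ;;
    compare) echo "claude=3 competitors=yes" ;;
    *)       echo "claude=5 codex=auto" ;;  # unknown scope: fall back to full
  esac
}
```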

Phase 1: Parallel Exploration

Launch all exploration agents simultaneously using the Task tool (background mode).

Claude Code Agents (always)

For each dimension, spawn a Task agent with subagent_type: Explore and run_in_background: true:

Agent A — Frontend Navigation & Information Density

code
Explore the frontend navigation structure and entry points:
1. App.tsx: How many top-level components are mounted simultaneously?
2. Left sidebar: How many buttons/entries? What does each link to?
3. Right sidebar: How many tabs? How many sections per tab?
4. Floating panels: How many drawers/modals? Which overlap in functionality?
5. Count total first-screen interactive elements for a new user.
6. Identify duplicate entry points (same feature accessible from 2+ places).
Give specific file paths, line numbers, and element counts.

Agent B — User Journey & Empty State

code
Explore the new user experience:
1. Empty state page: What does a user with no sessions see? Count clickable elements.
2. Onboarding flow: How many steps? What information is presented?
3. Prompt input area: How many buttons/controls surround the input box? Which are high-frequency vs low-frequency?
4. Mobile adaptation: How many nav items? How does it differ from desktop?
5. Estimate: Can a new user complete their first conversation in 3 minutes?
Give specific file paths, line numbers, and UX assessment.

Agent C — Backend API & Health

code
Explore the backend API surface:
1. List ALL API endpoints (method + path + purpose).
2. Identify endpoints that are unused or have no frontend consumer.
3. Check error handling consistency (do all endpoints return structured errors?).
4. Check authentication/authorization patterns (which endpoints require auth?).
5. Identify any endpoints that duplicate functionality.
Give specific file paths and line numbers.

Agent D — Architecture & Module Structure (full/arch scope only)

code
Explore the module structure and dependencies:
1. Map the module dependency graph (which modules import which).
2. Identify circular dependencies or tight coupling.
3. Find code duplication across modules (same pattern in 3+ places).
4. Check separation of concerns (does each module have a single responsibility?).
5. Identify dead code or unused exports.
Give specific file paths and line numbers.

Agent E — Documentation & Config Consistency (full scope only)

code
Explore documentation and configuration:
1. Compare README claims vs actual implemented features.
2. Check config file consistency (base.yaml vs .env.example vs code defaults).
3. Find outdated documentation (references to removed features/files).
4. Check test coverage gaps (which modules have no tests?).
Give specific file paths and line numbers.

Codex CLI Agents (auto-detected)

If Codex CLI was detected in Step 0, launch parallel Codex analyses via background Bash.

Each Codex invocation gets the same dimensional prompt but from a different model's perspective:

bash
codex -m o4-mini \
  -c model_reasoning_effort="high" \
  --full-auto \
  "Analyze the frontend navigation structure of this project. Count all interactive elements visible to a new user on first screen. Identify duplicate entry points where the same feature is accessible from 2+ places. Give specific file paths and counts."

Run 2-3 Codex commands in parallel (background Bash), one per major dimension.

Important: Codex runs in the project's working directory. It has full filesystem access. The --full-auto flag (or --dangerously-bypass-approvals-and-sandbox for older versions) enables autonomous execution.
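The fan-out/fan-in pattern can be sketched as below. The `echo` is a stand-in for the real `codex` invocation shown above, and `run_parallel` is a hypothetical wrapper, not part of the skill.

```shell
# Run one background job per dimension prompt, write each result to its
# own file, and block until all jobs have finished.
run_parallel() {
  outdir=$(mktemp -d)
  i=0
  for prompt in "$@"; do
    i=$((i + 1))
    # Stand-in for: codex -m o4-mini --full-auto "$prompt"
    echo "analysis of: $prompt" > "$outdir/dim$i.txt" &
  done
  wait   # fan-in: every background job must finish before synthesis
  echo "$outdir"
}
```

Usage: `results=$(run_parallel "frontend navigation" "API surface")`, then read the per-dimension files from `$results` during synthesis.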

Phase 2: Competitive Benchmarking (compare scope only)

When scope is compare, invoke the competitors-analysis skill for each competitor:

code
Use the Skill tool to invoke: /competitors-analysis {competitor-name} {competitor-url}

This delegates to the separate competitors-analysis skill, which handles:

  • Repository cloning and validation
  • Evidence-based code analysis (file:line citations)
  • Competitor profile generation

Phase 3: Synthesis

After all agents complete, synthesize findings in the main conversation context.

Cross-Validation

Compare findings across agents (Claude vs Claude, Claude vs Codex):

  • Agreement = high confidence finding
  • Disagreement = investigate deeper (one agent may have missed context)
  • Codex-only finding = different model perspective, validate manually
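When findings are normalized to one line each, the three agreement buckets fall out of comm(1). The file names and toy findings below are made up for illustration.

```shell
cd "$(mktemp -d)"
# Toy finding lists, one finding per line, sorted as comm(1) requires.
printf '12 first-screen buttons\nduplicate export entry\n' | sort > claude.txt
printf '12 first-screen buttons\nunused /api/v1/ping\n'    | sort > codex.txt

comm -12 claude.txt codex.txt   # in both lists  -> high confidence
comm -23 claude.txt codex.txt   # Claude-only    -> re-check context
comm -13 claude.txt codex.txt   # Codex-only     -> validate manually
```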

Quantification

Extract hard numbers from agent reports:

| Metric | What to measure |
|--------|-----------------|
| First-screen interactive elements | Total count of buttons/links/inputs visible to a new user |
| Feature entry point duplication | Number of features with 2+ entry points |
| API endpoints without frontend consumer | Count of unused backend routes |
| Onboarding steps to first value | Steps from launch to first successful action |
| Module coupling score | Number of circular or bi-directional dependencies |
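Extraction can be as simple as sed over the agent reports, assuming the agents were asked to emit `metric: value` lines; that report format is an assumption for this sketch, not part of the skill.

```shell
cd "$(mktemp -d)"
# Toy agent report with one metric per line.
printf 'first-screen elements: 27\nduplicate entry points: 4\n' > agent_a.txt

# Pull a single named metric out of the report.
elements=$(sed -n 's/^first-screen elements: //p' agent_a.txt)
echo "$elements"
```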

Structured Output

Produce a layered optimization report:

markdown
## Product Analysis Report

### Executive Summary
[1-2 sentences: key finding]

### Quantified Findings
| Metric | Value | Assessment |
|--------|-------|------------|
| ... | ... | ... |

### P0: Critical (block launch)
[Issues that prevent basic usability]

### P1: High Priority (launch week)
[Issues that significantly degrade experience]

### P2: Medium Priority (next sprint)
[Issues worth addressing but not blocking]

### Cross-Model Insights
[Findings that only one model identified — worth investigating]

### Competitive Position (if compare scope)
[How we compare on key dimensions]

Workflow Checklist

  • Parse [arguments] for scope
  • Auto-detect Codex CLI availability (which codex)
  • Auto-detect project type (package.json / pyproject.toml / etc.)
  • Launch Claude Code Explore agents (3-5 parallel, background)
  • Launch Codex CLI commands (2-3 parallel, background) if detected
  • Invoke /competitors-analysis if compare scope
  • Collect all agent results
  • Cross-validate findings
  • Quantify metrics
  • Generate structured report with P0/P1/P2 priorities
