Research Summarization

research-summarizer

by alirezarezvani

Structured research summarization agent skill for non-dev users. Handles academic papers, web articles, reports, and documentation. Extracts key findings, generates comparative analyses, and produces properly formatted citations. Use when: user wants to summarize a research paper, compare multiple sources, extract citations from documents, or create structured research briefs. Plugin for Claude Code, Codex, Gemini CLI, and OpenClaw.

3.7k · Search & Fetch · Unscanned · March 23, 2026

Install

claude skill add --url github.com/openclaw/skills/tree/main/skills/alirezarezvani/research-summarizer

Documentation

Research Summarizer

Read less. Understand more. Cite correctly.

Structured research summarization workflow that turns dense source material into actionable briefs. Built for product managers, analysts, founders, and anyone who reads more than they should have to.

Not a generic "summarize this" — a repeatable framework that extracts what matters, compares across sources, and formats citations properly.


Slash Commands

| Command             | What it does                                      |
|---------------------|---------------------------------------------------|
| /research:summarize | Summarize a single source into a structured brief |
| /research:compare   | Compare 2-5 sources side-by-side with synthesis   |
| /research:cite      | Extract and format all citations from a document  |

When This Skill Activates

Recognize these patterns from the user:

  • "Summarize this paper / article / report"
  • "What are the key findings in this document?"
  • "Compare these sources"
  • "Extract citations from this PDF"
  • "Give me a research brief on [topic]"
  • "Break down this whitepaper"
  • Any request involving: summarize, research brief, literature review, citation, source comparison

If the user has a document and wants structured understanding → this skill applies.


Workflow

/research:summarize — Single Source Summary

  1. Identify source type

    • Academic paper → use IMRAD structure (Introduction, Methods, Results, and Discussion)
    • Web article → use claim-evidence-implication structure
    • Technical report → use executive summary structure
    • Documentation → use reference summary structure
  2. Extract structured brief

    ```
    Title: [exact title]
    Author(s): [names]
    Date: [publication date]
    Source Type: [paper | article | report | documentation]

    ## Key Thesis
    [1-2 sentences: the central argument or finding]

    ## Key Findings
    1. [Finding with supporting evidence]
    2. [Finding with supporting evidence]
    3. [Finding with supporting evidence]

    ## Methodology
    [How they arrived at these findings — data sources, sample size, approach]

    ## Limitations
    - [What the source doesn't cover or gets wrong]

    ## Actionable Takeaways
    - [What to do with this information]

    ## Notable Quotes
    > "[Direct quote]" (p. X)
    ```

  3. Assess quality

    • Source credibility (peer-reviewed, reputable outlet, primary vs secondary)
    • Evidence strength (data-backed, anecdotal, theoretical)
    • Recency (when published, still relevant?)
    • Bias indicators (funding source, author affiliation, methodology gaps)

/research:compare — Multi-Source Comparison

  1. Collect sources (2-5 documents)

  2. Summarize each using the single-source workflow above

  3. Build comparison matrix

    ```
    | Dimension        | Source A        | Source B        | Source C        |
    |------------------|-----------------|-----------------|-----------------|
    | Central Thesis   | ...             | ...             | ...             |
    | Methodology      | ...             | ...             | ...             |
    | Key Finding      | ...             | ...             | ...             |
    | Sample/Scope     | ...             | ...             | ...             |
    | Credibility      | High/Med/Low    | High/Med/Low    | High/Med/Low    |
    ```

  4. Synthesize

    • Where do sources agree? (convergent findings = stronger signal)
    • Where do they disagree? (divergent findings = needs investigation)
    • What gaps exist across all sources?
    • What's the weight of evidence for each position?
  5. Produce synthesis brief

    ```
    ## Consensus Findings
    [What most sources agree on]

    ## Contested Points
    [Where sources disagree, with strongest evidence for each side]

    ## Gaps
    [What none of the sources address]

    ## Recommendation
    [Based on weight of evidence, what should the reader believe/do?]
    ```


/research:cite — Citation Extraction

  1. Scan document for all references, footnotes, in-text citations
  2. Extract and format using the requested style (APA 7 default)
  3. Classify citations by type:
    • Primary sources (original research, data)
    • Secondary sources (reviews, meta-analyses, commentary)
    • Tertiary sources (textbooks, encyclopedias)
  4. Output sorted bibliography with classification tags

Supported citation formats:

  • APA 7 (default) — social sciences, business
  • IEEE — engineering, computer science
  • Chicago — humanities, history
  • Harvard — general academic
  • MLA 9 — arts, humanities
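To make the style differences concrete, here is a minimal Python sketch that renders the same reference in APA 7 and IEEE form. The reference data, field names, and simplified formatting rules are illustrative assumptions, not the skill's implementation; real APA 7 and IEEE rules cover many more cases (editions, DOIs, author-count thresholds, and so on).

```python
# Hypothetical reference record; field names are illustrative only.
ref = {
    "authors": ["Doe, J.", "Smith, A."],
    "year": 2023,
    "title": "Measuring summary quality",
    "journal": "Journal of Examples",
    "volume": 12,
    "pages": "34-56",
}

def format_apa(r):
    # APA 7 (simplified): Author, A., & Author, B. (Year). Title. Journal, Vol, pages.
    authors = " & ".join(r["authors"])
    return f'{authors} ({r["year"]}). {r["title"]}. {r["journal"]}, {r["volume"]}, {r["pages"]}.'

def format_ieee(r, n=1):
    # IEEE (simplified): [n] A. Author and B. Author, "Title," Journal, vol. X, pp. Y, Year.
    authors = " and ".join(r["authors"])
    return (f'[{n}] {authors}, "{r["title"]}," {r["journal"]}, '
            f'vol. {r["volume"]}, pp. {r["pages"]}, {r["year"]}.')

print(format_apa(ref))
print(format_ieee(ref))
```

The same record renders as an author-date entry for APA and a numbered entry for IEEE, which is why the skill asks for the target style up front.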

Tooling

scripts/extract_citations.py

CLI utility for extracting and formatting citations from text.

Features:

  • Regex-based citation detection (DOI, URL, author-year, numbered references)
  • Multiple output formats (APA, IEEE, Chicago, Harvard, MLA)
  • JSON export for integration with reference managers
  • Deduplication of repeated citations

Usage:

```bash
# Extract citations from a file (APA format, default)
python3 scripts/extract_citations.py document.txt

# Specify format
python3 scripts/extract_citations.py document.txt --format ieee

# JSON output
python3 scripts/extract_citations.py document.txt --format apa --output json

# From stdin
cat paper.txt | python3 scripts/extract_citations.py --stdin
```
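A minimal sketch of the regex-based detection and deduplication the feature list describes might look like the following. The patterns are deliberately simplified assumptions for illustration; the actual scripts/extract_citations.py may use different patterns and output shapes.

```python
import re

# Simplified detection patterns (illustrative, not the script's actual regexes)
DOI = re.compile(r'\b10\.\d{4,9}/[^\s"<>]+')
URL = re.compile(r'https?://[^\s"<>)]+')
AUTHOR_YEAR = re.compile(r'\(([A-Z][A-Za-z\-]+(?: et al\.)?,\s*\d{4})\)')

def extract_citations(text):
    found = []
    for pattern, kind in ((DOI, "doi"), (URL, "url"), (AUTHOR_YEAR, "author-year")):
        for m in pattern.findall(text):
            # Trim trailing punctuation picked up by greedy matching
            found.append({"type": kind, "value": m.rstrip(".,;")})
    # Deduplicate repeated citations while preserving first-seen order
    seen, unique = set(), []
    for c in found:
        key = (c["type"], c["value"])
        if key not in seen:
            seen.add(key)
            unique.append(c)
    return unique

if __name__ == "__main__":
    print(extract_citations("See (Doe, 2023) and doi:10.1000/xyz123, also (Doe, 2023)."))
```

Returning a list of typed records is what makes the JSON export for reference managers straightforward: the structure serializes directly.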

scripts/format_summary.py

CLI utility for generating structured research summaries.

Features:

  • Multiple summary templates (academic, article, report, executive)
  • Configurable output length (brief, standard, detailed)
  • Markdown and plain text output
  • Key findings extraction with evidence tagging

Usage:

```bash
# Generate structured summary template
python3 scripts/format_summary.py --template academic

# Brief executive summary format
python3 scripts/format_summary.py --template executive --length brief

# All templates listed
python3 scripts/format_summary.py --list-templates

# JSON output
python3 scripts/format_summary.py --template article --output json
```
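The template/length interface shown in the usage above can be sketched roughly as follows. The section lists and the brief-truncation rule are illustrative assumptions, not the behavior of the real scripts/format_summary.py.

```python
# Illustrative template registry; section names mirror the brief format above,
# but this is an assumption, not the script's actual data.
TEMPLATES = {
    "academic": ["Key Thesis", "Key Findings", "Methodology", "Limitations",
                 "Actionable Takeaways", "Notable Quotes"],
    "executive": ["Key Thesis", "Key Findings", "Actionable Takeaways"],
}

def render_template(name, length="standard"):
    sections = TEMPLATES[name]
    if length == "brief":
        # Hypothetical rule: brief output keeps only the lead sections
        sections = sections[:3]
    # Emit a markdown skeleton with a placeholder under each heading
    return "\n\n".join(f"## {s}\n[...]" for s in sections)

print(render_template("executive", length="brief"))
```

Keeping templates as plain data makes it easy to add a new source type without touching the rendering logic.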

Quality Assessment Framework

Rate every source on four dimensions:

| Dimension   | High                              | Medium                        | Low                                  |
|-------------|-----------------------------------|-------------------------------|--------------------------------------|
| Credibility | Peer-reviewed, established author | Reputable outlet, known author | Blog, unknown author, no review     |
| Evidence    | Large sample, rigorous method     | Moderate data, sound approach  | Anecdotal, no data, opinion         |
| Recency     | Published within 2 years          | 2-5 years old                  | 5+ years, may be outdated           |
| Objectivity | No conflicts, balanced view       | Minor affiliations disclosed   | Funded by interested party, one-sided |

Overall Rating:

  • 4 Highs = Strong source — cite with confidence
  • 2+ Mediums = Adequate source — cite with caveats
  • 2+ Lows = Weak source — verify independently before citing
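The three rating rules above can be sketched as a small function. The threshold order and the fallback label for uncovered combinations (for example, three Highs and one Low) are assumptions for illustration, not part of the skill's specification.

```python
def overall_rating(scores):
    """scores: dict mapping the four dimensions to 'High', 'Medium', or 'Low'."""
    highs = sum(1 for v in scores.values() if v == "High")
    mediums = sum(1 for v in scores.values() if v == "Medium")
    lows = sum(1 for v in scores.values() if v == "Low")
    # Check the weak-source rule first so 2+ Lows always dominates
    if lows >= 2:
        return "Weak source - verify independently before citing"
    if highs == 4:
        return "Strong source - cite with confidence"
    if mediums >= 2:
        return "Adequate source - cite with caveats"
    # Fallback for combinations the three rules above do not cover (assumption)
    return "Mixed source - judge case by case"
```

Checking the Low threshold first matters: a source with two Lows should never be upgraded by its remaining Mediums.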

Summary Templates

See references/summary-templates.md for:

  • Academic paper summary template (IMRAD)
  • Web article summary template (claim-evidence-implication)
  • Technical report template (executive summary)
  • Comparative analysis template (matrix + synthesis)
  • Literature review template (thematic organization)

See references/citation-formats.md for:

  • APA 7 formatting rules and examples
  • IEEE formatting rules and examples
  • Chicago, Harvard, MLA quick reference

Proactive Triggers

Flag these without being asked:

  • Source has no date → Note it. Undated sources lose credibility points.
  • Source contradicts other sources → Highlight the contradiction explicitly. Don't paper over disagreements.
  • Source is behind a paywall → Note limited access. Suggest alternatives if known.
  • User provides only one source for a compare → Ask for at least one more. Comparison needs 2+.
  • Citations are incomplete → Flag missing fields (year, author, title). Don't invent metadata.
  • Source is 5+ years old in a fast-moving field → Warn about potential obsolescence.

Installation

One-liner (any tool)

```bash
git clone https://github.com/alirezarezvani/claude-skills.git
cp -r claude-skills/product-team/research-summarizer ~/.claude/skills/
```

Multi-tool install

```bash
./scripts/convert.sh --skill research-summarizer --tool codex|gemini|cursor|windsurf|openclaw
```

OpenClaw

```bash
clawhub install cs-research-summarizer
```

Related Skills

  • product-analytics — Quantitative analysis. Complementary — use research-summarizer for qualitative sources, product-analytics for metrics.
  • competitive-teardown — Competitive research. Complementary — use research-summarizer for individual source analysis, competitive-teardown for market landscape.
  • content-production — Content writing. Research-summarizer feeds content-production — summarize sources first, then write.
  • product-discovery — Discovery frameworks. Complementary — research-summarizer for desk research, product-discovery for user research.
