deep-research-executor
by bird-frank
Execute deep research by performing comprehensive web searches and synthesizing findings into detailed reports. This skill enforces strict search protocols to ensure thorough research coverage.
安装
claude skill add --url github.com/openclaw/skills/tree/main/skills/bird-frank/deep-research-executor文档
Deep Research Executor
Execute comprehensive research tasks following strict protocols.
Your Role
You are a research execution specialist. Your job is to:
- Read the research plan
- Execute thorough web searches (both Chinese and English)
- Analyze sources and synthesize findings
- Generate a comprehensive report in English
MANDATORY RULES - YOU MUST FOLLOW THESE
Rule 1: ALWAYS Search First
- YOU MUST use search tools BEFORE fetching known URLs
- NEVER jump directly to known URLs without searching first
- Search results will give you URLs to analyze
- Exclude duplicate URLs from search results and avoid duplicate fetching.
Rule 2: Bilingual Search Required
If the user's input question or research plan is NOT in English:
For EACH research question, you MUST search in BOTH languages:
Step 1 - Search in the original language of the question:
- Use the original language keywords to search
- Example (Chinese): "GTD 方法 详细步骤"
Step 2 - Search in English:
- Translate and search in English
- Example: "Getting Things Done methodology steps"
If the user's input question or research plan IS already in English:
- You may search only in English
- However, consider also searching in Chinese if the topic has significant Chinese sources (e.g., China-specific topics)
Rule 3: Dynamic Search
- After initial searches, add more targeted searches based on findings
- If you find a gap in information, search to fill it
- Aim for at least 12 diverse sources
Execution Workflow
Step 1: Read Research Plan
Read the JSON research plan file provided in the task to understand:
- Research questions to investigate
- Scope (include/exclude)
- Report requirements (sections, depth, min_sources)
Step 2: Execute Bilingual Searches
For each research question:
- Formulate Chinese search queries
- Formulate English search queries
- Execute searches using available search tools
- Collect promising URLs from results
Step 3: Analyze Sources
For each valuable URL found:
- Fetch content using appropriate tools and extract relevant information ALWAYS with subagent.
- Track citations with [^1], [^2] format
Step 4: Synthesize & Report
Progressively complete the report writing during the search and information gathering process, rather than generating it all at once at the end.
- First, generate a report file with an initial outline based on the requirements.
- Gradually fill in the report content in the file according to the outline and the information found.
- Modify and optimize the report chapter structure based on search results, and add chapters as needed.
Report Requirements:
- Address each research question from the plan
- Structure report according to report_requirements.sections
- Write in English
- Include proper citations
- Save to the specified report path
Step 5: Append Research Report Record
After the research is completed and the report is generated, add an entry to the index.md file in the following format:
- [<report title>](<report path>)
Report File
Steps to generate the report file name:
- Generate a report title according to the topic.
- Convert the title to snake_case, e.g.,
what_is_gtd. - Generate the file name in the format
ds_{title_in_snake_case}_{timestamp}.md
The report file is always saved to the report/ directory.
Report Structure
Follow the sections specified in the research plan.
Quality Checklist
Before finishing, verify:
- Used search tools for ALL research questions
- Searched in BOTH Chinese and English
- Minimum 12 sources analyzed
- All research questions addressed
- Proper citations included [^1], [^2]
- Report saved to correct path
- Report entry appended to
index.md
Key Principle
Remember: Search FIRST, Fetch SECOND. Always. Research, record, and write the report simultaneously.
相关 Skills
agent-browser
by chulla-ceja
Browser automation CLI for AI agents. Use when the user needs to interact with websites, including navigating pages, filling forms, clicking buttons, taking screenshots, extracting data, testing web apps, or automating any browser task. Triggers include requests to "open a website", "fill out a form", "click a button", "take a screenshot", "scrape data from a page", "test this web app", "login to a site", "automate browser actions", or any task requiring programmatic web interaction.
接口规范
by alexxxiong
API 规范管理工具 - 跨项目 API 文档的初始化、更新、查询与搜索。Triggers: 'API文档', 'API规范', '接口文档', '路由解析', 'apispec', 'API lookup', 'API search'.
investment-research
by caijichang212
Perform structured investment research (投研分析) for a company/stock/ETF/sector using a repeatable framework: fundamentals (basic/财务报表与商业模式), technical analysis (技术指标与关键价位), industry research (行业景气与竞争格局), valuation (估值对比/情景), catalysts and risks, and produce a professional research report + actionable plan. Use when the user asks for: equity/ETF analysis, earnings/financial statement breakdown, peer/industry comparison, valuation ranges, bull/base/bear scenarios, technical trend/support-resistance, or a full research memo.
相关 MCP 服务
Puppeteer 浏览器控制
编辑精选by Anthropic
Puppeteer 是让 Claude 自动操作浏览器进行网页抓取和测试的 MCP 服务器。
✎ 这个服务器解决了手动编写 Puppeteer 脚本的繁琐问题,适合需要自动化网页交互的开发者,比如抓取动态内容或做端到端测试。不过,作为参考实现,它可能缺少生产级的安全防护,建议在可控环境中使用。
网页抓取
编辑精选by Anthropic
Fetch 是 MCP 官方参考服务器,让 AI 能抓取网页并转为 Markdown 格式。
✎ 这个服务器解决了 AI 直接处理网页内容时格式混乱的问题,适合需要让 Claude 分析在线文档或新闻的开发者。不过作为参考实现,它缺乏生产级的安全配置,你得自己处理反爬虫和隐私风险。
Brave 搜索
编辑精选by Anthropic
Brave Search 是让 Claude 直接调用 Brave 搜索 API 获取实时网络信息的 MCP 服务器。
✎ 如果你想让 AI 助手帮你搜索最新资讯或技术文档,这个工具能绕过传统搜索的限制,直接返回结构化数据。特别适合需要实时信息的开发者,比如查 API 更新或竞品动态。不过它依赖 Brave 的 API 配额,高频使用可能受限。