Screaming Frog SEO Spider MCP Server

by bzsasson

An MCP server for crawling websites, exporting SEO data, and managing crawl tasks through Screaming Frog SEO Spider.
An MCP (Model Context Protocol) server that gives Claude (or any MCP-compatible client) programmatic access to Screaming Frog SEO Spider — crawl websites, export crawl data, and manage your crawl storage, all from your AI assistant.

Prerequisites

  1. Screaming Frog SEO Spider installed on your machine (tested with v23.x, should work with v16+). Download from: https://www.screamingfrog.co.uk/seo-spider/

  2. A valid Screaming Frog license. The free version has a 500-URL crawl limit. Most MCP features (headless CLI, saving/loading crawls, exports) require a paid license.

  3. Python 3.10+

Important: How the Workflow Works

Screaming Frog uses an internal database that can only be accessed by one process at a time. This means:

You must close the Screaming Frog GUI before the MCP server can access crawl data.

The typical workflow is:

  1. Run your crawl — either through the SF GUI (with all your custom settings, filters, etc.) or via the MCP crawl_site tool.
  2. Close the Screaming Frog GUI — the GUI locks the crawl database. The MCP server's headless CLI cannot read or export data while the GUI is running.
  3. Use the MCP tools — once the GUI is closed, you can list crawls, export data, read CSVs, and more through your AI assistant.

If you forget to close the GUI, the server will detect it and show a clear error message telling you to quit SF first.
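You can also run this check yourself from a terminal before invoking any MCP tools. A minimal sketch (the process name pattern is an assumption; verify it with `ps` on your machine, since it can differ by platform and Screaming Frog version):

```shell
# Check whether a Screaming Frog process is running (and so may be holding
# the crawl database lock). The pattern is an assumed process name.
if pgrep -f "ScreamingFrogSEOSpider" >/dev/null 2>&1; then
  echo "locked: quit Screaming Frog before using the MCP tools"
else
  echo "free: no Screaming Frog process found"
fi
```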

Setup

Option A: Install from PyPI (recommended)

```bash
pip install screaming-frog-mcp
```

Or run directly with uvx (no install needed):

```bash
uvx screaming-frog-mcp
```

Option B: Clone and install from source

```bash
git clone https://github.com/bzsasson/screaming-frog-mcp.git
cd screaming-frog-mcp
python3 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
```

Configure the CLI path

The default Screaming Frog CLI path works for macOS. If you're on Linux or Windows, set the SF_CLI_PATH environment variable:

| OS | Default Path |
| --- | --- |
| macOS | `/Applications/Screaming Frog SEO Spider.app/Contents/MacOS/ScreamingFrogSEOSpiderLauncher` |
| Linux | `/usr/bin/screamingfrogseospider` |
| Windows | `C:\Program Files (x86)\Screaming Frog SEO Spider\ScreamingFrogSEOSpiderCli.exe` |

If you cloned the repo, copy .env.example to .env and edit it.
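For example, a minimal `.env` on Linux (using the default path from the table above; adjust if Screaming Frog is installed elsewhere) would contain:

```shell
# .env — Linux example; point SF_CLI_PATH at your actual executable
SF_CLI_PATH=/usr/bin/screamingfrogseospider
```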

Add to Claude Code

If installed via pip/uvx:

```json
{
  "mcpServers": {
    "screaming-frog": {
      "command": "uvx",
      "args": ["screaming-frog-mcp"],
      "env": {
        "SF_CLI_PATH": "/path/to/ScreamingFrogSEOSpiderLauncher"
      }
    }
  }
}
```

If cloned from source:

```json
{
  "mcpServers": {
    "screaming-frog": {
      "command": "/path/to/screaming-frog-mcp/.venv/bin/python",
      "args": ["/path/to/screaming-frog-mcp/sf_mcp.py"]
    }
  }
}
```

Add to Claude Desktop

Add to your Claude Desktop config (claude_desktop_config.json):

```json
{
  "mcpServers": {
    "screaming-frog": {
      "command": "uvx",
      "args": ["screaming-frog-mcp"],
      "env": {
        "SF_CLI_PATH": "/path/to/ScreamingFrogSEOSpiderLauncher"
      }
    }
  }
}
```

Available Tools

| Tool | Description |
| --- | --- |
| `sf_check` | Verify Screaming Frog is installed; check version and license status |
| `crawl_site` | Start a headless background crawl (see note below) |
| `crawl_status` | Check progress of a running crawl |
| `list_crawls` | List all saved crawls with their database IDs |
| `export_crawl` | Export crawl data as CSV files (many export options available) |
| `read_crawl_data` | Read exported CSV data with pagination and filtering |
| `delete_crawl` | Permanently delete a crawl from the database |
| `storage_summary` | Show disk usage of Screaming Frog's crawl storage |

Usage Examples

Check installation

"Is Screaming Frog installed and licensed?"

The assistant will call sf_check and report version/license info.

Work with existing crawls (recommended flow)

For most use cases, crawl in the Screaming Frog GUI where you have full control over configuration, JavaScript rendering, crawl scope, custom extraction, etc. Then close the GUI and use the MCP to analyze the results:

After you've crawled a site in the Screaming Frog GUI and closed it:

"List my saved crawls" "Export the crawl for example.com" "Show me all pages with missing meta descriptions" "What are the 404 pages?"

Crawl a site via MCP (optional)

"Crawl https://example.com with a max of 100 URLs"

The crawl_site tool can kick off headless crawls via CLI. This is useful for quick re-crawls or automated workflows, but note the limitations compared to the GUI:

  • Uses default crawl settings (no custom extraction, JavaScript rendering config, etc.)
  • You can pass a .seospiderconfig file to customize settings, but the GUI is easier for complex setups
  • The crawl must finish and save before you can export data
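Under the hood, a headless crawl like this maps onto Screaming Frog's CLI flags. A sketch of the kind of command involved, built but not executed here (flag names should be confirmed against `--help` on your version; all paths are placeholders):

```shell
# Assemble the headless crawl command that crawl_site is assumed to wrap.
# Flags follow Screaming Frog's documented CLI options; verify with:
#   "$SF_CLI" --help
SF_CLI="${SF_CLI_PATH:-/usr/bin/screamingfrogseospider}"
CMD="$SF_CLI --crawl https://example.com --headless --save-crawl --output-folder /tmp/sf-out"
# A URL cap (e.g. 100 URLs) would come from a saved config passed via --config:
# CMD="$CMD --config /path/to/limits.seospiderconfig"
echo "$CMD"
```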

Export options

The server supports all of Screaming Frog's export tabs, bulk exports, and reports. Ask the assistant to read the screaming-frog://export-reference resource for the full list, or specify them directly:

```
export_tabs: "Internal:All,Response Codes:All,Page Titles:All"
bulk_export: "All Inlinks,All Outlinks"
save_report: "Crawl Overview"
```
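Because the exports are plain CSVs, you can also filter them with standard command-line tools once they are on disk. A toy example (the file name and column layout here are made up for illustration; real exports' column order varies by Screaming Frog version, and CSV fields can contain quoted commas, so check the header row first):

```shell
# Create a tiny stand-in for an exported internal_all.csv (illustrative only),
# then count data rows whose second column (Meta Description 1) is empty.
printf 'Address,Meta Description 1\nhttps://a.example/,Welcome\nhttps://b.example/,\n' > /tmp/internal_all.csv
awk -F',' 'NR > 1 && $2 == "" { n++ } END { print n+0 }' /tmp/internal_all.csv
# prints 1
```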

Temp file cleanup

Exported CSVs are stored in ~/.cache/sf-mcp/exports/ and are automatically cleaned up after 1 hour.
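If you want to reclaim the space sooner, the same cleanup can be reproduced by hand (the directory comes from the README above; `-mmin +60` mirrors the 1-hour window):

```shell
# Delete exported CSVs older than 60 minutes from the MCP cache directory.
find ~/.cache/sf-mcp/exports/ -type f -mmin +60 -delete 2>/dev/null || true
```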

Troubleshooting

| Problem | Solution |
| --- | --- |
| "GUI is already running" error | Quit the Screaming Frog application, then retry |
| Empty CSV exports (headers only, 0 data rows) | The GUI likely has the database locked; close it and re-export |
| CLI not found | Check that `SF_CLI_PATH` in `.env` points to the correct executable |
| Crawl not appearing in `list_crawls` | Make sure you saved the crawl in the GUI (File > Save) before closing |
| Export times out | Large crawls may need more time; try exporting fewer tabs |

License

MIT

<!-- mcp-name: io.github.bzsasson/screaming-frog-mcp -->
