LinkedIn Profile and Job Scraper

Productivity & Workflow · Editor's Pick

by stickerdaniel

LinkedIn Profile and Job Scraper is a tool that lets Claude scrape LinkedIn profiles, company information, and job details directly.

This server removes the pain of manually copy-pasting LinkedIn data in recruiting and business research — a good fit for headhunters or market analysts who need to pull candidate backgrounds and company updates quickly. That said, LinkedIn's anti-scraping measures change frequently, so data reliability requires ongoing maintenance; pair the results with manual verification.

1.3k stars · GitHub

What is LinkedIn Profile and Job Scraper?

LinkedIn Profile and Job Scraper is a tool that lets Claude scrape LinkedIn profiles, company information, and job details directly.

README

LinkedIn MCP Server

<p align="left"> <a href="https://pypi.org/project/linkedin-scraper-mcp/" target="_blank"><img src="https://img.shields.io/pypi/v/linkedin-scraper-mcp?color=blue" alt="PyPI"></a> <a href="https://github.com/stickerdaniel/linkedin-mcp-server/actions/workflows/ci.yml" target="_blank"><img src="https://github.com/stickerdaniel/linkedin-mcp-server/actions/workflows/ci.yml/badge.svg?branch=main" alt="CI Status"></a> <a href="https://github.com/stickerdaniel/linkedin-mcp-server/actions/workflows/release.yml" target="_blank"><img src="https://github.com/stickerdaniel/linkedin-mcp-server/actions/workflows/release.yml/badge.svg?branch=main" alt="Release"></a> <a href="https://github.com/stickerdaniel/linkedin-mcp-server/blob/main/LICENSE" target="_blank"><img src="https://img.shields.io/badge/License-Apache%202.0-%233fb950?labelColor=32383f" alt="License"></a> </p>

Through this LinkedIn MCP server, AI assistants like Claude can connect to your LinkedIn. Access profiles and companies, search for jobs, or get job details.

Installation Methods

uvx Install · MCP Bundle · Docker · Development

https://github.com/user-attachments/assets/eb84419a-6eaf-47bd-ac52-37bc59c83680

Usage Examples

code
Research the background of this candidate https://www.linkedin.com/in/stickerdaniel/
code
Get this company profile for partnership discussions https://www.linkedin.com/company/inframs/
code
Suggest improvements for my CV to target this job posting https://www.linkedin.com/jobs/view/4252026496
code
What has Anthropic been posting about recently? https://www.linkedin.com/company/anthropicresearch/

Features & Tool Status

| Tool | Description | Status |
| --- | --- | --- |
| get_person_profile | Get profile info with explicit section selection (experience, education, interests, honors, languages, contact_info, posts) | Working |
| connect_with_person | Send a connection request or accept an incoming one, with optional note | Working |
| get_sidebar_profiles | Extract profile URLs from sidebar recommendation sections ("More profiles for you", "Explore premium profiles", "People you may know") on a profile page | Working |
| get_inbox | List recent conversations from the LinkedIn messaging inbox | Working |
| get_conversation | Read a specific messaging conversation by username or thread ID | Working |
| search_conversations | Search messages by keyword | Working |
| send_message | Send a message to a LinkedIn user (requires confirmation) | Working |
| get_company_profile | Extract company information with explicit section selection (posts, jobs) | Working |
| get_company_posts | Get recent posts from a company's LinkedIn feed | Working |
| search_jobs | Search for jobs with keywords and location filters | Working |
| search_people | Search for people by keywords and location | Working |
| get_job_details | Get detailed information about a specific job posting | Working |
| close_session | Close browser session and clean up resources | Working |

[!IMPORTANT] Breaking change: LinkedIn recently made changes to prevent scraping. The newest version uses Patchright with persistent browser profiles instead of Playwright with session files. Old session.json files and LINKEDIN_COOKIE env vars are no longer supported. Run --login again to create a new profile + cookie file that can be mounted in Docker. 02/2026

<br/> <br/>

🚀 uvx Setup (Recommended - Universal)

Prerequisites: Install uv.

Installation

Client Configuration

json
{
  "mcpServers": {
    "linkedin": {
      "command": "uvx",
      "args": ["linkedin-scraper-mcp@latest"],
      "env": { "UV_HTTP_TIMEOUT": "300" }
    }
  }
}

The @latest tag ensures you always run the newest version — uvx checks PyPI on each client launch and updates automatically. The server starts quickly, prepares the shared Patchright Chromium browser cache in the background under ~/.linkedin-mcp/patchright-browsers, and opens a LinkedIn login browser window on the first tool call that needs authentication.

[!NOTE] Early tool calls may return a setup/authentication-in-progress error until browser setup or login finishes. If you prefer to create a session explicitly, run uvx linkedin-scraper-mcp@latest --login.

uvx Setup Help

<details> <summary><b>🔧 Configuration</b></summary>

Transport Modes:

  • Default (stdio): Standard communication for local MCP servers
  • Streamable HTTP: For web-based MCP clients
  • If no transport is specified, the server defaults to stdio
  • An interactive terminal without explicit transport shows a chooser prompt

CLI Options:

  • --login - Open browser to log in and save persistent profile
  • --no-headless - Show browser window (useful for debugging scraping issues)
  • --log-level {DEBUG,INFO,WARNING,ERROR} - Set logging level (default: WARNING)
  • --transport {stdio,streamable-http} - Optional: force transport mode (default: stdio)
  • --host HOST - HTTP server host (default: 127.0.0.1)
  • --port PORT - HTTP server port (default: 8000)
  • --path PATH - HTTP server path (default: /mcp)
  • --logout - Clear stored LinkedIn browser profile
  • --timeout MS - Browser timeout for page operations in milliseconds (default: 5000)
  • --user-data-dir PATH - Path to persistent browser profile directory (default: ~/.linkedin-mcp/profile)
  • --chrome-path PATH - Path to Chrome/Chromium executable (for custom browser installations)

Basic Usage Examples:

bash
# Run with debug logging
uvx linkedin-scraper-mcp@latest --log-level DEBUG

HTTP Mode Example (for web-based MCP clients):

bash
uvx linkedin-scraper-mcp@latest --transport streamable-http --host 127.0.0.1 --port 8080 --path /mcp

Runtime server logs are emitted by FastMCP/Uvicorn.

Tool calls are serialized within a single server process to protect the shared LinkedIn browser session. Concurrent client requests queue instead of running in parallel. Use --log-level DEBUG to see scraper lock wait/acquire/release logs.
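The queueing behavior described above can be sketched with a minimal asyncio lock pattern. This is an illustration of the general technique under stated assumptions, not the server's actual code; the names tool_call and lock here are hypothetical.

```python
import asyncio

async def tool_call(lock: asyncio.Lock, call_order: list, name: str) -> None:
    # A single lock guards the shared browser session, so concurrent
    # requests queue instead of running in parallel.
    async with lock:                    # the second caller waits here
        call_order.append(f"{name}:start")
        await asyncio.sleep(0.01)       # stand-in for browser work
        call_order.append(f"{name}:end")

async def main() -> list:
    lock = asyncio.Lock()
    call_order: list = []
    # Two "concurrent" client requests; the lock fully serializes them,
    # so the start/end pairs never interleave.
    await asyncio.gather(
        tool_call(lock, call_order, "get_person_profile"),
        tool_call(lock, call_order, "search_jobs"),
    )
    return call_order

call_order = asyncio.run(main())
print(call_order)
```

With the lock in place the second call's events always come strictly after the first call finishes, which is the wait/acquire/release sequence the DEBUG logs expose.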

Test with mcp inspector:

  1. Install and run MCP Inspector: bunx @modelcontextprotocol/inspector
  2. Click the pre-filled token URL to open the inspector in your browser
  3. Select Streamable HTTP as Transport Type
  4. Set URL to http://localhost:8080/mcp
  5. Connect
  6. Test tools
</details> <details> <summary><b>❗ Troubleshooting</b></summary>

Installation issues:

  • Ensure you have uv installed: curl -LsSf https://astral.sh/uv/install.sh | sh
  • Check uv version: uv --version (should be 0.4.0 or higher)
  • On first run, uvx downloads all Python dependencies. On slow connections, uv's default 30s HTTP timeout may be too short. The recommended config above already sets UV_HTTP_TIMEOUT=300 (seconds) to avoid this.

Session issues:

  • Browser profile is stored at ~/.linkedin-mcp/profile/
  • Managed browser downloads are cached at ~/.linkedin-mcp/patchright-browsers/
  • Make sure you have only one active LinkedIn session at a time

Login issues:

  • LinkedIn may require a login confirmation in the LinkedIn mobile app for --login
  • You might get a captcha challenge if you have been logging in frequently. Run uvx linkedin-scraper-mcp@latest --login, which opens a browser where you can solve it manually.

Timeout issues:

  • If pages fail to load or elements aren't found, try increasing the timeout: --timeout 10000
  • Users on slow connections may need higher values (e.g., 15000-30000ms)
  • Can also set via environment variable: TIMEOUT=10000
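If you want a higher timeout applied automatically whenever the client launches the server, the TIMEOUT variable can go in the client config's env block alongside UV_HTTP_TIMEOUT. A sketch, with the 15000 ms value chosen arbitrarily:

```json
{
  "mcpServers": {
    "linkedin": {
      "command": "uvx",
      "args": ["linkedin-scraper-mcp@latest"],
      "env": { "UV_HTTP_TIMEOUT": "300", "TIMEOUT": "15000" }
    }
  }
}
```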

Custom Chrome path:

  • If Chrome is installed in a non-standard location, use --chrome-path /path/to/chrome
  • Can also set via environment variable: CHROME_PATH=/path/to/chrome
</details> <br/> <br/>

📦 Claude Desktop MCP Bundle (formerly DXT)

Prerequisites: Claude Desktop.

One-click installation for Claude Desktop users:

  1. Download the latest .mcpb artifact from releases
  2. Click the downloaded .mcpb file to install it into Claude Desktop
  3. Call any LinkedIn tool

On startup, the MCP Bundle starts preparing the shared Patchright Chromium browser cache in the background. If you call a tool too early, Claude will surface a setup-in-progress error. On the first tool call that needs authentication, the server opens a LinkedIn login browser window and asks you to retry after sign-in.

MCP Bundle Setup Help

<details> <summary><b>❗ Troubleshooting</b></summary>

First-time setup behavior:

  • Claude Desktop starts the bundle immediately; browser setup continues in the background
  • If the Patchright Chromium browser is still downloading, retry the tool after a short wait
  • Managed browser downloads are shared under ~/.linkedin-mcp/patchright-browsers/

Login issues:

  • Make sure you have only one active LinkedIn session at a time
  • LinkedIn may require a login confirmation in the LinkedIn mobile app for --login
  • You might get a captcha challenge if you have been logging in frequently. Run uvx linkedin-scraper-mcp@latest --login, which opens a browser where you can solve captchas manually. See the uvx setup for prerequisites.

Timeout issues:

  • If pages fail to load or elements aren't found, try increasing the timeout: --timeout 10000
  • Users on slow connections may need higher values (e.g., 15000-30000ms)
  • Can also set via environment variable: TIMEOUT=10000
</details> <br/> <br/>

🐳 Docker Setup

Prerequisites: Make sure you have Docker installed and running, and uv installed on the host for the one-time --login step.

Authentication

Docker runs headless (no browser window), so you need to create a browser profile locally first and mount it into the container.

Step 1: Create profile on the host (one-time setup)

bash
uvx linkedin-scraper-mcp@latest --login

This opens a browser window where you log in manually (5 minute timeout for 2FA, captcha, etc.). The browser profile and cookies are saved under ~/.linkedin-mcp/. On startup, Docker derives a Linux browser profile from your host cookies and creates a fresh session each time. If you experience stability issues with Docker, consider using the uvx setup instead.

Step 2: Configure Claude Desktop with Docker

json
{
  "mcpServers": {
    "linkedin": {
      "command": "docker",
      "args": [
        "run", "--rm", "-i",
        "-v", "~/.linkedin-mcp:/home/pwuser/.linkedin-mcp",
        "stickerdaniel/linkedin-mcp-server:latest"
      ]
    }
  }
}

[!NOTE] Docker creates a fresh session on each startup. Sessions may expire over time — run uvx linkedin-scraper-mcp@latest --login again if you encounter authentication issues.

[!NOTE] Why can't I run --login in Docker? Docker containers don't have a display server. Create a profile on your host using the uvx setup and mount it into Docker.

Docker Setup Help

<details> <summary><b>🔧 Configuration</b></summary>

Transport Modes:

  • Default (stdio): Standard communication for local MCP servers
  • Streamable HTTP: For web-based MCP clients
  • If no transport is specified, the server defaults to stdio
  • An interactive terminal without explicit transport shows a chooser prompt

CLI Options:

  • --log-level {DEBUG,INFO,WARNING,ERROR} - Set logging level (default: WARNING)
  • --transport {stdio,streamable-http} - Optional: force transport mode (default: stdio)
  • --host HOST - HTTP server host (default: 127.0.0.1)
  • --port PORT - HTTP server port (default: 8000)
  • --path PATH - HTTP server path (default: /mcp)
  • --logout - Clear all stored LinkedIn auth state, including source and derived runtime profiles
  • --timeout MS - Browser timeout for page operations in milliseconds (default: 5000)
  • --user-data-dir PATH - Path to persistent browser profile directory (default: ~/.linkedin-mcp/profile)
  • --chrome-path PATH - Path to Chrome/Chromium executable (rarely needed in Docker)

[!NOTE] --login and --no-headless are not available in Docker (no display server). Use the uvx setup to create profiles.

HTTP Mode Example (for web-based MCP clients):

bash
docker run -it --rm \
  -v ~/.linkedin-mcp:/home/pwuser/.linkedin-mcp \
  -p 8080:8080 \
  stickerdaniel/linkedin-mcp-server:latest \
  --transport streamable-http --host 0.0.0.0 --port 8080 --path /mcp

Runtime server logs are emitted by FastMCP/Uvicorn.

Test with mcp inspector:

  1. Install and run MCP Inspector: bunx @modelcontextprotocol/inspector
  2. Click the pre-filled token URL to open the inspector in your browser
  3. Select Streamable HTTP as Transport Type
  4. Set URL to http://localhost:8080/mcp
  5. Connect
  6. Test tools
</details> <details> <summary><b>❗ Troubleshooting</b></summary>

Docker issues:

  • Make sure Docker is installed
  • Check if Docker is running: docker ps

Login issues:

  • Make sure you have only one active LinkedIn session at a time
  • LinkedIn may require a login confirmation in the LinkedIn mobile app for --login
  • You might get a captcha challenge if you have been logging in frequently. Run uvx linkedin-scraper-mcp@latest --login, which opens a browser where you can solve captchas manually. See the uvx setup for prerequisites.
  • If Docker auth becomes stale after you re-login on the host, restart the Docker container once so it can derive a fresh runtime profile from the new host session.

Timeout issues:

  • If pages fail to load or elements aren't found, try increasing the timeout: --timeout 10000
  • Users on slow connections may need higher values (e.g., 15000-30000ms)
  • Can also set via environment variable: TIMEOUT=10000
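The same TIMEOUT override can be passed into the container with docker run's standard -e flag. A sketch of the client config under that assumption (the 15000 ms value is illustrative):

```json
{
  "mcpServers": {
    "linkedin": {
      "command": "docker",
      "args": [
        "run", "--rm", "-i",
        "-e", "TIMEOUT=15000",
        "-v", "~/.linkedin-mcp:/home/pwuser/.linkedin-mcp",
        "stickerdaniel/linkedin-mcp-server:latest"
      ]
    }
  }
}
```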

Custom Chrome path:

  • If Chrome is installed in a non-standard location, use --chrome-path /path/to/chrome
  • Can also set via environment variable: CHROME_PATH=/path/to/chrome
</details> <br/> <br/>

🐍 Local Setup (Develop & Contribute)

Contributions are welcome! See CONTRIBUTING.md for architecture guidelines and checklists. Please open an issue first to discuss the feature or bug fix before submitting a PR.

Prerequisites: Git and uv installed

Installation

bash
# 1. Clone repository
git clone https://github.com/stickerdaniel/linkedin-mcp-server
cd linkedin-mcp-server

# 2. Install UV package manager (if not already installed)
curl -LsSf https://astral.sh/uv/install.sh | sh

# 3. Install dependencies
uv sync
uv sync --group dev

# 4. Install pre-commit hooks
uv run pre-commit install

# 5. Start the server
uv run -m linkedin_mcp_server

The local server uses the same managed-runtime flow as MCPB and uvx: it prepares the Patchright Chromium browser cache in the background and opens LinkedIn login on the first auth-requiring tool call. You can still run uv run -m linkedin_mcp_server --login when you want to create the session explicitly.

Local Setup Help

<details> <summary><b>🔧 Configuration</b></summary>

CLI Options:

  • --login - Open browser to log in and save persistent profile
  • --no-headless - Show browser window (useful for debugging scraping issues)
  • --log-level {DEBUG,INFO,WARNING,ERROR} - Set logging level (default: WARNING)
  • --transport {stdio,streamable-http} - Optional: force transport mode (default: stdio)
  • --host HOST - HTTP server host (default: 127.0.0.1)
  • --port PORT - HTTP server port (default: 8000)
  • --path PATH - HTTP server path (default: /mcp)
  • --logout - Clear stored LinkedIn browser profile
  • --timeout MS - Browser timeout for page operations in milliseconds (default: 5000)
  • --status - Check if current session is valid and exit
  • --user-data-dir PATH - Path to persistent browser profile directory (default: ~/.linkedin-mcp/profile)
  • --slow-mo MS - Delay between browser actions in milliseconds (default: 0, useful for debugging)
  • --user-agent STRING - Custom browser user agent
  • --viewport WxH - Browser viewport size (default: 1280x720)
  • --chrome-path PATH - Path to Chrome/Chromium executable (for custom browser installations)
  • --help - Show help

Note: Most CLI options have environment variable equivalents. See .env.example for details.
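As a hedged illustration, a minimal .env might set only the variables this README itself confirms; consult .env.example in the repository for the authoritative names.

```bash
# Hypothetical .env sketch — only variables confirmed elsewhere in this README.
# See .env.example for the full, authoritative list.
# Browser timeout in milliseconds (same as --timeout)
TIMEOUT=10000
# Custom Chrome/Chromium binary (same as --chrome-path)
CHROME_PATH=/usr/bin/chromium
```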

HTTP Mode Example (for web-based MCP clients):

bash
uv run -m linkedin_mcp_server --transport streamable-http --host 127.0.0.1 --port 8000 --path /mcp

Claude Desktop:

json
{
  "mcpServers": {
    "linkedin": {
      "command": "uv",
      "args": ["--directory", "/path/to/linkedin-mcp-server", "run", "-m", "linkedin_mcp_server"]
    }
  }
}

stdio is used by default for this config.

</details> <details> <summary><b>❗ Troubleshooting</b></summary>

Login issues:

  • Make sure you have only one active LinkedIn session at a time
  • LinkedIn may require a login confirmation in the LinkedIn mobile app for --login
  • You might get a captcha challenge if you have been logging in frequently. The --login command opens a browser where you can solve it manually.

Scraping issues:

  • Use --no-headless to see browser actions and debug scraping problems
  • Add --log-level DEBUG to see more detailed logging

Session issues:

  • Browser profile is stored at ~/.linkedin-mcp/profile/
  • Use --logout to clear the profile and start fresh

Python/Patchright issues:

  • Check Python version: python --version (should be 3.12+)
  • Reinstall Patchright: uv run patchright install chromium
  • Reinstall dependencies: uv sync --reinstall

Timeout issues:

  • If pages fail to load or elements aren't found, try increasing the timeout: --timeout 10000
  • Users on slow connections may need higher values (e.g., 15000-30000ms)
  • Can also set via environment variable: TIMEOUT=10000

Custom Chrome path:

  • If Chrome is installed in a non-standard location, use --chrome-path /path/to/chrome
  • Can also set via environment variable: CHROME_PATH=/path/to/chrome
</details> <br/> <br/>

Acknowledgements

Built with FastMCP and Patchright.

Use in accordance with LinkedIn's Terms of Service. Web scraping may violate LinkedIn's terms. This tool is for personal use only.

License

This project is licensed under the Apache 2.0 license.

<br>

FAQ

What is LinkedIn Profile and Job Scraper?

It enables AI assistants to interact with LinkedIn by scraping profiles, companies, and job postings, performing detailed data extraction and session management to support recruitment and business research workflows, and simplifying LinkedIn data access with secure credential handling and seamless integration.

Related Skills

Spreadsheet Processing

by anthropics

Universal
Popular

Reads, writes, repairs, cleans, formats, evaluates formulas in, and converts .xlsx, .xlsm, .csv, and .tsv files — suited to modifying existing spreadsheets, generating new reports, or turning messy data into deliverable-grade spreadsheets.

Makes Excel/CSV tasks painless: it reads, writes, repairs, cleans, and converts formats directly, and is especially good at turning messy spreadsheets into deliverable files.

Productivity & Workflow
Not scanned · 109.6k

PDF Processing

by anthropics

Universal
Popular

Reach for it when you need PDF reading and writing, text and table extraction, merging and splitting, rotation and watermarking, form filling, or encryption/decryption; it can also extract images, generate new PDFs, and turn scans into searchable documents via OCR.

No more juggling tools for PDF chores: text and table extraction, merge/split, and OCR are handled in one pass — even scanned documents become searchable.

Productivity & Workflow
Not scanned · 109.6k

Word Documents

by anthropics

Universal
Popular

Covers creating, reading, editing, and restructuring Word/.docx documents — suited to generating reports, memos, letters, and templates, and also handles tables of contents, headers/footers, page numbers, image replacement, find-and-replace, tracked changes and comments, and content extraction.

Handles .docx creation, rewriting, and fine-grained layout; tables of contents, batch replacement, comments and tracked changes, and image updates can all be automated — especially convenient for formal documents.

Productivity & Workflow
Not scanned · 109.6k

Related MCP Servers

Filesystem

Editor's Pick

by Anthropic

Popular

Filesystem is the official MCP reference server that lets LLMs safely read and write the local filesystem.

This server addresses the pain point of letting Claude operate on local files directly, e.g. auto-organizing documents or generating code files. It suits developers who need automated file handling, but note that it is only a reference implementation — harden it yourself before production use.

Productivity & Workflow
82.9k

by wonderwhy-er

Popular

Desktop Commander is an MCP server that lets AI execute terminal commands and manage files and processes directly.

This tool addresses the pain point that AI cannot operate on the local environment directly — a good fit for developers who need automated script debugging or batch file processing. It lets you drive the terminal in natural language, but be careful with permission controls: letting an AI run rm -rf is no joke.

Productivity & Workflow
5.8k

EdgarTools

Editor's Pick

by dgunning

Popular

EdgarTools is an open-source Python library that parses SEC EDGAR filings without requiring an API key.

This tool addresses the pain point of financial data access — it lets AI read structured filings directly, e.g. having Claude analyze Apple's 10-K. It suits quantitative analysts and fintech developers building data pipelines quickly. Note, though, that it depends on the SEC website's stability and may be slow at peak times.

Productivity & Workflow
1.9k
