grokipedia-mcp
Search and Retrieval, by skymoore
Search and retrieve Grokipedia articles with filtering, full-text and citation viewing; explore related pages, extract sections, and follow guided workflows for research and topic comparison.
Grokipedia MCP Server
<a href="https://glama.ai/mcp/servers/@skymoore/grokipedia-mcp"> <img width="380" height="200" src="https://glama.ai/mcp/servers/@skymoore/grokipedia-mcp/badge" alt="Grokipedia MCP Server" /> </a>

MCP server for searching and retrieving content from Grokipedia.
The user of this MCP server assumes full responsibility for interacting with Grokipedia.
Please see the xAI Terms of Service if you have any doubts.
Elon, please don't sue me. I only wanted my agents to have access to truthful information and to stop referencing Wikipedia all the time.
Quick Start
Add this to your MCP configuration file:
{
  "mcpServers": {
    "grokipedia": {
      "command": "uvx",
      "args": ["grokipedia-mcp"]
    }
  }
}
Verifying Installation
You should see the Grokipedia server available with these tools:
- search - Search with filters
- get_page - Get page overview
- get_page_content - Get full content
- get_page_citations - Get citations
- get_related_pages - Get linked pages
- get_page_sections - List all section headers
- get_page_section - Extract specific sections
And these prompts:
- research_topic - Research workflow
- find_sources - Find citations
- explore_related - Explore connections
- compare_topics - Compare two topics
Features
- Search with Filters: Search with sorting (relevance/views) and filtering (min views)
- Page Content: Retrieve articles, citations, and metadata with smart truncation
- Related Pages: Discover linked/related articles
- Section Extraction: Get specific sections from long articles
- Smart Suggestions: Helpful alternatives when pages aren't found
- Guided Prompts: Pre-built workflows for research, sources, exploration
Installation (Development)
Using uv:
cd grokipedia-mcp
uv sync
For development with MCP Inspector and CLI tools:
uv sync --dev
Usage
Run with MCP Inspector (Development)
The fastest way to test and debug (requires dev dependencies):
uv run --dev mcp dev main.py
This launches the MCP Inspector UI where you can:
- Explore available tools
- Test search queries
- Retrieve page content
- View structured output
Run Directly
# Using the installed entry point
uv run grokipedia-mcp
# Or as a Python module
uv run python -m grokipedia_mcp
# Or directly
uv run python main.py
Available Tools
search
Search for articles in Grokipedia with filtering and sorting options.
Parameters:
- query (string, required) - Search query
- limit (int, optional, default: 12) - Maximum number of results
- offset (int, optional, default: 0) - Pagination offset
- sort_by (string, optional, default: "relevance") - Sort by "relevance" or "views"
- min_views (int, optional) - Filter to articles with at least this many views
Returns: List of search results with title, slug, snippet, relevance score, and view count.
Examples:
// Basic search
{"query": "machine learning", "limit": 5}
// Sort by most viewed
{"query": "python", "sort_by": "views"}
// Filter popular articles only
{"query": "artificial intelligence", "min_views": 1000}
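For illustration, the sort_by and min_views behavior described above can be sketched in plain Python. The SearchResult type and filter_and_sort helper below are hypothetical, not the server's actual code:

```python
from dataclasses import dataclass

@dataclass
class SearchResult:
    title: str
    slug: str
    relevance: float
    views: int

def filter_and_sort(results, sort_by="relevance", min_views=None):
    # Drop results below the view threshold, if one was given.
    if min_views is not None:
        results = [r for r in results if r.views >= min_views]
    # Sort descending by the requested key.
    key = (lambda r: r.views) if sort_by == "views" else (lambda r: r.relevance)
    return sorted(results, key=key, reverse=True)
```

The actual server applies these options on the API side; this sketch only shows the semantics of the two parameters.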
get_page
Get complete page information including metadata, content preview, and citations summary. Includes smart suggestions of alternatives if the page is not found.
Parameters:
- slug (string, required) - Article identifier (from search results)
- max_content_length (int, optional, default: 5000) - Maximum content length
Returns: Complete page object with metadata, truncated content, and citation summaries.
Features:
- Suggests similar pages if the requested slug doesn't exist
- Provides overview with content preview and citations
Use this when: You need an overview of a page with metadata and a content preview.
Example:
{"slug": "Machine_learning"}
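The "suggests similar pages" behavior can be approximated with stdlib fuzzy matching. The suggest_slugs helper below is an illustrative sketch, not the server's implementation:

```python
import difflib

def suggest_slugs(requested, known_slugs, n=3):
    # Return up to n known slugs that closely resemble the requested one.
    return difflib.get_close_matches(requested, known_slugs, n=n, cutoff=0.6)
```

With a misspelled slug like "Machine_lerning", a matcher of this kind would surface "Machine_learning" as the top suggestion.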
get_page_content
Get only the article content without citations or metadata.
Parameters:
- slug (string, required) - Article identifier
- max_length (int, optional, default: 10000) - Maximum content length
Returns: Only the article content (title and content text).
Use this when: You need to read the full article content without citations.
Example:
{"slug": "Machine_learning", "max_length": 15000}
get_page_citations
Get the citations list for a specific page.
Parameters:
- slug (string, required) - Article identifier
- limit (int, optional) - Maximum number of citations to return (returns all if not specified)
Returns: List of citations with titles, URLs, and descriptions. Includes total count and returned count.
Use this when: You need to access source references and citations.
Examples:
// Get all citations
{"slug": "Machine_learning"}
// Get first 10 citations only
{"slug": "Machine_learning", "limit": 10}
get_related_pages
Get pages that are linked from a specific article.
Parameters:
- slug (string, required) - Article identifier
- limit (int, optional, default: 10) - Maximum number of related pages to return
Returns: List of related/linked pages with titles and slugs.
Use this when: You want to discover related topics or explore connections between articles.
Examples:
// Get related pages
{"slug": "Machine_learning"}
// Get more related pages
{"slug": "Quantum_computing", "limit": 20}
get_page_sections
Get a list of all section headers in an article.
Parameters:
- slug (string, required) - Article identifier
Returns: List of all section headers with their levels (h1, h2, h3, etc.).
Use this when: You want to see the structure/outline of an article before reading specific sections.
Example:
{"slug": "Machine_learning"}
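A header listing of this kind can be sketched with a regex over Markdown headings. The list_sections helper below is illustrative; the server's actual parsing may differ:

```python
import re

def list_sections(markdown_text):
    # Match lines beginning with 1-6 '#' characters; the count gives the level.
    headers = []
    for m in re.finditer(r"^(#{1,6})\s+(.*)$", markdown_text, re.MULTILINE):
        headers.append({"level": f"h{len(m.group(1))}", "header": m.group(2).strip()})
    return headers
```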
get_page_section
Extract a specific section from an article by header name.
Parameters:
- slug (string, required) - Article identifier
- section_header (string, required) - Section header to extract (case-insensitive)
- max_length (int, optional, default: 5000) - Maximum section content length
Returns: Content of the specified section only.
Use this when: You need just one section of a long article (e.g., "Applications", "History", "Examples").
Examples:
// Get specific section
{"slug": "Neural_networks", "section_header": "Applications"}
// Get longer section
{"slug": "Python", "section_header": "Syntax", "max_length": 10000}
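Case-insensitive section extraction of this kind can be sketched as follows. The extract_section helper is hypothetical and assumes Markdown-style "#" headings; the real implementation may differ:

```python
import re

def extract_section(markdown_text, section_header, max_length=5000):
    # Find the header line, ignoring case.
    pattern = re.compile(
        r"^(#{1,6})\s+" + re.escape(section_header) + r"\s*$",
        re.MULTILINE | re.IGNORECASE,
    )
    m = pattern.search(markdown_text)
    if m is None:
        return None
    # The section ends at the next header of the same or higher level.
    level = len(m.group(1))
    stop = re.compile(r"^#{1,%d}\s+" % level, re.MULTILINE)
    nxt = stop.search(markdown_text, m.end())
    body = markdown_text[m.end():nxt.start() if nxt else len(markdown_text)]
    return body.strip()[:max_length]
```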
Note: Articles can be 100,000+ characters. Content is automatically truncated to prevent overwhelming LLM context windows. Use the max_length parameters to control the amount returned.
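As a rough sketch of such length-capping (the truncate helper and its "[truncated]" marker are illustrative assumptions, not the server's actual output):

```python
def truncate(content: str, max_length: int) -> str:
    # Return content unchanged when it fits; otherwise cut it at
    # max_length characters and append a marker noting the cut.
    if len(content) <= max_length:
        return content
    return content[:max_length].rstrip() + " [truncated]"
```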
Prompts
The server provides pre-built prompts for common workflows:
research_topic
Guided workflow to research a topic: search → retrieve → analyze related pages and citations
find_sources
Find authoritative sources and citations for academic/research purposes
explore_related
Discover connections between topics and suggested further reading
compare_topics
Compare two topics side-by-side with their content and citations
Architecture
The server uses:
- FastMCP for declarative MCP server implementation
- grokipedia-api-sdk AsyncClient for API communication
- Lifespan context for client connection management
- Structured output using Pydantic models from the SDK
- Comprehensive error handling with specific exception types
Error Handling
The server handles various error scenarios:
- ValueError for invalid parameters or pages that are not found
- RuntimeError for network or API errors
- Detailed logging at debug, info, warning, and error levels
Development
Project Structure
grokipedia-mcp/
├── grokipedia_mcp/
│   ├── __init__.py      # Package exports
│   ├── __main__.py      # CLI entry point
│   └── server.py        # FastMCP server implementation
├── main.py              # Direct execution entry point
├── pyproject.toml       # Project configuration
└── README.md            # This file
Testing
Use the MCP Inspector for interactive testing:
uv run mcp dev main.py
License
MIT