io.github.shinpr/mcp-local-rag

Coding & Debugging

by shinpr

An easy-to-deploy local RAG server that starts with minimal configuration, well suited to quickly building a private knowledge retrieval and Q&A workflow.

If you want to stand up private knowledge retrieval and Q&A quickly, give it a try: it deploys as a local RAG service with minimal configuration, is easy to get started with, and keeps your data on your own machine.

What is io.github.shinpr/mcp-local-rag?

An easy-to-deploy local RAG server that starts with minimal configuration, well suited to quickly building a private knowledge retrieval and Q&A workflow.

README

<p align="center"> <img src="assets/banner.jpg" alt="MCP Local RAG — Search below the surface." width="600" /> </p>

MCP Local RAG

Badges: GitHub stars · npm version · License: MIT · TypeScript · MCP Registry

Local RAG for developers via MCP or CLI. Semantic search with keyword boost for exact technical terms — fully private, zero setup.

Features

  • Semantic search with keyword boost: Vector search runs first, then keyword matching boosts exact matches. Terms like useEffect, error codes, and class names rank higher instead of being matched only by semantic similarity.

  • Smart semantic chunking: Chunks documents by meaning, not character count. Uses embedding similarity to find natural topic boundaries, keeping related content together and splitting where topics change.

  • Quality-first result filtering: Groups results by relevance gaps instead of arbitrary top-K cutoffs. You get fewer but more trustworthy chunks.

  • Runs entirely locally: No API keys, no cloud, no data leaving your machine. Works fully offline after the first model download.

  • Zero-friction setup: One npx command. No Docker, no Python, no servers to manage. Use via MCP, CLI, or both. Optional Agent Skills help AI assistants form better queries and interpret results.

Quick Start

Set BASE_DIR to the folder you want to search. Documents must live under it.

Add the MCP server to your AI coding tool:

For Cursor — Add to ~/.cursor/mcp.json:

```json
{
  "mcpServers": {
    "local-rag": {
      "command": "npx",
      "args": ["-y", "mcp-local-rag"],
      "env": {
        "BASE_DIR": "/path/to/your/documents"
      }
    }
  }
}
```

For Codex — Add to ~/.codex/config.toml:

```toml
[mcp_servers.local-rag]
command = "npx"
args = ["-y", "mcp-local-rag"]

[mcp_servers.local-rag.env]
BASE_DIR = "/path/to/your/documents"
```

For Claude Code — Run this command:

```bash
claude mcp add local-rag --scope user --env BASE_DIR=/path/to/your/documents -- npx -y mcp-local-rag
```

Restart your tool, then start using it:

```text
You: "Ingest api-spec.pdf"
Assistant: Successfully ingested api-spec.pdf (47 chunks created)

You: "What does the API documentation say about authentication?"
Assistant: Based on the documentation, authentication uses OAuth 2.0 with JWT tokens.
           The flow is described in section 3.2...
```

Or use directly as CLI — no MCP server needed:

```bash
npx mcp-local-rag ingest ./docs/
npx mcp-local-rag query "authentication API"
```

That's it. No Docker, no Python, no server setup.

Why This Exists

You want AI to search your documents—technical specs, research papers, internal docs. But most solutions send your files to external APIs.

Privacy. Your documents might contain sensitive data. This runs entirely locally.

Cost. External embedding APIs charge per use. This is free after the initial model download.

Offline. Works without internet after setup.

Code search. Pure semantic search misses exact terms like useEffect or ERR_CONNECTION_REFUSED. Keyword boost catches both meaning and exact matches.

Agent reality. In practice, many AI environments mainly use tool calling. CLI support and Agent Skills make the same workflows available even without full MCP integration.

Usage

mcp-local-rag provides two interfaces: an MCP server for AI coding tools and a CLI for direct use from the terminal.

Using with MCP

The MCP server provides 6 tools: ingest_file, ingest_data, query_documents, list_files, delete_file, status.

Ingesting Documents

```text
"Ingest the document at /Users/me/docs/api-spec.pdf"
```

Supports PDF, DOCX, TXT, and Markdown. The server extracts text, splits it into chunks, generates embeddings locally, and stores everything in a local vector database.

Re-ingesting the same file replaces the old version automatically.

Ingesting HTML Content

Use ingest_data to ingest HTML content retrieved by your AI assistant (via web fetch, curl, browser tools, etc.):

```text
"Fetch https://example.com/docs and ingest the HTML"
```

The server extracts main content using Readability (removes navigation, ads, etc.), converts to Markdown, and indexes it. Perfect for:

  • Web documentation
  • HTML retrieved by the AI assistant
  • Clipboard content

HTML is automatically cleaned—you get the article content, not the boilerplate.

Note: The RAG server itself doesn't fetch web content—your AI assistant retrieves it and passes the HTML to ingest_data. This keeps the server fully local while letting you index any content your assistant can access. Please respect website terms of service and copyright when ingesting external content.

Searching Documents

```text
"What does the API documentation say about authentication?"
"Find information about rate limiting"
"Search for error handling best practices"
```

Search uses semantic similarity with keyword boost. This means useEffect finds documents containing that exact term, not just semantically similar React concepts.

Results include text content, source file, document title, and relevance score. The document title provides context for each chunk, helping identify which document a result belongs to. Adjust result count with limit (1-20, default 10).

Managing Files

```text
"List all files in BASE_DIR and their ingested status"   # See what's indexed
"Delete old-spec.pdf from RAG"     # Remove a file
"Show RAG server status"           # Check system health
```

Using as CLI

All MCP tools are also available as CLI commands — no MCP server needed:

```bash
npx mcp-local-rag ingest ./docs/                 # Bulk ingest files
npx mcp-local-rag query "authentication API"     # Search documents
npx mcp-local-rag list                           # Show ingestion status
npx mcp-local-rag status                         # Database stats
npx mcp-local-rag delete ./docs/old.pdf          # Remove content
npx mcp-local-rag delete --source "https://..."  # Remove by source URL
```

query, list, status, and delete output JSON to stdout for piping (e.g., | jq). ingest outputs progress to stderr. Global options (--db-path, --cache-dir, --model-name) go before the subcommand. Run npx mcp-local-rag --help for details.

⚠️ The CLI does not read your MCP client config (mcp.json, config.toml, etc.). Configure the CLI via flags or environment variables as shown below.

Configuration

CLI flags — global options go before the subcommand, subcommand options go after:

```bash
npx mcp-local-rag --db-path ./my-db query "auth" --base-dir ./docs
```

Environment variables — set in your shell:

```bash
export DB_PATH=./my-db
export BASE_DIR=./docs
npx mcp-local-rag query "auth"
```

Sharing config between MCP and CLI — if your MCP client inherits shell environment variables, you can set them in your shell profile (e.g., ~/.zshrc) so both use the same values. Otherwise, set them explicitly in your MCP config as well.

```bash
export BASE_DIR=/path/to/your/documents
export DB_PATH=/path/to/lancedb
```

Configuration is resolved in this order:

  1. CLI flags (highest priority)
  2. Environment variables
  3. Defaults
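This precedence amounts to a first-defined-wins chain. As a rough illustration (the `resolveOption` helper below is hypothetical, not part of the package):

```typescript
// Sketch of the flag > env var > default resolution order.
// `resolveOption` is an illustrative helper, not the package's internals.
function resolveOption(
  flagValue: string | undefined,
  envValue: string | undefined,
  defaultValue: string,
): string {
  // Nullish coalescing: take the first defined value in priority order.
  return flagValue ?? envValue ?? defaultValue;
}

// Example: DB_PATH with no flag and no env var falls back to the default.
const dbPath = resolveOption(undefined, undefined, "./lancedb/"); // "./lancedb/"
```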

For the full list of CLI flags, environment variables, and defaults, see Configuration.

For CLI-only setups (no MCP server), install Agent Skills so your AI assistant can form better queries and interpret results consistently.

⚠️ --model-name must match your MCP server config. Using a different embedding model against an existing database produces incompatible vectors, silently degrading search quality.

Search Tuning

Adjust these for your use case:

| Variable | Default | Description |
|---|---|---|
| `RAG_HYBRID_WEIGHT` | `0.6` | Keyword boost factor. `0` = semantic only; higher = stronger keyword boost. |
| `RAG_GROUPING` | (not set) | `similar` keeps only the top group; `related` keeps the top 2 groups. |
| `RAG_MAX_DISTANCE` | (not set) | Filter out low-relevance results (e.g., `0.5`). |
| `RAG_MAX_FILES` | (not set) | Limit results to the top N files (e.g., `1` for the single best file). |

Code-focused tuning

For codebases and API specs, increase keyword boost so exact identifiers (useEffect, ERR_*, class names) dominate ranking:

```json
"env": {
  "RAG_HYBRID_WEIGHT": "0.7",
  "RAG_GROUPING": "similar"
}
```

  • 0.7 — balanced semantic + keyword
  • 1.0 — aggressive; exact matches strongly rerank results

Keyword boost is applied after semantic filtering, so it improves precision without surfacing unrelated matches.
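As an illustration of the idea only (this is not the package's actual scoring code; `rankWithKeywordBoost`, the distance-to-score conversion, and the token check are all assumptions), a boost pass over semantically filtered results might look like:

```typescript
// Simplified sketch of semantic-first ranking with a keyword boost.
// Names and the scoring formula are illustrative assumptions.
interface Chunk {
  text: string;
  distance: number; // vector distance: lower = more semantically similar
}

function rankWithKeywordBoost(
  query: string,
  chunks: Chunk[],
  hybridWeight = 0.6, // plays the role of RAG_HYBRID_WEIGHT
): Chunk[] {
  const tokens = query.toLowerCase().split(/\s+/).filter(Boolean);
  const score = (c: Chunk): number => {
    const semantic = 1 - c.distance; // base semantic score
    const text = c.text.toLowerCase();
    // Fraction of query tokens that appear verbatim in the chunk.
    const hits = tokens.filter((t) => text.includes(t)).length;
    const keyword = tokens.length > 0 ? hits / tokens.length : 0;
    return semantic + hybridWeight * keyword;
  };
  return [...chunks].sort((a, b) => score(b) - score(a));
}
```

With a higher weight, a chunk containing the literal term useEffect outranks a chunk that is only semantically close, which is the behavior the tuning above aims for.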

How It Works

TL;DR:

  • Documents are chunked by semantic similarity, not fixed character counts
  • Each chunk is embedded locally using Transformers.js
  • Search uses semantic similarity with keyword boost for exact matches
  • Results are filtered based on relevance gaps, not raw scores

Details

When you ingest a document, the parser extracts text based on file type (PDF via mupdf, DOCX via mammoth, text files directly).

The semantic chunker splits text into sentences, then groups them using embedding similarity. It finds natural topic boundaries where the meaning shifts—keeping related content together instead of cutting at arbitrary character limits. This produces chunks that are coherent units of meaning, typically 500-1000 characters. Markdown code blocks are kept intact—never split mid-block—preserving copy-pastable code in search results.
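A minimal sketch of that boundary detection, assuming a per-sentence `embed` function standing in for the Transformers.js model (the real chunker's thresholds, grouping logic, and size limits will differ):

```typescript
// Sketch of similarity-based chunk boundary detection.
// `embed` is a stand-in for a local embedding model call.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Group consecutive sentences; start a new chunk wherever adjacent
// sentence embeddings drop below the similarity threshold.
function chunkBySimilarity(
  sentences: string[],
  embed: (s: string) => number[],
  threshold = 0.5,
): string[][] {
  const chunks: string[][] = [];
  let current: string[] = [];
  let prev: number[] | null = null;
  for (const s of sentences) {
    const vec = embed(s);
    if (prev && cosineSimilarity(prev, vec) < threshold) {
      chunks.push(current); // topic shifted: close the current chunk
      current = [];
    }
    current.push(s);
    prev = vec;
  }
  if (current.length > 0) chunks.push(current);
  return chunks;
}
```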

Each chunk goes through a Transformers.js embedding model (default: all-MiniLM-L6-v2, configurable via MODEL_NAME), converting text into vectors. Vectors are stored in LanceDB, a file-based vector database requiring no server process.

When you search:

  1. Your query becomes a vector using the same model
  2. Semantic (vector) search finds the most relevant chunks
  3. Quality filters apply (distance threshold, grouping)
  4. Keyword matches boost rankings for exact term matching

The keyword boost ensures exact terms like useEffect or error codes rank higher when they match.
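The gap-based filtering in step 3 can be sketched as follows. The actual heuristic and the exact RAG_GROUPING semantics are not documented here, so `topGroupByGap` is only one plausible reading: cut the result list at the largest jump in distance rather than at a fixed top-K.

```typescript
// Sketch of relevance-gap grouping: keep everything before the
// largest gap in the sorted distance scores. Illustrative only.
function topGroupByGap(distances: number[]): number[] {
  if (distances.length <= 1) return distances;
  const sorted = [...distances].sort((a, b) => a - b);
  let cut = sorted.length; // default: no gap found, keep all
  let maxGap = 0;
  for (let i = 1; i < sorted.length; i++) {
    const gap = sorted[i] - sorted[i - 1];
    if (gap > maxGap) {
      maxGap = gap;
      cut = i;
    }
  }
  return sorted.slice(0, cut);
}
```

For distances like 0.10, 0.12, 0.55, 0.60, the large jump after 0.12 marks the boundary, so only the first two results survive: fewer chunks, but the trustworthy ones.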

Agent Skills

Agent Skills provide optimized prompts that help AI assistants use RAG tools more effectively. Install skills for better query formulation, result interpretation, and ingestion workflows:

```bash
# Claude Code (project-level)
npx mcp-local-rag skills install --claude-code

# Claude Code (user-level)
npx mcp-local-rag skills install --claude-code --global

# Codex
npx mcp-local-rag skills install --codex
```

Skills include:

  • Query optimization: Better search query formulation
  • Result interpretation: Score thresholds and filtering guidelines
  • HTML ingestion: Format selection and source naming

Ensuring Skill Activation

Skills are loaded automatically in most cases—AI assistants scan skill metadata and load relevant instructions when needed. For consistent behavior:

Option 1: Explicit request (natural language) Before RAG operations, request in natural language:

  • "Use the mcp-local-rag skill for this search"
  • "Apply RAG best practices from skills"

Option 2: Add to agent instruction file Add to your AGENTS.md, CLAUDE.md, or other agent instruction file:

```text
When using query_documents, ingest_file, or ingest_data tools,
apply the mcp-local-rag skill for better query formulation and result interpretation.
```

Configuration

Environment Variables and CLI Flags

Both MCP and CLI use the same environment variables. The CLI also accepts equivalent flags.

| Environment Variable | CLI Flag | Default | Description |
|---|---|---|---|
| `BASE_DIR` | `--base-dir` | Current directory | Document root directory (security boundary) |
| `DB_PATH` | `--db-path` | `./lancedb/` | Vector database location |
| `CACHE_DIR` | `--cache-dir` | `./models/` | Model cache directory |
| `MODEL_NAME` | `--model-name` | `Xenova/all-MiniLM-L6-v2` | HuggingFace model ID (available models) |
| `MAX_FILE_SIZE` | `--max-file-size` | `104857600` (100MB) | Maximum file size in bytes |
| `CHUNK_MIN_LENGTH` | `--chunk-min-length` | `50` | Minimum chunk length in characters (1–10000) |

Model choice tips:

  • Multilingual docs → e.g., onnx-community/embeddinggemma-300m-ONNX (100+ languages)
  • Scientific papers → e.g., sentence-transformers/allenai-specter (citation-aware)
  • Code repositories → default often suffices; keyword boost matters more (or jinaai/jina-embeddings-v2-base-code)

⚠️ Changing MODEL_NAME changes embedding dimensions. Delete DB_PATH and re-ingest after switching models.

Client-Specific Setup

Cursor — Global: ~/.cursor/mcp.json, Project: .cursor/mcp.json

```json
{
  "mcpServers": {
    "local-rag": {
      "command": "npx",
      "args": ["-y", "mcp-local-rag"],
      "env": {
        "BASE_DIR": "/path/to/your/documents"
      }
    }
  }
}
```

Codex — ~/.codex/config.toml (note: must use mcp_servers with an underscore)

```toml
[mcp_servers.local-rag]
command = "npx"
args = ["-y", "mcp-local-rag"]

[mcp_servers.local-rag.env]
BASE_DIR = "/path/to/your/documents"
```

Claude Code:

```bash
claude mcp add local-rag --scope user \
  --env BASE_DIR=/path/to/your/documents \
  -- npx -y mcp-local-rag
```

First Run

The embedding model (~90MB) downloads on first use. Takes 1-2 minutes, then works offline.

Security

  • Path restriction: Only files within BASE_DIR are accessible
  • Local only: No network requests after model download
  • Model source: Official HuggingFace repository (verify here)
<details> <summary><strong>Performance</strong></summary>

Tested on MacBook Pro M1 (16GB RAM), Node.js 22:

Query Speed: ~1.2 seconds for 10,000 chunks (p90 < 3s)

Ingestion (10MB PDF):

  • PDF parsing: ~8s
  • Chunking: ~2s
  • Embedding: ~30s
  • DB insertion: ~5s

Memory: ~200MB idle, ~800MB peak (50MB file ingestion)

Concurrency: Handles 5 parallel queries without degradation.

</details> <details> <summary><strong>Troubleshooting</strong></summary>

"No results found"

Documents must be ingested first. Run "List all ingested files" to verify.

Model download failed

Check internet connection. If behind a proxy, configure network settings. The model can also be downloaded manually.

"File too large"

Default limit is 100MB. Split large files or increase MAX_FILE_SIZE.

Slow queries

Check chunk count with status. Large documents with many chunks may slow queries. Consider splitting very large files.

"Path outside BASE_DIR"

Ensure file paths are within BASE_DIR. Use absolute paths.

MCP client doesn't see tools

  1. Verify config file syntax
  2. Restart client completely (Cmd+Q on Mac for Cursor)
  3. Test directly: npx mcp-local-rag should run without errors
</details> <details> <summary><strong>FAQ</strong></summary>

Is this really private? Yes. After model download, nothing leaves your machine. Verify with network monitoring.

Can I use this offline? Yes, after the first model download (~90MB).

How does this compare to cloud RAG? Cloud services offer better accuracy at scale but require sending data externally. This trades some accuracy for complete privacy and zero runtime cost.

What file formats are supported? PDF, DOCX, TXT, Markdown, and HTML (via ingest_data). Not yet: Excel, PowerPoint, images.

Can I change the embedding model? Yes, but you must delete your database and re-ingest all documents. Different models produce incompatible vector dimensions.

GPU acceleration? Transformers.js runs on CPU. GPU support is experimental. CPU performance is adequate for most use cases.

Multi-user support? No. Designed for single-user, local access. Multi-user would require authentication/access control.

How to backup? Copy DB_PATH directory (default: ./lancedb/).

</details> <details> <summary><strong>Development</strong></summary>

Building from Source

```bash
git clone https://github.com/shinpr/mcp-local-rag.git
cd mcp-local-rag
pnpm install
```

Testing

```bash
pnpm test              # Run all tests
pnpm run test:watch    # Watch mode
```

Code Quality

```bash
pnpm run type-check    # TypeScript check
pnpm run check:fix     # Lint and format
pnpm run check:deps    # Circular dependency check
pnpm run check:all     # Full quality check
```

Project Structure

```text
src/
  index.ts      # Entry point
  server/       # MCP tool handlers
  cli/          # CLI subcommands (ingest)
  parser/       # PDF, DOCX, TXT, MD parsing
  chunker/      # Text splitting
  embedder/     # Transformers.js embeddings
  vectordb/     # LanceDB operations
  __tests__/    # Test suites
```
</details>

Contributing

Contributions welcome! See CONTRIBUTING.md for setup and guidelines.

License

MIT License. Free for personal and commercial use.

Blog Posts

Acknowledgments

Built with Model Context Protocol by Anthropic, LanceDB, and Transformers.js.

