# Melchizedek


Persistent memory for Claude Code. Automatically indexes every conversation and provides production-grade hybrid search (BM25 + vectors + reranker) via MCP tools. 100% local, zero config, zero API keys, zero cost.


## Why Melchizedek?

Claude Code forgets everything between sessions and knows nothing about your other projects. Melchizedek fixes both.

It runs silently in the background, indexing your conversations as you work, then gives Claude the ability to search across your entire history, across all projects: past debugging sessions, architectural decisions, error solutions, and code patterns.

No cloud. No API keys. No config. Plug and ask.

## How it works

```
~/.claude/projects/**/*.jsonl       (your conversation transcripts - read-only)
        |
        v
  SessionEnd hook                   (auto-triggers after each session)
        |
        v
  +-------------------+
  |  Indexer          |    Parse JSONL -> chunk pairs -> SHA-256 dedup
  |  (better-sqlite3) |    FTS5 tokenize -> vector embed (optional)
  +-------------------+
        |
        v
  ~/.melchizedek/memory.db           (single SQLite file, WAL mode)
        |
        v
  +-------------------+
  |  MCP Server       |    16 search & management tools
  |  (stdio)          |    Hybrid: BM25 + vectors + RRF + reranker
  +-------------------+
        |
        v
  Claude Code                       (searches your history via MCP)
```

## Search pipeline: 4 levels of graceful degradation

Every layer is optional. The plugin works with BM25 alone and gets better as more components are available.

| Level | Component | What it adds | Dependency |
|---|---|---|---|
| 1 | BM25 (FTS5) | Keyword search with stemming | None (always active) |
| 2 | Dual vectors (sqlite-vec) | Semantic search: text (MiniLM 384d) + code (Jina 768d) | `@huggingface/transformers` (optional) |
| 3 | RRF fusion | Merges BM25 + text vectors + code vectors via Reciprocal Rank Fusion | Vectors enabled |
| 4 | Reranker | Cross-encoder re-scoring of top results | Transformers.js or node-llama-cpp (optional) |
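Reciprocal Rank Fusion (level 3) is simple enough to sketch in a few lines. Assumed here: the conventional constant k = 60 and plain chunk-id lists; the plugin's actual constant and weighting may differ:

```javascript
// Reciprocal Rank Fusion: each ranked list contributes 1/(k + rank) per
// document; summing across lists rewards items that rank well in several
// retrievers (here: BM25, text vectors, code vectors) at once.
function rrfFuse(rankedLists, k = 60) {
  const scores = new Map();
  for (const list of rankedLists) {
    list.forEach((id, i) => {
      scores.set(id, (scores.get(id) ?? 0) + 1 / (k + i + 1));
    });
  }
  return [...scores.entries()]
    .sort((a, b) => b[1] - a[1])
    .map(([id]) => id);
}
```

Because RRF only looks at ranks, it needs no score normalization between BM25 and cosine similarity, which is exactly why it is a popular fusion step for hybrid search.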

## Performance

Measured with `npm run bench`: 100 sessions, 1,000 chunks, a single SQLite file.

| Metric | Result | Target |
|---|---|---|
| Indexing (100 sessions) | ~80 ms | < 10 s |
| BM25 search (mean) | ~0.2 ms | < 50 ms |
| DB size (100 sessions) | ~1.4 MB | < 30 MB |
| Tokens per search | ~125 | < 2,000 |

## Quick Start

### npm (recommended)

```bash
npm install -g melchizedek
```

Add the MCP server to Claude Code:

```bash
claude mcp add --scope user melchizedek -- melchizedek-server
```

### npx (no install)

```bash
claude mcp add --scope user melchizedek -- npx melchizedek-server
```

### From source

```bash
git clone https://github.com/louis49/melchizedek.git
cd melchizedek && npm install && npm run build
claude --mcp-config .mcp.json
```

### Claude Code plugin marketplace (coming soon)

Plugin review pending. In the meantime, use the npm or npx install above.

```bash
claude plugin install melchizedek   # not yet available
```

## Setting up hooks (automatic indexing)

The MCP server provides search tools, but hooks trigger automatic indexing. Without hooks, you'd need to manually index sessions.

For marketplace installs, hooks are configured automatically. For npm/npx/source installs, add hooks to ~/.claude/settings.json.

See docs/installation.md for the full JSON configuration, hook reference, and troubleshooting.

After setup, restart Claude Code. Indexing starts automatically.

## MCP Tools

### Search (start here)

| Tool | Description |
|---|---|
| `m9k_search` | Search indexed conversations. Returns compact snippets. Current project boosted. Supports `since`/`until` date filters and `order` (`score`, `date_asc`, `date_desc`). |
| `m9k_context` | Get a chunk with surrounding context (adjacent chunks in the same session). |
| `m9k_full` | Retrieve full content of chunks by IDs. |

**Progressive retrieval pattern.** `m9k_search` returns ~50 tokens per result, `m9k_context` ~200-300, `m9k_full` ~500-1000. Start with `m9k_search` and drill down only when needed: roughly a 4x token saving versus loading everything.

**Context-aware ranking.** Results from your current project (×1.5) and current session (×1.2) are automatically promoted; cross-project results remain visible.
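The two boosts read as plain score multipliers followed by a re-sort; a minimal sketch under that assumption (the `project` and `sessionId` field names are hypothetical, not the plugin's actual schema):

```javascript
// Promote results from the current project (×1.5) and current session
// (×1.2) by scaling their scores, then re-sort descending. Cross-project
// results keep their original score and so remain visible.
function applyContextBoost(results, { project, sessionId }) {
  return results
    .map(r => ({
      ...r,
      score: r.score
        * (r.project === project ? 1.5 : 1)
        * (r.sessionId === sessionId ? 1.2 : 1),
    }))
    .sort((a, b) => b.score - a.score);
}
```

Multiplicative boosts preserve the relative ordering produced by the search pipeline within each context group, which is why a strong cross-project hit can still outrank a weak local one.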

### Specialized search

| Tool | Description |
|---|---|
| `m9k_file_history` | Find past conversations that touched a specific file. |
| `m9k_errors` | Find past solutions for an error message. |
| `m9k_similar_work` | Find past approaches to similar tasks. Prioritizes rich metadata. |

### Memory management

| Tool | Description |
|---|---|
| `m9k_save` | Manually save a memory note for future recall. |
| `m9k_sessions` | List all indexed sessions, optionally filtered by project. |
| `m9k_info` | Show memory index info: corpus size, search pipeline, embedding worker, usage metrics. |
| `m9k_config` | View or update plugin configuration. |
| `m9k_forget` | Permanently remove a chunk from the index. |
| `m9k_delete_session` | Delete a session from the index. |
| `m9k_ignore_project` | Exclude a project from indexing. Future sessions won't be indexed; existing ones can optionally be purged. |
| `m9k_unignore_project` | Re-enable indexing for a previously ignored project. Purged data is not restored. |
| `m9k_restart` | Restart the MCP server to load fresh code after `npm run build`. Supports `force: true` for stuck processes. |

### Usage guide

| Tool | Description |
|---|---|
| `__USAGE_GUIDE` | Phantom tool. Its description teaches Claude the retrieval pattern and the available tools. |

## Configuration

Zero config by default. Everything is tunable via `m9k_config` or environment variables.

| Setting | Default | Env var |
|---|---|---|
| Database path | `~/.melchizedek/memory.db` | `M9K_DB_PATH` |
| Daemon mode | enabled | `M9K_NO_DAEMON=1` to disable |
| Log level | `warn` | `M9K_LOG_LEVEL` |
| Embeddings enabled | `true` | `M9K_EMBEDDINGS=false` to disable |
| Reranker enabled | `true` | `M9K_RERANKER=false` to disable |

See docs/configuration.md for the full settings reference (20+ options, env vars, config file examples).

## Enhanced Search

Melchizedek works out of the box with BM25 keyword search. Text embeddings (MiniLM) download automatically on first use for semantic search.

For GPU-accelerated code embeddings (Ollama), cross-encoder reranking (GGUF models), platform-specific setup guides, and the full model reference, see Enhanced Search Setup.

## How is this different?

| | Melchizedek | claude-historian-mcp | claude-mem | episodic-memory | mcp-memory-service |
|---|---|---|---|---|---|
| Philosophy | Search engine: indexes everything, you search | Search engine: scans JSONL on demand | Notebook: AI compresses & saves | Search engine | Notebook: AI decides what to store |
| Indexes raw conversations | Yes (JSONL transcripts) | Yes (direct JSONL read, no persistent index) | Compressed summaries | Yes (JSONL) | No (manual `store_memory`) |
| Retroactive on install | Yes (backfills all history) | Yes (reads existing files) | No | Yes | No (empty at start) |
| Search | BM25 + vectors + RRF + reranker | TF-IDF + fuzzy matching | FTS5 + ChromaDB | Vectors only | BM25 + vectors |
| Progressive retrieval | 3 layers (search/context/full) | No | No | No | No |
| 100% offline | Yes | Yes | No (needs API for compression) | Yes | Yes |
| Single-file storage | SQLite | None (reads raw JSONL) | SQLite + ChromaDB | SQLite | sqlite-vec |
| Zero config | Yes | Yes | Yes | Yes | Yes |
| MCP tools | 16 | 10 | 4 | 2 | 12 |
| License | MIT | MIT | AGPL-3.0 | MIT | Apache-2.0 |
| Dual embedding (text + code) | Yes (MiniLM + Jina Code) | No | No | No | No |
| Configurable models | Yes (Transformers.js or Ollama) | No | No (Chroma internal) | No (hardcoded) | Yes (ONNX, Ollama, OpenAI, Cloudflare) |
| Reranker | Cross-encoder (ONNX, GGUF, or HTTP) | No | No | No | Quality scorer (not a search reranker) |
| Privacy | All local, `<private>` tag redaction | All local | Sends data to Anthropic API | All local | All local |
| Multi-instance | Singleton daemon: N Claude windows share one process (Unix socket / Windows named pipe, local fallback) | N separate processes | Shared HTTP worker (`:37777`) | N separate processes | Shared HTTP server |

## Inspirations

This project stands on the shoulders of others. Key ideas borrowed from:

| Project | What we took |
|---|---|
| CASS | RRF hybrid fusion, SHA-256 dedup, auto-fuzzy fallback |
| claude-historian-mcp | Specialized MCP tools (file_history, error_solutions) |
| claude-diary | PreCompact hook (archive before /compact) |

## Known issues

- **Session boost inactive.** Claude Code currently sends an empty `session_id` in the SessionStart hook stdin payload, preventing the ×1.2 session boost from working. The ×1.5 project boost is unaffected and provides the primary context-aware ranking. Related upstream issues: #13668 (empty `transcript_path`), #9188 (stale `session_id`). Melchizedek's session-boost code is tested and ready, and will activate automatically when the upstream fix lands.

## Privacy

- **Zero telemetry.** No tracking, no analytics, no network calls (except the optional lazy model download).
- **Read-only on transcripts.** Never writes to `~/.claude/projects/`. All data lives in `~/.melchizedek/`.
- **`<private>` tag support.** Content between `<private>...</private>` is replaced with `[REDACTED]` before indexing.
- **Local-only.** Your conversations never leave your machine.
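The `<private>` redaction amounts to a single replace over the transcript text before chunking. A sketch under that assumption (the regex is illustrative, not the plugin's actual implementation):

```javascript
// Replace every <private>...</private> span with [REDACTED] before the
// text is chunked and indexed. The non-greedy [\s\S]*? match keeps
// multiple spans independent and covers multi-line content.
function redactPrivate(text) {
  return text.replace(/<private>[\s\S]*?<\/private>/g, '[REDACTED]');
}
```

Running the redaction before chunking (rather than at query time) means the sensitive text never reaches the index or the embedding models at all.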

## Requirements

- Node.js >= 20
- Claude Code >= 2.0
- macOS, Linux, or Windows

## License

MIT


> "Without father, without mother, without genealogy, having neither beginning of days nor end of life."
>
> *Hebrews 7:3*

Built by @louis49
