io.github.shackleai/memory


by shackleai

Persistent memory for AI coding tools. 11 tools for any MCP-compatible AI tool.


<p align="center"> <img src="logo.png" alt="ShackleAI" width="120" /> </p> <h1 align="center">ShackleAI Memory</h1> <p align="center"> <strong>Persistent memory for AI coding tools.</strong> The first MCP-native memory server. </p> <p align="center"> <a href="https://www.npmjs.com/package/@shackleai/memory-mcp"><img src="https://img.shields.io/npm/v/@shackleai/memory-mcp.svg" alt="npm version" /></a> <a href="https://www.npmjs.com/package/@shackleai/memory-mcp"><img src="https://img.shields.io/npm/dw/@shackleai/memory-mcp.svg" alt="npm downloads" /></a> <a href="LICENSE"><img src="https://img.shields.io/badge/license-MIT-blue.svg" alt="MIT License" /></a> </p>

Give Claude Code, Cursor, Windsurf, VS Code Copilot, OpenAI Codex, or any MCP-compatible AI tool persistent memory across sessions. Your AI remembers decisions, conventions, bugs, and context — and picks up exactly where you left off.

Install — One Command

Run this in your project directory:

bash
npx -y @shackleai/memory-mcp@latest setup

This creates two files in your project:

  • .mcp.json — registers the memory server so your AI tool auto-starts it
  • CLAUDE.md — tells the AI to actively use memory every session

Commit both to git so your whole team gets memory. That's it — no config, no API keys, no accounts.
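For reference, the generated `.mcp.json` registers the server the same way the client-specific configs later in this README do. The setup command generates the exact contents; a likely shape:

```json
{
  "mcpServers": {
    "memory": {
      "command": "npx",
      "args": ["-y", "@shackleai/memory-mcp@latest"]
    }
  }
}
```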

npm/npx version too old? If the command fails, see Troubleshooting.

How It Works

code
1. Run the setup command above (one-time)
2. Start your AI tool in the project directory
3. Memory server starts automatically in the background
4. AI stores decisions, conventions, and bugs as you work
5. Next session — AI searches memory and picks up where you left off

You don't need to do anything after setup. The AI sees the memory tools and uses them proactively — storing important decisions, searching for past context, and saving session summaries.

Verify It Works

After setup, start a session and give your AI a task. Then ask:

"What have you stored in memory so far?"

If it calls memory_search and shows stored entries, it's working. In your next session, ask:

"What do you remember about this project?"

It should recall context from the previous session without reading any files.

Alternative Setup Methods

The npx setup command works for all MCP clients. If you prefer client-specific configuration:

<details> <summary>Claude Code (global config)</summary>
bash
claude mcp add memory -- npx -y @shackleai/memory-mcp@latest

This adds memory to your global Claude Code config. It works across all projects, but won't be shared with your team via git.

</details> <details> <summary>Cursor</summary>

Add to ~/.cursor/mcp.json:

json
{
  "mcpServers": {
    "memory": {
      "command": "npx",
      "args": ["-y", "@shackleai/memory-mcp@latest"]
    }
  }
}
</details> <details> <summary>Windsurf</summary>

Add to ~/.codeium/windsurf/mcp_config.json:

json
{
  "mcpServers": {
    "memory": {
      "command": "npx",
      "args": ["-y", "@shackleai/memory-mcp@latest"]
    }
  }
}
</details> <details> <summary>VS Code Copilot</summary>

Add to .vscode/mcp.json in your project:

json
{
  "servers": {
    "memory": {
      "command": "npx",
      "args": ["-y", "@shackleai/memory-mcp@latest"]
    }
  }
}
</details> <details> <summary>Claude Desktop</summary>

Add to your Claude Desktop config:

  • Windows: %APPDATA%\Claude\claude_desktop_config.json
  • macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
json
{
  "mcpServers": {
    "memory": {
      "command": "npx",
      "args": ["-y", "@shackleai/memory-mcp@latest"]
    }
  }
}
</details> <details> <summary>Install globally (if npx doesn't work)</summary>
bash
npm install -g @shackleai/memory-mcp
shackleai-memory setup

Or use the global binary directly in your MCP config:

json
{
  "mcpServers": {
    "memory": {
      "command": "shackleai-memory"
    }
  }
}
</details> <details> <summary>Run from source (for contributors)</summary>
bash
git clone https://github.com/shackleai/memory-mcp.git
cd memory-mcp && npm install && npm run build
json
{
  "mcpServers": {
    "memory": {
      "command": "node",
      "args": ["/absolute/path/to/memory-mcp/dist/index.js"]
    }
  }
}
</details>

First Run

The first run downloads the embedding model (~80MB, one-time). After that, everything works offline.

Features

  • One-command setup — npx -y @shackleai/memory-mcp@latest setup and you're done
  • Fully automatic — auto-detects project on startup, no manual init needed
  • 11 MCP tools — init, store, search, update, delete, list projects, session end, TODO status, export, import, cleanup
  • MCP resources — project context available as a readable resource
  • Local-first — everything stored on your machine at ~/.shackleai/
  • Zero config — no API keys, no cloud account, no setup beyond the install command
  • Offline — local embeddings via MiniLM-L6-v2 (free, runs on CPU)
  • Human-readable — memories stored as Markdown files you can read and edit
  • Git-friendly — version control your AI's memory with standard git
  • Semantic search — find relevant memories by meaning, not just keywords
  • Deduplication — automatically detects and merges duplicate memories
  • Auto-archive — old session files cleaned up based on retention period
  • Multi-project — separate memory spaces per project, auto-detected
  • LLM-portable — switch AI tools anytime, your memory stays

MCP Tools Reference

memory_init

Initialize or switch project context. Auto-called on server startup — only call manually if switching projects mid-session.

code
Input:  { project_path: "/path/to/project" }
Output: { project_name, tech_stack, memory_count, summary }

Auto-detects project name from package.json, pyproject.toml, or directory name. Detects tech stack (Node.js, Python, Rust, Go, Java, Ruby, PHP, .NET, Docker, etc.).
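The detection order above can be sketched as follows. This is illustrative only — the real server is written in Node.js, and its actual logic may differ:

```python
import json
from pathlib import Path

def detect_project_name(project_path: str) -> str:
    """Sketch of the detection order described above:
    package.json name -> pyproject.toml [project] name -> directory name."""
    root = Path(project_path)

    pkg = root / "package.json"
    if pkg.is_file():
        name = json.loads(pkg.read_text()).get("name")
        if name:
            return name

    pyproject = root / "pyproject.toml"
    if pyproject.is_file():
        try:
            import tomllib  # stdlib on Python 3.11+
            name = tomllib.loads(pyproject.read_text()).get("project", {}).get("name")
            if name:
                return name
        except ImportError:
            pass  # fall through to the directory name

    return root.name
```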

memory_store

Save important information to persistent memory. Use for decisions, conventions, bugs, architecture, preferences, TODOs, and context.

code
Input:  {
  content: "We chose PostgreSQL with Prisma ORM for type-safe queries",
  category: "decision",        // decision|convention|bug|architecture|preference|todo|context|session_summary
  importance: "high",           // low|medium|high (optional, default: medium)
  tags: ["database", "orm"]     // optional
}
Output: { id, stored: true, deduplicated: false }

Automatically checks for duplicates. If similar content exists (cosine similarity > 0.9), updates the existing memory instead of creating a new one.
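The dedup rule above boils down to a cosine-similarity check between embedding vectors. A minimal sketch (the server's actual implementation uses its vector index, not a standalone function like this):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    # Standard cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def should_merge(new_vec: list[float], existing_vec: list[float],
                 threshold: float = 0.9) -> bool:
    # Mirrors the documented rule: similarity above the dedup
    # threshold updates the existing memory instead of creating
    # a new one.
    return cosine_similarity(new_vec, existing_vec) > threshold
```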

memory_search

Search past memories by semantic meaning.

code
Input:  { query: "what database are we using", category: "decision", limit: 5 }
Output: { results: [{ id, content, category, relevance, ... }], count }

Uses vector similarity search — finds relevant memories even when wording differs.
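MCP clients issue these searches for you, but it can help to see the wire format. A sketch of the JSON-RPC 2.0 request a client would send — the `tools/call` method comes from the MCP specification, and the arguments mirror the input shown above:

```python
import json

# JSON-RPC 2.0 request an MCP client sends to invoke memory_search.
# "tools/call" is the standard MCP method for tool invocation.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "memory_search",
        "arguments": {
            "query": "what database are we using",
            "category": "decision",
            "limit": 5,
        },
    },
}
wire = json.dumps(request)  # this string goes over stdio to the server
```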

memory_update

Update an existing memory when information changes.

code
Input:  { id: "mem-uuid", content: "Updated content", reason: "Changed approach" }
Output: { updated: true, previous_content }

memory_delete

Remove a memory that is no longer relevant (soft delete).

code
Input:  { id: "mem-uuid" }
Output: { deleted: true }

memory_list_projects

List all projects with stored memories.

code
Input:  {}
Output: { projects: [{ name, path, tech_stack, memory_count, last_session }], count }

memory_session_end

Save a session summary and open items. Creates continuity between sessions.

code
Input:  { summary: "Built auth system with JWT", open_items: ["Add refresh tokens", "Write tests"] }
Output: { saved: true, date: "2026-03-04" }

MCP Resources

The server exposes project context as an MCP resource:

  • memory://project/context — Current project's conventions, decisions, architecture, bugs, and TODOs. MCP clients that support resources can auto-load this at session start.

Storage

All data lives locally on your machine:

code
~/.shackleai/
  db/
    memory.db                    SQLite database + vector index
  projects/
    my-project/
      decisions.md               Key decisions with reasoning
      conventions.md             Coding standards and patterns
      bugs.md                    Known issues and fixes
      architecture.md            Architecture choices
      preferences.md             Developer preferences
      todos.md                   Open items
      context.md                 General context
      sessions/
        2026-03-04.md            Today's session summary
        2026-03-03.md            Yesterday's session
  config.yaml                    Optional configuration

Markdown is the source of truth. You can read, edit, or delete any memory file with a text editor. The SQLite database is the search index.
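Because the layout above is plain files, you can inspect your memories with any tool. A small sketch that lists the per-project Markdown files under the storage root (read-only; it does not touch the SQLite index):

```python
from pathlib import Path

def list_memory_files(base: str = "~/.shackleai") -> list[str]:
    """List per-project Markdown memory files under the storage
    root shown above, as paths relative to projects/."""
    root = Path(base).expanduser() / "projects"
    if not root.is_dir():
        return []
    return sorted(p.relative_to(root).as_posix() for p in root.rglob("*.md"))
```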

Configuration

Create ~/.shackleai/config.yaml (optional — sensible defaults work out of the box):

yaml
# Embedding provider: "local" (free, offline) or "openai" (better quality, requires API key)
embedding:
  provider: local

# Custom storage path (default: ~/.shackleai)
# storage_path: /path/to/custom/location

# Maximum memories per project before oldest are archived
max_memories_per_project: 10000

# Session files older than this are auto-archived
max_session_history_days: 90

# Automatically detect and merge duplicate memories
auto_dedup: true

# Cosine similarity threshold for deduplication (0.0 to 1.0)
dedup_threshold: 0.9
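Conceptually, the config file is an overlay on the defaults above. A sketch of that merge, assuming the defaults listed in the YAML (the server's actual loader may differ):

```python
DEFAULTS = {
    "embedding": {"provider": "local"},
    "max_memories_per_project": 10000,
    "max_session_history_days": 90,
    "auto_dedup": True,
    "dedup_threshold": 0.9,
}

def merge_config(user: dict, defaults: dict = DEFAULTS) -> dict:
    """Overlay user-supplied settings on the defaults; recurses one
    level for nested sections like `embedding`."""
    merged = dict(defaults)
    for key, value in user.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = {**merged[key], **value}
        else:
            merged[key] = value
    return merged
```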

Cloud Mode

By default, ShackleAI Memory runs locally with SQLite — no account needed. Cloud mode syncs your memories to ShackleAI's managed infrastructure via the Gateway, giving you server-side persistence, cross-device access, and PostgreSQL + pgvector-powered semantic search.

Local vs Cloud

|  | Local (default) | Cloud |
|---|---|---|
| Storage | SQLite + sqlite-vec on your machine | PostgreSQL + pgvector via ShackleAI Gateway |
| Embeddings | Local MiniLM-L6-v2 (CPU) | Server-side embeddings |
| Account required | No | Yes (free tier available) |
| Cross-device | No — ~/.shackleai/ is per-machine | Yes — memories persist in the cloud |
| Tools available | All 11 tools | All 11 tools |
| Offline support | Full | Requires internet |

Get an API Key

  1. Go to shackleai.com and sign in (or create an account)
  2. Navigate to Settings > API Keys
  3. Click Create Key — you'll get a unified account key in the format sk_shackle_*
  4. Copy the key. It won't be shown again.

Configure Cloud Mode

Cloud mode connects to the ShackleAI Gateway MCP endpoint with Bearer authentication. Configure your MCP client to point at the Gateway instead of running the local server.

<details> <summary>Claude Code</summary>
bash
claude mcp add memory-cloud --transport http https://gateway.shackleai.com/mcp \
  --header "Authorization: Bearer sk_shackle_YOUR_KEY_HERE"

Or add to .mcp.json in your project:

json
{
  "mcpServers": {
    "memory": {
      "type": "http",
      "url": "https://gateway.shackleai.com/mcp",
      "headers": {
        "Authorization": "Bearer sk_shackle_YOUR_KEY_HERE"
      }
    }
  }
}
</details> <details> <summary>Cursor / Windsurf / VS Code Copilot</summary>

Add to your client's MCP config file:

json
{
  "mcpServers": {
    "memory": {
      "type": "http",
      "url": "https://gateway.shackleai.com/mcp",
      "headers": {
        "Authorization": "Bearer sk_shackle_YOUR_KEY_HERE"
      }
    }
  }
}
</details>

All 11 memory tools (memory_init, memory_store, memory_search, memory_update, memory_delete, memory_list_projects, memory_session_end, memory_status, memory_export, memory_import, memory_cleanup) work identically in cloud mode — no code changes needed.

Advanced: Auto-Init Options

The server auto-detects your project in this order:

  1. CLI argument: --project-path /path/to/project
  2. Environment variable: SHACKLEAI_PROJECT_PATH=/path/to/project
  3. Working directory: Uses process.cwd() (this is what most MCP clients pass)
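The precedence above can be sketched as a simple resolver (illustrative only — the real server is Node.js):

```python
def resolve_project_path(argv: list[str], env: dict, cwd: str) -> str:
    """Sketch of the documented precedence: CLI flag, then
    environment variable, then working directory."""
    # 1. CLI argument: --project-path /path/to/project
    if "--project-path" in argv:
        i = argv.index("--project-path")
        if i + 1 < len(argv):
            return argv[i + 1]
    # 2. Environment variable
    if env.get("SHACKLEAI_PROJECT_PATH"):
        return env["SHACKLEAI_PROJECT_PATH"]
    # 3. Fall back to the working directory the MCP client launched us in
    return cwd
```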

For explicit control, set the project path in your MCP config:

json
{
  "mcpServers": {
    "memory": {
      "command": "npx",
      "args": ["-y", "@shackleai/memory-mcp@latest", "--project-path", "/path/to/project"]
    }
  }
}

Or via environment variable:

json
{
  "mcpServers": {
    "memory": {
      "command": "npx",
      "args": ["-y", "@shackleai/memory-mcp@latest"],
      "env": {
        "SHACKLEAI_PROJECT_PATH": "/path/to/project"
      }
    }
  }
}

Troubleshooting

npx setup fails or "command not found: setup"

Your npm/npx is too old. This typically happens with npm 6.x, which shipped with older Node installers. Check with:

bash
npm --version

If it shows 6.x, fix it:

bash
# Option 1: Update npm (recommended — gets you modern npx)
npm install -g npm@latest

# If that fails on Windows with "Refusing to delete" error:
# Delete the stale files first, then retry:
# Remove-Item "$env:APPDATA\npm\npm.cmd", "$env:APPDATA\npm\npx.cmd" -Force
# npm install -g npm@latest

# Option 2: Install globally instead (works with any npm version)
npm install -g @shackleai/memory-mcp
shackleai-memory setup

"Cannot connect to MCP server" / Server fails to start

Make sure Node.js 20+ is installed. Then verify the server runs:

bash
npx -y @shackleai/memory-mcp --help

If using a global install, verify the binary is in your PATH:

bash
shackleai-memory --help

Claude Code: "memory" not showing in claude mcp list

If you used npx setup, Claude Code reads .mcp.json from your project directory automatically — you don't need claude mcp list to show it. Just start claude in the project directory.

If you used claude mcp add instead, verify with:

bash
claude mcp list
# Should show: memory: connected

AI not storing memories during sessions

The setup command creates a CLAUDE.md file with instructions that tell the AI to use memory proactively. If you already had a CLAUDE.md, the setup appends memory instructions to it. Check that your CLAUDE.md contains the "ShackleAI Memory" section.

If the AI still isn't storing, you can ask it directly:

"Store what you just did in memory"

This confirms the tools work, and the AI will be more proactive about storing in subsequent interactions.

First tool call is slow

The embedding model (~80MB) downloads on first use. This is a one-time download. Subsequent runs use the cached model and are fast.

Memory not persisting between sessions

Check that ~/.shackleai/ directory exists and has write permissions. The server creates it automatically on first run.

Wrong project detected

Use --project-path to explicitly set the project, or call memory_init with the correct path.

Why ShackleAI?

Every AI coding tool today has amnesia. Close the session, context is gone. Switch tools, everything lost.

ShackleAI fixes this by providing a universal memory layer that works across every MCP-compatible AI tool:

  • Works with every AI tool — Claude Code, Cursor, Windsurf, VS Code Copilot, OpenAI Codex, Claude Desktop
  • Works with every LLM — Claude, GPT, Gemini, Llama, Mistral — any LLM behind any MCP client
  • Your memory is YOUR asset — switch tools anytime, your knowledge stays
  • No vendor lock-in — open source, local storage, standard protocol

Requirements

  • Node.js 20 or later
  • npm 7 or later (for npx setup — or install globally with any npm version)
  • Any MCP-compatible AI client

Contributing

Issues and PRs welcome at github.com/shackleai/memory-mcp.

License

MIT — free and open source forever.


The shackle that keeps your AI anchored. Built by ShackleAI.

