io.github.varun29ankuS/shodh-memory
Coding & Debugging · by varun29ankuS
Persistent AI memory with semantic search: store, retrieve, and recall context across sessions. It frees AI assistants from "amnesia" and suits coding workflows that need long-term memory.
README
<p align="center"> <img src="https://raw.githubusercontent.com/varun29ankuS/shodh-memory/main/assets/Shodh_preview.gif" width="800" alt="Shodh-Memory Demo — Claude Code with persistent memory and TUI dashboard"> </p>
AI agents forget everything between sessions. Robots lose context between missions. They repeat mistakes, miss patterns, and treat every interaction like the first one.
Shodh-Memory fixes this. It's persistent memory that actually learns — memories you use often become easier to find, old irrelevant context fades automatically, and recalling one thing brings back related things. Works for chat agents (MCP/HTTP), robots (Zenoh/ROS2), and edge devices. No API keys. No cloud. No external databases. One binary.
Why Not Just Use mem0 / Cognee / Zep?
| | Shodh | mem0 | Cognee | Zep |
|---|---|---|---|---|
| LLM calls to store a memory | 0 | 2+ per add | 3+ per cognify | 2+ per episode |
| External services needed | None | OpenAI + vector DB | OpenAI + Neo4j + vector DB | OpenAI + Neo4j |
| Time to store a memory | 55ms | ~20 seconds | seconds | seconds |
| Learns from usage | Yes (Hebbian) | No | No | No |
| Forgets irrelevant data | Yes (decay) | No | No | Temporal only |
| Runs fully offline | Yes | No | No | No |
| Robotics / ROS2 native | Yes (Zenoh) | No | No | No |
| Binary size | ~17MB | pip install + API keys | pip install + API keys + Neo4j | Cloud only |
Every other memory system delegates intelligence to LLM API calls — that's why they're slow, expensive, and can't work offline. Shodh uses algorithmic intelligence: local embeddings, mathematical decay, learned associations. No LLM in the loop.
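As a toy illustration of that claim, semantic recall without an LLM reduces to nearest-neighbor search over embedding vectors. The three-dimensional vectors below are made up for the sketch; in Shodh the embeddings come from the bundled local model:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Toy "embeddings" of stored memories (invented for illustration)
memories = {
    "User prefers dark mode": [0.9, 0.1, 0.0],
    "Deploy runs on Fridays": [0.0, 0.2, 0.9],
}
query_vec = [0.8, 0.2, 0.1]  # pretend embedding of "user preferences"
best = max(memories, key=lambda m: cosine(memories[m], query_vec))
assert best == "User prefers dark mode"
```

Everything here is local arithmetic, which is why recall stays fast and offline.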
Get Started
Unified CLI
# Download from GitHub Releases (or brew tap varun29ankuS/shodh-memory && brew install shodh-memory)
shodh init # First-time setup — creates config, generates API key, downloads AI model
shodh server # Start the memory server on :3030
shodh tui # Launch the TUI dashboard
shodh status # Check server health
shodh doctor # Diagnose issues
One binary, all functionality. No Docker, no API keys, no external dependencies.
Claude Code (one command)
claude mcp add shodh-memory -- npx -y @shodh/memory-mcp
That's it. The MCP server auto-downloads the backend binary and starts it. No Docker, no API keys, no configuration. Claude now has persistent memory across sessions.
<details> <summary>Or with Docker (for production / shared servers)</summary>

# 1. Start the server
docker run -d -p 3030:3030 -v shodh-data:/data varunshodh/shodh-memory
# 2. Add to Claude Code
claude mcp add shodh-memory -- npx -y @shodh/memory-mcp
Or register the MCP server manually in your client config:

{
"mcpServers": {
"shodh-memory": {
"command": "npx",
"args": ["-y", "@shodh/memory-mcp"]
}
}
}
For local use, no API key is needed — one is generated automatically. For remote servers, add "env": { "SHODH_API_KEY": "your-key" }.
</details>
Python
pip install shodh-memory
from shodh_memory import Memory
memory = Memory(storage_path="./my_data")
memory.remember("User prefers dark mode", memory_type="Decision")
results = memory.recall("user preferences", limit=5)
Rust
[dependencies]
shodh-memory = "0.1"
use shodh_memory::{MemorySystem, MemoryConfig, MemoryType};
let memory = MemorySystem::new(MemoryConfig::default())?;
memory.remember("user-1", "User prefers dark mode", MemoryType::Decision, vec![])?;
let results = memory.recall("user-1", "user preferences", 5)?;
Docker
docker run -d -p 3030:3030 -v shodh-data:/data varunshodh/shodh-memory
What It Does
You use a memory often → it becomes easier to find (Hebbian learning)
You stop using a memory → it fades over time (activation decay)
You recall one memory → related memories surface too (spreading activation)
A connection is used → it becomes permanent (long-term potentiation)
Under the hood, memories flow through three tiers:
Working Memory ──overflow──▶ Session Memory ──importance──▶ Long-Term Memory
  (100 items)                   (100 MB)                        (RocksDB)
This is based on Cowan's working memory model and Wixted's memory decay research. The neuroscience isn't a gimmick — it's why the system gets better with use instead of just accumulating data.
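The learning rules above can be sketched numerically. Below is a minimal illustration of Hebbian strengthening plus exponential decay; the half-life and the log-shaped access boost are assumptions made for the sketch, not Shodh's actual constants:

```python
import math

def activation(base, last_access_age_s, accesses, half_life_s=7 * 24 * 3600):
    """Toy activation score: a Hebbian boost per access, exponential decay with age.

    half_life_s is an assumed parameter, not Shodh's real constant.
    """
    hebbian = math.log1p(accesses)                     # frequent use -> easier to find
    decay = 0.5 ** (last_access_age_s / half_life_s)   # disuse -> fades away
    return base * (1.0 + hebbian) * decay

fresh = activation(1.0, last_access_age_s=0, accesses=5)
stale = activation(1.0, last_access_age_s=30 * 24 * 3600, accesses=0)
assert fresh > stale  # a well-used memory outranks a month-old unused one
```

The point of the shape: ranking improves with use and degrades with neglect, without any LLM call.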
Performance
| Operation | Latency |
|---|---|
| Store memory (API response) | <200ms |
| Store memory (core) | 55-60ms |
| Semantic search | 34-58ms |
| Tag search | ~1ms |
| Entity lookup | 763ns |
| Graph traversal (3-hop) | 30µs |
Single binary. No GPU required. Content-hash dedup ensures identical memories are never stored twice.
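Content-hash dedup is simple to picture: key each memory by a hash of its content, so a second identical write is a no-op. A toy sketch follows; the hash choice and store shape are assumptions for illustration, not Shodh internals:

```python
import hashlib

store: dict[str, str] = {}

def remember_once(content: str) -> str:
    """Store content keyed by its SHA-256; identical content maps to one entry."""
    key = hashlib.sha256(content.encode("utf-8")).hexdigest()
    store.setdefault(key, content)  # a second write of identical content is a no-op
    return key

k1 = remember_once("User prefers dark mode")
k2 = remember_once("User prefers dark mode")
assert k1 == k2 and len(store) == 1
```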
TUI Dashboard
shodh tui
37 MCP Tools
Full list of tools available to Claude, Cursor, and other MCP clients:
<details> <summary>Memory</summary>remember · recall · proactive_context · context_summary · list_memories · read_memory · forget</details>
<details> <summary>Todos</summary>add_todo · list_todos · update_todo · complete_todo · delete_todo · reorder_todo · list_subtasks · add_todo_comment · list_todo_comments · update_todo_comment · delete_todo_comment · todo_stats</details>
<details> <summary>Projects</summary>add_project · list_projects · archive_project · delete_project</details>
<details> <summary>Reminders</summary>set_reminder · list_reminders · dismiss_reminder</details>
<details> <summary>System & Backup</summary>memory_stats · verify_index · repair_index · token_status · reset_token_session · consolidation_report · backup_create · backup_list · backup_verify · backup_restore · backup_purge</details>
REST API
160+ endpoints on http://localhost:3030. All /api/* endpoints require X-API-Key header.
# Store a memory
curl -X POST http://localhost:3030/api/remember \
-H "Content-Type: application/json" \
-H "X-API-Key: your-key" \
-d '{"user_id": "user-1", "content": "User prefers dark mode", "memory_type": "Decision"}'
# Search memories
curl -X POST http://localhost:3030/api/recall \
-H "Content-Type: application/json" \
-H "X-API-Key: your-key" \
-d '{"user_id": "user-1", "query": "user preferences", "limit": 5}'
Robotics & ROS2
Shodh-Memory isn't just for chat agents. It's persistent memory for robots — Spot, drones, humanoids, any system running ROS2 or Zenoh.
# Enable Zenoh transport (compile with --features zenoh)
SHODH_ZENOH_ENABLED=true SHODH_ZENOH_LISTEN=tcp/0.0.0.0:7447 shodh server
# ROS2 robots connect via zenoh-bridge-ros2dds or rmw_zenoh — zero code changes
ros2 run zenoh_bridge_ros2dds zenoh_bridge_ros2dds
What robots can do over Zenoh:
| Operation | Key Expression | Description |
|---|---|---|
| Remember | shodh/{user_id}/remember | Store with GPS, local position, heading, sensor data, mission context |
| Recall | shodh/{user_id}/recall | Spatial search (haversine), mission replay, action-outcome filtering |
| Stream | shodh/{user_id}/stream/sensor | Auto-remember high-frequency sensor data via extraction pipeline |
| Mission | shodh/{user_id}/mission/start | Track mission boundaries, searchable across missions |
| Fleet | shodh/fleet/** | Automatic peer discovery via Zenoh liveliness tokens |
Each robot uses its own user_id as the key segment (e.g., shodh/spot-1/remember). The robot_id is an optional payload field for fleet grouping.
Every Experience carries 26 robotics-specific fields: geo_location, local_position, heading, sensor_data, robot_id, mission_id, action_type, reward, terrain_type, nearby_agents, decision_context, action_params, outcome_type, confidence, failure/anomaly tracking, recovery actions, and prediction learning.
Example remember payload:

{
"user_id": "spot-1",
"content": "Detected crack in concrete at waypoint alpha",
"robot_id": "spot_v2",
"mission_id": "building_inspection_2026",
"geo_location": [37.7749, -122.4194, 10.0],
"local_position": [12.5, 3.2, 0.0],
"heading": 90.0,
"sensor_data": {"battery": 72.5, "temperature": 28.3},
"action_type": "inspect",
"reward": 0.9,
"terrain_type": "indoor",
"tags": ["crack", "concrete", "structural"]
}
Example spatial recall query:

{
"user_id": "spot-1",
"query": "structural damage near entrance",
"mode": "spatial",
"lat": 37.7749,
"lon": -122.4194,
"radius_meters": 50.0,
"mission_id": "building_inspection_2026"
}
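Spatial mode filters memories by great-circle distance from (lat, lon) within radius_meters. A minimal haversine sketch of that radius test (the distance math only, not Shodh's query engine):

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two (lat, lon) points."""
    r = 6_371_000.0  # mean Earth radius, meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# A memory ~30 m north of the query point passes a 50 m radius; ~500 m does not.
origin = (37.7749, -122.4194)
assert haversine_m(*origin, 37.77515, -122.4194) < 50.0
assert haversine_m(*origin, 37.7794, -122.4194) > 50.0
```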
SHODH_ZENOH_ENABLED=true # Enable Zenoh transport
SHODH_ZENOH_MODE=peer # peer | client | router
SHODH_ZENOH_LISTEN=tcp/0.0.0.0:7447 # Listen endpoints
SHODH_ZENOH_CONNECT=tcp/1.2.3.4:7447 # Connect endpoints
SHODH_ZENOH_PREFIX=shodh # Key expression prefix
# Auto-subscribe to ROS2 topics (via zenoh-bridge-ros2dds)
SHODH_ZENOH_AUTO_TOPICS='[
{"key_expr": "rt/spot1/status", "user_id": "spot-1", "mode": "sensor"},
{"key_expr": "rt/nav/events", "user_id": "spot-1", "mode": "event"}
]'
Works with ROS2 Kilted (rmw_zenoh), PX4 drones, Boston Dynamics Spot, humanoids — anything that speaks Zenoh or ROS2 DDS.
Platform Support
Linux x86_64 · Linux ARM64 · macOS Apple Silicon · macOS Intel · Windows x86_64
Production Deployment
<details> <summary>Environment variables</summary>

SHODH_ENV=production # Production mode
SHODH_API_KEYS=key1,key2,key3 # Comma-separated API keys
SHODH_HOST=127.0.0.1 # Bind address (default: localhost)
SHODH_PORT=3030 # Port (default: 3030)
SHODH_MEMORY_PATH=/var/lib/shodh # Data directory
SHODH_REQUEST_TIMEOUT=60 # Request timeout in seconds
SHODH_MAX_CONCURRENT=200 # Max concurrent requests
SHODH_CORS_ORIGINS=https://app.example.com
</details>
Example docker-compose.yml:

services:
shodh-memory:
image: varunshodh/shodh-memory:latest
environment:
- SHODH_ENV=production
- SHODH_HOST=0.0.0.0
- SHODH_API_KEYS=${SHODH_API_KEYS}
volumes:
- shodh-data:/data
networks:
- internal
caddy:
image: caddy:latest
ports:
- "443:443"
volumes:
- ./Caddyfile:/etc/caddy/Caddyfile
networks:
- internal
volumes:
shodh-data:
networks:
internal:
The server binds to 127.0.0.1 by default. For network deployments, place behind a reverse proxy:
memory.example.com {
reverse_proxy localhost:3030
}
Community
| Project | Description | Author |
|---|---|---|
| SHODH on Cloudflare | Edge-native implementation on Cloudflare Workers | @doobidoo |
References
[1] Cowan, N. (2010). The Magical Mystery Four. Current Directions in Psychological Science.
[2] Magee & Grienberger (2020). Synaptic Plasticity Forms and Functions. Annual Review of Neuroscience.
[3] Subramanya et al. (2019). DiskANN. NeurIPS 2019.
License
Apache 2.0
<p align="center"> <a href="https://registry.modelcontextprotocol.io/v0/servers?search=shodh">MCP Registry</a> · <a href="https://hub.docker.com/r/varunshodh/shodh-memory">Docker Hub</a> · <a href="https://pypi.org/project/shodh-memory/">PyPI</a> · <a href="https://www.npmjs.com/package/@shodh/memory-mcp">npm</a> · <a href="https://crates.io/crates/shodh-memory">crates.io</a> · <a href="https://www.shodh-memory.com">Docs</a> </p>