io.github.Lyellr88/marm-mcp-server
Platforms & Services · by lyellr88
A universal MCP server with advanced AI memory and semantic search for smarter context retrieval.
Gives AI applications long-term memory and semantic retrieval so context lookup becomes smarter and more accurate; as a universal MCP server, it also plugs into existing workflows with minimal friction.
What is io.github.Lyellr88/marm-mcp-server?
A universal MCP server with advanced AI memory and semantic search for smarter context retrieval.
README
Memory Accurate Response Mode v2.2.6 - the intelligent persistent memory system for AI agents (supports HTTP and STDIO). Stop fighting your memory and start controlling it. Experience long-term recall, session continuity, and reliable conversation history, so your LLMs never lose track of what matters.
Note: This is the official MARM repository. All official versions and releases are managed here.
Forks may experiment, but official updates will always come from this repo.
🚀 Project Update: Q2 2026
I am currently finalizing the launch of SysDX, a forensic-grade Windows hardware diagnostics toolkit designed to catch intermittent PCIe flapping and TDR crashes through 800+ automated Pester tests. This production-heavy development phase has served as the ultimate stress test for MARM’s core logic, and I am now porting those real-world stability and memory optimizations back into this repository.
🛠️ Q2 Roadmap
- Session Persistence Fix: Resolving the marm_log_session state logic to ensure active sessions persist correctly before log entries fire.
- Intelligent Rate Limiting: Reducing the 10-minute localhost lockout and implementing a Retry-After header for smoother MCP client handling.
- Directory-Based Memory: Transitioning to a per-project SQLite architecture to allow isolated memory pools for different coding environments.
- Context Bloat Reduction: Implementing lazy-loading for documentation to prevent unnecessary token consumption during session startup.
- Automation Checkpoints: Adding an opt-in "Auto-Log" feature that triggers semantic summaries every N messages.
- Core Cleanup: Removing duplicate root files to establish a single source of truth and prevent future installation drift.
Thank you for your patience and support.
MARM Demo Video: Docker Install + Seeing Persistent AI Memory in Action
https://github.com/user-attachments/assets/c7c6a162-5408-4eda-a461-610b7e713dfe
This demo walks through pulling MARM MCP with Docker, connecting it to Claude with the claude mcp add --transport command, and then shows multiple AI agents (Claude, Gemini, Qwen) instantly sharing logs and notebook entries via MARM's persistent, universal memory, demonstrating seamless cross-agent recall and "absolute truth" notebooks in action.
Why MARM MCP: The Problem & Solution
Your AI forgets everything. MARM MCP doesn't.
Modern LLMs lose context over time, repeat prior ideas, and drift off requirements. MARM MCP solves this with a unified, persistent, MCP‑native memory layer that sits beneath any AI client you use. It blends semantic search, structured session logs, reusable notebooks, and smart summaries so your agents can remember, reference, and build on prior work—consistently, across sessions, and across tools.
MCP in One Sentence: MARM MCP provides persistent memory and structured session context beneath any AI tool, so your agents learn, remember, and collaborate across all your workflows.
The Problem → The MARM Solution
- Problem: Conversations reset; decisions get lost; work scatters across multiple AI tools.
- Solution: A universal, persistent memory layer that captures and classifies the important bits (decisions, configs, code, rationale), then recalls them by meaning—not keywords.
Before vs After
- Without MARM: lost context, repeated suggestions, drifting scope, "start from scratch."
- With MARM: session memory, cross-session continuity, concrete recall of decisions, and faster, more accurate delivery.
What MARM MCP Delivers
| Memory | Multi-AI | Architecture |
|---|---|---|
| Semantic Search - Find by meaning using AI embeddings | Unified Memory Layer - Works with Claude, Qwen, Gemini, MCP clients | 18 Complete MCP Tools - Full Model Context Protocol coverage |
| Auto-Classification - Content categorized (code, project, book, general) | Cross-Platform Intelligence - Different AIs learn from shared knowledge | Database Optimization - SQLite with WAL mode and connection pooling |
| Persistent Cross-Session Memory - Memories survive across agent conversations | User-Controlled Memory - "Bring Your Own History," granular control | Rate Limiting - IP-based tiers for stability |
| Smart Recall - Vector similarity search with context-aware fallbacks | MCP Compliance - Response size management for predictable performance | Docker Ready - Containerized deployment with health/readiness checks |
Learn More
- Protocol walkthrough, commands, and reseeding patterns: MARM-HANDBOOK.md
- Join the community for updates and support: MARM Discord
What Users Are Saying
“MARM successfully handles our industrial automation workflows in production. We've validated session management, persistent logging, and smart recall across container restarts in our Windows 11 + Docker environment. The system reliably tracks complex technical decisions and maintains data integrity through deployment cycles.”
@Ophy21, GitHub user (Industrial Automation Engineer)
“MARM proved exceptionally valuable for DevOps and complex Docker projects. It maintained 100% memory accuracy, preserved context on 46 services and network configurations, and enabled standards-compliant Python/Terraform work. Semantic search and automated session logs made solving async and infrastructure issues far easier. Value Rating: 9.5/10 - indispensable for enterprise-grade memory, technical standards, and long-session code management.”
@joe_nyc, Discord user (DevOps/Infrastructure Engineer)
MARM MCP Server Guide
Now that you understand the ecosystem, here's how to set up and use the MCP server with your AI agents.
🚀 Quick Start for MCP (HTTP & Stdio)
<div align="center"><img src="https://raw.githubusercontent.com/Lyellr88/MARM-Systems/MARM-main/media/installation-flow.svg" width="850" height="500"></div>
Docker Install:
docker pull lyellr88/marm-mcp-server:latest
docker run -d --name marm-mcp-server -p 8001:8001 -v ~/.marm:/home/marm/.marm lyellr88/marm-mcp-server:latest
claude mcp add --transport http marm-memory http://localhost:8001/mcp
Local HTTP Install:
pip install marm-mcp-server==2.2.6
pip install -r marm-mcp-server/requirements.txt
python marm-mcp-server
claude mcp add --transport http marm-memory http://localhost:8001/mcp
Stdio Install:
pip install marm-mcp-server==2.2.6
pip install -r marm-mcp-server/requirements_stdio.txt
<platform> mcp add --transport stdio marm-memory-stdio python "your/file/path/to/marm-mcp-server/server_stdio.py"
python marm-mcp-server/server_stdio.py
HTTP Manual JSON Configuration:
{
"mcpServers": {
"marm-memory": {
"httpUrl": "http://localhost:8001/mcp",
"authentication": {
"type": "oauth",
"clientId": "local_client_b6f3a01e",
"clientSecret": "local_secret_ad6703cd2b4243ab",
"authorizationUrl": "http://localhost:8001/oauth/authorize",
"tokenUrl": "http://localhost:8001/oauth/token"
}
}
}
}
Local Development Authentication (Development Only)
MARM includes mock OAuth 2.0 credentials for local testing—not a production authentication system.
Why hardcoded credentials? When developing locally, you don't have external OAuth providers (GitHub, Google, etc.). MARM includes dev credentials so you can test the full MCP authentication flow without external dependencies.
For local development, use these credentials:
- Client ID: local_client_b6f3a01e
- Client Secret: local_secret_ad6703cd2b4243ab
The server validates against these hardcoded values only during development.
For production deployment: Replace this entire section with real OAuth 2.1 authentication. These hardcoded credentials are for development only and not suitable for production.
Roadmap: Multi-user OAuth authentication is planned for a future release to support team deployments and cloud environments.
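If you want to sanity-check the dev OAuth flow outside an MCP client, a minimal Python sketch like the one below can request a token from the local endpoints listed in the JSON configuration above. It assumes the dev token endpoint accepts a standard OAuth 2.0 client-credentials grant (the exact grant type isn't documented here) and uses the `requests` package.

```python
# Minimal dev-only sketch: exchange the local credentials for a token.
# Assumption: the dev token endpoint accepts a standard OAuth 2.0
# client-credentials grant. Do not use these credentials in production.
import requests

TOKEN_URL = "http://localhost:8001/oauth/token"

resp = requests.post(
    TOKEN_URL,
    data={
        "grant_type": "client_credentials",  # assumed grant type
        "client_id": "local_client_b6f3a01e",
        "client_secret": "local_secret_ad6703cd2b4243ab",
    },
    timeout=10,
)
resp.raise_for_status()
print("Dev access token:", resp.json().get("access_token"))
```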
STDIO Transport Support (NEW 12/07/2025)
The MARM MCP Server supports STDIO transport for MCP clients that require stdin/stdout communication (orchestration platforms, CLI tools, and integrated development environments).
Quick Guide Stdio Install
pip install marm-mcp-server==2.2.6
pip install -r marm-mcp-server/requirements_stdio.txt
<platform> mcp add --transport stdio marm-memory-stdio python "your/file/path/to/marm-mcp-server/server_stdio.py"
python marm-mcp-server/server_stdio.py
First Step:
pip install marm-mcp-server==2.2.6
Second Step: Install STDIO-specific dependencies:
pip install -r marm-mcp-server/requirements_stdio.txt
Third Step: Configuration
Choose one of the two setup methods below:
Option 1: CLI Configuration (Recommended)
Use your platform's MCP command to add MARM as a STDIO server:
<platform> mcp add --transport stdio marm-memory-stdio python "your/file/path/to/marm-mcp-server/server_stdio.py"
Replace <platform> with:
- qwen for Qwen CLI
- claude for Claude CLI
- gemini for Gemini CLI
Example:
claude mcp add --transport stdio marm-memory-stdio python "/home/user/marm-mcp-server/server_stdio.py"
Option 2: JSON Configuration
For IDEs and clients that require manual configuration, add this to your settings file:
macOS/Linux:
{
"mcpServers": {
"marm-memory": {
"command": "python",
"args": ["/path/to/marm-mcp-server/server_stdio.py"],
"cwd": "/path/to/marm-mcp-server"
}
}
}
Windows:
{
"mcpServers": {
"marm-memory": {
"command": "python",
"args": ["C:\\Users\\YourUsername\\path\\to\\marm-mcp-server\\server_stdio.py"],
"cwd": "C:\\Users\\YourUsername\\path\\to\\marm-mcp-server"
}
}
}
Step 4 (Optional): Running the Server Manually
To run the server locally:
python marm-mcp-server/server_stdio.py
The server will start and listen on stdin/stdout for JSON-RPC 2.0 messages from connected MCP clients.
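For a sense of what that looks like on the wire, here is an illustrative Python sketch that spawns the STDIO server and sends a single MCP initialize request by hand. It assumes newline-delimited JSON-RPC framing and reuses the placeholder path from the commands above; in normal use your MCP client performs this handshake for you.

```python
# Illustrative only: hand-rolling one JSON-RPC 2.0 exchange with the STDIO
# server. Your MCP client (claude/qwen/gemini CLI) normally does this for you.
# Assumes newline-delimited JSON-RPC framing.
import json
import subprocess

proc = subprocess.Popen(
    ["python", "your/file/path/to/marm-mcp-server/server_stdio.py"],
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    text=True,
)

initialize = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",  # standard MCP handshake method
    "params": {
        "protocolVersion": "2024-11-05",
        "capabilities": {},
        "clientInfo": {"name": "manual-test", "version": "0.1"},
    },
}
proc.stdin.write(json.dumps(initialize) + "\n")
proc.stdin.flush()

print(proc.stdout.readline())  # the server's initialize response
proc.terminate()
```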
Configuration Notes
- Use python (not python3) on Windows
- The cwd parameter is required; it allows the server to locate core modules
- Do NOT include run as an argument
- Replace /path/to/ with your actual installation path
Supported Platforms
Tested and working on:
- ✅ Qwen CLI (Windows, macOS, Linux)
- ✅ Claude CLI (Windows, macOS, Linux)
- ✅ Gemini CLI (Windows, macOS, Linux)
- ✅ Cursor (Windows, macOS, Linux) — use JSON configuration
For Other Platforms
If your platform isn't listed above:
- Try the JSON configuration — most MCP clients support the standard configuration format
- Use AI assistance — provide your platform name and MCP documentation to an AI assistant, which can help adapt the command pattern shown above
- Check platform documentation — refer to your MCP client's documentation for STDIO transport setup
WebSocket Transport Support (Beta - In Testing)
The MARM MCP Server includes experimental WebSocket support for real-time MCP communication. This transport has been implemented and tested internally but is not yet actively used in production workflows.
Quick Guide WebSocket Install
pip install marm-mcp-server==2.2.6
pip install -r marm-mcp-server/requirements.txt
python marm-mcp-server/server.py
Connect via WebSocket (Beta):
# Claude CLI
claude mcp add marm-memory ws://localhost:8001/mcp/ws
# Grok CLI
grok mcp add marm-memory --transport websocket --url "ws://localhost:8001/mcp/ws"
WebSocket Endpoint: ws://localhost:8001/mcp/ws
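For quick experimentation outside an MCP client, a sketch like the following (using the third-party `websockets` package, not part of MARM) can send a raw JSON-RPC request to the endpoint above. Depending on the server, an initialize handshake may be required before other methods; treat this purely as a beta-level probe.

```python
# Beta sketch: send one raw JSON-RPC request over the WebSocket transport.
# Requires `pip install websockets`; tools/list is a standard MCP method.
import asyncio
import json

import websockets

async def main():
    async with websockets.connect("ws://localhost:8001/mcp/ws") as ws:
        await ws.send(json.dumps({
            "jsonrpc": "2.0",
            "id": 1,
            "method": "tools/list",  # enumerate the MARM tools
            "params": {},
        }))
        print(await ws.recv())  # raw JSON-RPC response from the server

asyncio.run(main())
```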
WebSocket Features
- Real-time communication - Full-duplex WebSocket protocol support
- JSON-RPC 2.0 compliance - All 19 MCP methods supported
- Same tool coverage - Access all MARM memory and session tools
- Beta status - Tested but not actively used; feedback welcome
Supported Platforms
- ✅ Claude CLI (WebSocket transport)
- ✅ Grok CLI (WebSocket transport)
- ✅ Qwen CLI (with manual WebSocket configuration)
- ✅ Gemini CLI (with manual WebSocket configuration)
Transport Comparison
| Feature | HTTP | STDIO | WebSocket |
|---|---|---|---|
| Deployment | Requires HTTP server | Process-based | HTTP server |
| Resource Isolation | Shared server | Per-process | Shared server |
| Platform Support | Web-based clients | CLI/orchestration tools | CLI tools (Beta) |
| Setup Complexity | Medium | Low | Medium |
| Use Case | Web apps, remote access | Local tools, automation | Real-time apps (Beta) |
| Status | Stable | Stable | Beta |
Key Information:
- Server Endpoint: http://localhost:8001/mcp
- API Documentation: http://localhost:8001/docs
- Supported Clients: Claude Code, Qwen CLI, Gemini CLI, and any MCP-compatible LLM client or platform
All Installation Options:
- Docker (Fastest): One command, works everywhere
- Automated Setup: One command with dependency validation
- Manual Installation: Step-by-step with virtual environment
- Quick Test: Zero-configuration trial run
Choose your installation method:
| Installation Type | Guide | Best For |
|---|---|---|
| Docker | INSTALL-DOCKER.md | Cross-platform, production deployment |
| Windows | INSTALL-WINDOWS.md | Native Windows development |
| Linux | INSTALL-LINUX.md | Native Linux development |
| Platforms | INSTALL-PLATFORM.md | App & API integration |
Complete MCP Tool Suite (18 Tools)
💡 Pro Tip: You don't need to manually call these tools! Just tell your AI agent what you want in natural language:
- "Claude, log this session as 'Project Alpha' and add this conversation as 'database design discussion'"
- "Remember this code snippet in your notebook for later"
- "Search for what we discussed about authentication yesterday"
The AI agent will automatically use the appropriate tools. Manual tool access is available for power users who want direct control.
| Category | Tool | Description |
|---|---|---|
| Memory Intelligence | marm_smart_recall | AI-powered semantic similarity search across all memories. Supports global search with the search_all=True flag |
| | marm_contextual_log | Intelligent auto-classifying memory storage using vector embeddings |
| Session Management | marm_start | Activate MARM intelligent memory and response accuracy layers |
| | marm_refresh | Refresh AI agent session state and reaffirm protocol adherence |
| Logging System | marm_log_session | Create or switch to a named session container |
| | marm_log_entry | Add a structured log entry with auto-date formatting |
| | marm_log_show | Display all entries and sessions (filterable) |
| | marm_log_delete | Delete a specified session or individual entries |
| Reasoning & Workflow | marm_summary | Generate context-aware summaries with intelligent truncation for LLM conversations |
| | marm_context_bridge | Smart context bridging for seamless AI agent workflow transitions |
| Notebook Management | marm_notebook_add | Add a new notebook entry with semantic embeddings |
| | marm_notebook_use | Activate entries as instructions (comma-separated) |
| | marm_notebook_show | Display all saved keys and summaries |
| | marm_notebook_delete | Delete a specific notebook entry |
| | marm_notebook_clear | Clear the active instruction list |
| | marm_notebook_status | Show the current active instruction list |
| System Utilities | marm_current_context | Background tool that automatically provides the current date/time for log entries (AI agents use it automatically) |
| | marm_system_info | Comprehensive system information, health status, and loaded docs |
| | marm_reload_docs | Reload documentation into the memory system |
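For reference, this is roughly the shape of the JSON-RPC tools/call request an MCP client issues when your agent decides to use marm_smart_recall. The "query" argument name is an assumption for illustration; only the search_all flag is documented in the table above.

```python
# Shape of the request an MCP client sends when an agent invokes
# marm_smart_recall. The "query" argument name is assumed for illustration;
# search_all is the documented global-search flag.
import json

tools_call = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",  # standard MCP method for invoking a tool
    "params": {
        "name": "marm_smart_recall",
        "arguments": {
            "query": "what we decided about authentication",  # assumed arg name
            "search_all": True,  # search across all sessions
        },
    },
}
print(json.dumps(tools_call, indent=2))
```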
Architecture Overview
Core Technology Stack
FastAPI (0.115.4) + FastAPI-MCP (0.4.0)
├── SQLite with WAL Mode + Custom Connection Pooling
├── Sentence Transformers (all-MiniLM-L6-v2) + Semantic Search
├── Structured Logging (structlog) + Memory Monitoring (psutil)
├── IP-Based Rate Limiting + Usage Analytics
├── MCP Response Size Compliance (1MB limit)
├── Event-Driven Automation System
├── Docker Containerized Deployment + Health Monitoring
└── Advanced Memory Intelligence + Auto-Classification
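As a rough illustration of two techniques named above (not MARM's actual implementation), the sketch below enables SQLite WAL mode and keeps a small pool of reusable connections, using the default pool size of 5 quoted in the performance section.

```python
# Illustration only (not MARM's code): SQLite WAL mode plus a small
# thread-safe connection pool with the default size of 5.
import queue
import sqlite3

class SQLitePool:
    def __init__(self, path: str, size: int = 5):
        self._pool = queue.Queue(maxsize=size)
        for _ in range(size):
            conn = sqlite3.connect(path, check_same_thread=False)
            conn.execute("PRAGMA journal_mode=WAL")  # concurrent readers, single writer
            self._pool.put(conn)

    def acquire(self) -> sqlite3.Connection:
        return self._pool.get()  # blocks if all connections are in use

    def release(self, conn: sqlite3.Connection) -> None:
        self._pool.put(conn)

pool = SQLitePool("marm.db")
conn = pool.acquire()
try:
    conn.execute("SELECT 1")
finally:
    pool.release(conn)
```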
Database Schema (5 Tables)
memories - Core Memory Storage
CREATE TABLE memories (
id TEXT PRIMARY KEY,
session_name TEXT NOT NULL,
content TEXT NOT NULL,
embedding BLOB, -- AI vector embeddings for semantic search
timestamp TEXT NOT NULL,
context_type TEXT DEFAULT 'general', -- Auto-classified content type
metadata TEXT DEFAULT '{}',
created_at TEXT DEFAULT CURRENT_TIMESTAMP
);
sessions - Session Management
CREATE TABLE sessions (
session_name TEXT PRIMARY KEY,
marm_active BOOLEAN DEFAULT FALSE,
created_at TEXT DEFAULT CURRENT_TIMESTAMP,
last_accessed TEXT DEFAULT CURRENT_TIMESTAMP,
metadata TEXT DEFAULT '{}'
);
Plus: log_entries, notebook_entries, user_settings
<div align="center"><img src="https://raw.githubusercontent.com/Lyellr88/MARM-Systems/MARM-main/media/memory-intelligence.svg" width="900" height="625"></div>
📈 Performance & Scalability
Production Optimizations
- Custom SQLite Connection Pool: Thread-safe with configurable limits (default: 5)
- WAL Mode: Write-Ahead Logging for concurrent access performance
- Lazy Loading: Semantic models loaded only when needed (resource efficient)
- Intelligent Caching: Memory usage optimization with cleanup cycles
- Response Size Management: MCP 1MB compliance with smart truncation
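The lazy-loading point above, sketched here in illustrative form only (not the project's actual module): the sentence-transformers model named in the stack is not imported or loaded until the first call that needs an embedding.

```python
# Illustration only: defer loading the embedding model until first use so
# server startup stays light.
from functools import lru_cache

@lru_cache(maxsize=1)
def get_embedder():
    # Import inside the function so the heavy dependency loads lazily.
    from sentence_transformers import SentenceTransformer
    return SentenceTransformer("all-MiniLM-L6-v2")

def embed(text: str):
    return get_embedder().encode(text)  # first call loads the model, later calls reuse it
```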
Rate Limiting Tiers
- Default: 60 requests/minute, 5min cooldown
- Memory Heavy: 20 requests/minute, 10min cooldown (semantic search)
- Search Operations: 30 requests/minute, 5min cooldown
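As a rough, illustrative sketch of per-IP, per-tier limiting (not the server's actual limiter), the snippet below applies a fixed one-minute window using the request limits listed above; the cooldown/lockout behavior is omitted for brevity.

```python
# Illustration only: fixed-window, per-IP, per-tier limiting with the
# request limits listed above; cooldown/lockout handling omitted.
import time
from collections import defaultdict

TIERS = {
    "default": (60, 60),       # 60 requests per 60-second window
    "memory_heavy": (20, 60),  # semantic search
    "search": (30, 60),
}

_hits: dict = defaultdict(list)

def allow(ip: str, tier: str = "default") -> bool:
    limit, window = TIERS[tier]
    now = time.time()
    recent = [t for t in _hits[(ip, tier)] if now - t < window]
    _hits[(ip, tier)] = recent
    if len(recent) >= limit:
        return False  # over the tier limit for this window
    recent.append(now)
    return True
```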
Documentation for MCP
| Guide Type | Document | Description |
|---|---|---|
| Docker Setup | INSTALL-DOCKER.md | Cross-platform, production deployment |
| Windows Setup | INSTALL-WINDOWS.md | Native Windows development |
| Linux Setup | INSTALL-LINUX.md | Native Linux development |
| Platform Integration | INSTALL-PLATFORM.md | App & API integration |
| MCP Handbook | MCP-HANDBOOK.md | Complete usage guide with all 18 MCP tools, cross-app memory strategies, pro tips, and FAQ |
Competitive Advantage
vs. Basic MCP Implementations
| Feature | MARM v2.2.6 | Basic MCP Servers |
|---|---|---|
| Memory Intelligence | AI-powered semantic search with auto-classification | Basic key-value storage |
| Tool Coverage | 18 complete MCP protocol tools | 3-5 basic wrappers |
| Scalability | Database optimization + connection pooling | Single connection |
| MCP Compliance | 1MB response size management | No size controls |
| Deployment | Docker containerization + health monitoring | Local development only |
| Analytics | Usage tracking + business intelligence | No tracking |
Contributing
Aren't you sick of explaining every project you're working on to every LLM you work with?
MARM is building the solution to this. Support now to join a growing ecosystem - this is just Phase 1 of a 3-part roadmap and our next build will complement MARM like peanut butter and jelly.
Join the repo that's working to give YOU control over what is remembered and how it's remembered.
Why Contribute Now?
- Ground floor opportunity - Be part of the MCP memory revolution from the beginning
- Real impact - Your contributions directly solve problems you face daily with AI agents
- Growing ecosystem - Help build the infrastructure that will power tomorrow's AI workflows
- Phase 1 complete - Proven foundation ready for the next breakthrough features
Development Priorities
- Load Testing: Validate deployment performance under real AI workloads
- Documentation: Expand API documentation and LLM integration guides
- Performance: AI model caching and memory optimization
- Features: Additional MCP protocol tools and multi-tenant capabilities
Join the MARM Community
Help build the future of AI memory - no coding required!
Connect: MARM Discord | GitHub Discussions
Easy Ways to Get Involved
- Try the MCP server (or the coming-soon CLI) and share your experience
- Star the repo if MARM solves a problem for you
- Share on social - help others discover memory-enhanced AI
- Open issues with bugs, feature requests, or use cases
- Join discussions about AI reliability and memory
For Developers
- Build integrations - MCP tools, browser extensions, API wrappers
- Enhance the memory system - improve semantic search and storage
- Expand platform support - new deployment targets and integrations
- Submit Pull Requests - Every PR helps MARM grow. Big or small, I review each with respect and openness to see how it can improve the project
⭐ Star the Project
If MARM helps with your AI memory needs, please star the repository to support development!
License & Usage Notice
This project is licensed under the MIT License. Forks and derivative works are permitted.
However, use of the MARM name and version numbering is reserved for releases from the official MARM repository.
Derivatives should clearly indicate they are unofficial or experimental.
Project Documentation
Usage Guides
- MARM-HANDBOOK.md - Original MARM protocol handbook for chatbot usage
- MCP-HANDBOOK.md - Complete MCP server usage guide with commands, workflows, and examples
- PROTOCOL.md - Quick start commands and protocol reference
- FAQ.md - Answers to common questions about using MARM
MCP Server Installation
- INSTALL-DOCKER.md - Docker deployment (recommended)
- INSTALL-WINDOWS.md - Windows installation guide
- INSTALL-LINUX.md - Linux installation guide
- INSTALL-PLATFORMS.md - Platform installation guide
Chatbot Installation
- CHATBOT-SETUP.md - Web chatbot setup guide
Project Information
- README.md - This file - ecosystem overview and MCP server guide
- CONTRIBUTING.md - How to contribute to MARM
- DESCRIPTION.md - Protocol purpose and vision overview
- CHANGELOG.md - Version history and updates
- ROADMAP.md - Planned features and development roadmap
- LICENSE - MIT license terms
mcp-name: io.github.Lyellr88/marm-mcp-server
Built with ❤️ by MARM Systems - Universal MCP memory intelligence
FAQ
What is io.github.Lyellr88/marm-mcp-server?
A universal MCP server with advanced AI memory and semantic search for smarter context retrieval.
Related Skills
MCP Builder
by anthropics
Focused on high-quality MCP server development, covering protocol research, tool design, error handling, and transport selection; well suited to wrapping external APIs and service capabilities with FastMCP or the MCP SDK.
✎ If you want LLMs to call external APIs reliably, use MCP Builder: it offers mature guidance from Python to Node and helps you ship high-quality MCP servers faster.
Slack GIFs
by anthropics
A skill for creating animated GIFs for Slack, with built-in size, frame-rate, and color constraints for emoji and message GIFs plus validation and optimization workflows; good for quickly turning ideas or uploaded images into ready-to-send Slack animations.
✎ Helps you quickly produce Slack-ready GIFs, with built-in constraint rules and validation tools that avoid upload and playback pitfalls, whether you're making emoji reactions or demos.
API Design Review
by alirezarezvani
Reviews REST API designs against industry conventions, automatically checking naming, HTTP methods, status codes, and documentation coverage; it flags breaking changes and produces a design score, useful for vetting interface proposals before version iterations.
✎ When designing APIs and architecture, it helps you catch interface design issues early and align with best practices; the review perspective is systematic and makes team collaboration smoother.
Related MCP Servers
Slack Messaging
Editor's Pick · by Anthropic
Slack is an MCP server that lets AI assistants read and write your Slack channels and messages directly.
✎ This server addresses the pain point of AI needing real-time access to Slack information during team collaboration, and it is especially useful for development teams that want Claude to summarize channel discussions or send notifications. That said, it is currently only a reference implementation with limited documentation and is not recommended for direct production use; it is better suited to developers learning how MCP integrates third-party services.
by netdata
io.github.netdata/mcp-server is an MCP server that lets AI assistants monitor server metrics and logs in real time.
✎ This tool solves the pain point of operators manually checking system status, and it is best suited for DevOps teams that want Claude to analyze performance data automatically. However, it depends on an existing Netdata deployment; if you haven't used that monitoring platform, you'll need to spend time configuring it first.
by d4vinci
Scrapling MCP Server is an intelligent web-scraping tool built for modern websites, with support for bypassing anti-bot measures such as Cloudflare.
✎ This tool takes the headache out of scraping dynamic pages and anti-bot sites, and it is especially useful for developers who need to collect e-commerce prices or news data in bulk. However, it relies on an external browser engine, consumes significant resources, and is not suited to lightweight tasks.