io.github.MauriceIsrael/SmartMemory


by mauriceisrael

A neuro-symbolic memory prototype (proof of concept) for LLMs, designed to enhance long-term memory and reasoning.


README

SmartMemory

Give your LLM structured memory | Transform conversations into verified knowledge graphs

<p align="center"> <em>An MCP server that teaches AI assistants business rules through natural dialogue</em> </p>

[!CAUTION] Proof of Concept Only: This project is an experimental implementation of a Neuro-Symbolic architecture. It is designed to demonstrate how LLMs can interact with knowledge graphs for rule learning. It is NOT intended for production or professional use. Use it for research, experimentation, and learning purposes only.


🚀 Quick Start

New user? 5-Minute Quick Start Guide

Having issues? Troubleshooting Guide

Need to configure? Configuration Reference

Want to understand how it works? Neuro-Symbolic Architecture | Technical Architecture

Looking for specific docs? 📚 Documentation Index


🎯 What is SmartMemory?

SmartMemory enables your favorite LLM (Claude, Gemini, etc.) to remember facts, learn business rules, and deduce new information.

You can use it in two main ways:

1. 💬 Conversational Mode (The "Brain")

  • For: Individuals using LLM clients (Claude Desktop, etc.).
  • Goal: Have your assistant remember facts and learn logic naturally as you chat.
  • How: Configure it as an MCP server.
  • 👉 Go to Setup

2. 🏗️ Supervision Mode (The "Factory")

  • For: Teams, developers, or heavy users.
  • Goal: Extract thousands of rules from documents (PDFs) and visualize the knowledge graph.
  • How: Deploy the full Dashboard via Docker.
  • 👉 Go to Setup

💬 Mode 1: Conversational Setup (MCP)

This mode gives your LLM "long-term memory" and logical deduction capabilities.

Option A: Install via Docker (Recommended) 🐳

Best for: Everyone! No Python installation required.

The SmartMemory Docker image is available on GitHub Container Registry.

Simply add to your MCP client configuration:

For Claude Desktop, edit ~/Library/Application Support/Claude/claude_desktop_config.json:

json
{
  "mcpServers": {
    "smart-memory": {
      "command": "docker",
      "args": ["run", "--rm", "-i", "ghcr.io/mauriceisrael/smart-memory:latest"]
    }
  }
}

For Gemini (Cline), edit ~/.cline/mcp_settings.json:

json
{
  "mcpServers": {
    "smart-memory": {
      "command": "docker",
      "args": ["run", "--rm", "-i", "ghcr.io/mauriceisrael/smart-memory:latest"]
    }
  }
}

Restart your client and you're done! ✅


Option B: Local Server (Private) 🔒

Best for: Developers & Privacy-conscious users who want to run from source.

Installation Steps (Local)

  1. Clone & Install

    bash
    git clone https://github.com/MauriceIsrael/SmartMemory
    cd SmartMemory
    python3 -m venv venv
    source venv/bin/activate
    pip install -e .
    
  2. Connect to Claude Desktop Edit your configuration file (~/Library/Application Support/Claude/claude_desktop_config.json on macOS):

    json
    {
      "mcpServers": {
        "smartmemory": {
          "command": "/absolute/path/to/SmartMemory/venv/bin/python",
          "args": ["-m", "smart_memory.server"]
        }
      }
    }
    

    (Replace /absolute/path/... with your actual path)

  3. Chat! Restart Claude and try:

    "I know Bob. He goes to work by car. Can he vote?"

    See Interactive Demo below for what to expect.


🏗️ Mode 2: Supervision Setup (Docker)

This mode runs the Web Dashboard and API server. It is ideally suited for:

  • Visualizing the Knowledge Graph.
  • Extracting rules from documents (PDFs).
  • Hosting a shared memory server for a team.

Quick Start (Docker)

You don't need Python installed. Just Docker.

  1. Run the container

    For Dashboard mode (web interface) with Ollama (local):

    bash
    docker run -p 8080:8080 \
      -e LLM_PROVIDER=ollama \
      -e LLM_MODEL=llama3 \
      -e LLM_BASE_URL=http://172.17.0.1:11434 \
      -v $(pwd)/brain:/app/data \
      ghcr.io/mauriceisrael/smart-memory:latest dashboard
    

    For OpenAI:

    bash
    docker run -p 8080:8080 \
      -e LLM_PROVIDER=openai \
      -e LLM_MODEL=gpt-4 \
      -e LLM_API_KEY=your-api-key \
      -v $(pwd)/brain:/app/data \
      ghcr.io/mauriceisrael/smart-memory:latest dashboard
    

    (Note: append dashboard at the end of the command to start the web server; without it, the container starts in MCP mode.)

    (The -v volume persists your knowledge graph and rules)

  2. Open the Dashboard Go to http://localhost:8080

LLM Configuration

SmartMemory uses an LLM to extract business rules from documents. You can configure it in either of two ways:

Option 1: Via Dashboard (Local Development)

  1. Go to Admin page
  2. Select your provider (Ollama, OpenAI, Anthropic, Google)
  3. Enter your configuration (API key or Ollama URL)
  4. Test connection
  5. Save

Option 2: Via Environment Variables (Docker)

Already shown above! Pass -e LLM_PROVIDER=... when starting Docker.

Supported Providers:

  • Ollama (local, free): llama3, qwen2.5-coder, mistral
  • OpenAI: gpt-4, gpt-3.5-turbo
  • Anthropic: claude-3-5-sonnet
  • Google: gemini-1.5-pro
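If you prefer not to put API keys on the command line, the same variables can live in an env file and be passed with Docker's standard --env-file flag. A sketch for Anthropic (the values below are illustrative; substitute your own):

```
# smartmemory.env -- example values only
LLM_PROVIDER=anthropic
LLM_MODEL=claude-3-5-sonnet
LLM_API_KEY=your-api-key
```

Then start the dashboard with docker run --env-file smartmemory.env -p 8080:8080 -v $(pwd)/brain:/app/data ghcr.io/mauriceisrael/smart-memory:latest dashboard.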

Full Configuration Guide

Advanced Deployment

We support deploying to Google Cloud Run, GitHub, etc. 👉 Read the Deployment Guide (DEPLOY.md) for full instructions on hosting it online.

Extracting Rules from Documents

Once your dashboard is running (or via CLI):

  1. Upload a PDF (e.g., "Company_Policy.pdf").
  2. Configure LLM: The server needs an LLM API key (OpenAI, Anthropic, or local Ollama) to read the doc.
  3. Review: The system will propose rules. You can approve them in bulk in the dashboard.

💬 Interactive Demo: From Facts to Rules

Here is what happens when you use Conversational Mode:

code
> I know Bob
LLM: ✦ I've processed your statement "I know Bob".

> He goes to work by car
LLM: ✦ I've noted that Bob goes to work by car.

> Can Bob vote?
LLM: ✦ I can't answer yet. But since he drives, he is likely an adult.
     May I add a rule: "Drivers are adults"?

> yes
LLM: ✨ Rule 'drivers_are_adults' added.
     Now, may I add a rule: "Adults can vote"?

> yes
LLM: ✨ Rule 'adults_can_vote' added.
     ✦ Therefore, yes, Bob can vote.
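The deduction above is classic forward chaining: apply rules to known facts until nothing new can be derived. The toy Python sketch below illustrates the idea with the two rules from the demo; it is a simplification for illustration only, not SmartMemory's actual RDFLib/SPARQL/OWL engine:

```python
# Toy forward chaining over (subject, predicate) facts.
# Illustrative only -- SmartMemory's real reasoner uses RDF and SPARQL/OWL.

def deduce(facts, rules):
    """Repeatedly apply rules until no new facts can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            for subject, predicate in list(facts):
                if predicate == premise and (subject, conclusion) not in facts:
                    facts.add((subject, conclusion))
                    changed = True
    return facts

facts = {("Bob", "drives_to_work")}
rules = [
    ("drives_to_work", "is_adult"),  # "Drivers are adults"
    ("is_adult", "can_vote"),        # "Adults can vote"
]

print(("Bob", "can_vote") in deduce(facts, rules))  # prints True
```

Each approved rule in the demo corresponds to one (premise, conclusion) pair here; the chained result ("Bob can vote") falls out of the fixpoint loop rather than being stored directly.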

🛠️ Technical Stack

  • Backend: Python 3.11+, RDFLib, FastAPI
  • Frontend: SvelteKit, TypeScript, TailwindCSS
  • Reasoning: Neuro-Symbolic (LLM + SPARQL/OWL)
  • Protocol: Model Context Protocol (MCP)

📜 License

MIT License - see LICENSE
