ai.smithery/neverinfamous-memory-journal-mcp

Platform & Services

by neverinfamous

An MCP server for developers that supports Git-based project management and keeps project and personal work journals.

README

Memory Journal MCP Server

<!-- mcp-name: io.github.neverinfamous/memory-journal-mcp -->

GitHub npm Docker Pulls License: MIT Status MCP Registry Security TypeScript Coverage Tests E2E Tests CI

🎯 AI Context + Project Intelligence: Bridge disconnected AI sessions with persistent project memory and automatic session handoff — with full GitHub workflow integration.

GitHub · Wiki · Changelog · Release Article


🎯 What This Does

What Sets Us Apart

65 MCP Tools · 17 Workflow Prompts · 38 Resources · 10 Tool Groups · Code Mode · GitHub Commander (Issue Triage, PR Review, Milestone Sprints, Security/Quality/Perf Audits) · GitHub Integration (Issues, PRs, Actions, Kanban, Milestones, Insights) · Team Collaboration (Shared DB, Vector Search, Cross-Project Insights)

| Feature | Description |
| --- | --- |
| Session Intelligence | Agents auto-query project history, create entries at checkpoints, and hand off context between sessions via /session-summary and team-session-summary |
| GitHub Integration | 16 tools for Issues, PRs, Actions, Kanban, Milestones (%), Copilot Reviews, and 14-day Insights |
| Dynamic Project Routing | Seamlessly switch contexts and access CI/Issue tracking across multiple repositories using a single server instance via PROJECT_REGISTRY |
| Knowledge Graphs | 8 relationship types linking specs → implementations → tests → PRs with Mermaid visualization |
| Hybrid Search | Reciprocal Rank Fusion combining FTS5 keywords, semantic vector similarity, auto-heuristics, and date-range filters |
| Code Mode | Execute multi-step operations in a secure sandbox, with up to 90% token savings via the mj.* API |
| Configurable Briefing | 12 env vars / CLI flags control memory://briefing content: entries, team, GitHub detail, skills awareness |
| Reports & Analytics | Standups, retrospectives, PR summaries, digests, period analyses, and milestone tracking |
| Team Collaboration | 22 tools with full parity: CRUD, vector search, relationship graphs, cross-project insights, author attribution |
| Data Interoperability | Bidirectional Markdown roundtripping, unified IO namespace, and schema-safe JSON exports with hard bounds-checked path traversal defenses |
| Backup & Restore | One-command backup/restore with automated scheduling, retention policies, and safety-net auto-backups |
| Security & Transport | OAuth 2.1 (RFC 9728/8414, JWT/JWKS, scopes), Streamable HTTP + SSE, rate limiting, CORS, SQL injection prevention, non-root Docker |
| Structured Error Handling | Every tool returns {success, error, code, category, suggestion, recoverable}; agents get classification, remediation hints, and recoverability signals |
| Agent Collaboration | IDE agents and Copilot share context; review findings become searchable knowledge; agents suggest reusable rules and skills (setup) |
| GitHub Commander | Skills for issue triage, PR reviews, sprint milestones, and security/quality/performance audits with journal trails (docs) |
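
The Reciprocal Rank Fusion behind Hybrid Search has a compact core. Here is a minimal sketch (illustrative only; the constant k = 60 and the function name are assumptions, not the server's actual implementation):

```javascript
// Reciprocal Rank Fusion: merge several ranked result lists into one.
// Each entry id scores sum(1 / (k + rank)) over every list it appears in,
// so items ranked well by multiple searchers rise to the top.
function reciprocalRankFusion(rankedLists, k = 60) {
  const scores = new Map();
  for (const list of rankedLists) {
    list.forEach((id, index) => {
      const rank = index + 1; // ranks are 1-based
      scores.set(id, (scores.get(id) ?? 0) + 1 / (k + rank));
    });
  }
  return [...scores.entries()]
    .sort((a, b) => b[1] - a[1]) // highest fused score first
    .map(([id]) => id);
}

// Fuse a keyword (FTS5) ranking with a semantic (vector) ranking:
const fused = reciprocalRankFusion([
  ["e42", "e7", "e13"], // keyword ranking
  ["e7", "e99", "e42"], // semantic ranking
]);
// "e7" wins: ranked #2 and #1, beating "e42" at #1 and #3
```

The appeal of RRF is that it needs only ranks, not comparable scores, so keyword and vector results can be fused without score normalization.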

🎯 Why Memory Journal?

When managing large projects with AI assistance, you face a critical challenge:

  • Thread Amnesia - Each new AI conversation starts from zero, unaware of previous work
  • Lost Context - Decisions, implementations, and learnings scattered across disconnected threads
  • Repeated Work - AI suggests solutions you've already tried or abandoned
  • Context Overload - Manually copying project history into every new conversation

Memory Journal solves this by acting as your project's long-term memory, bridging the gap between fragmented AI sessions.


Ask your agent naturally:

  • "Show me my recent journal entries"
  • "Create a backup of my journal"
  • "Check the server health status"
  • "Find entries related to performance"

See complete examples & prompts →


mermaid
flowchart TB
    subgraph Session["🤖 AI Session Start"]
        Briefing["📋 Read Briefing<br/>(memory://briefing)"]
    end

    subgraph Core["📝 Journal Operations"]
        Create["Create Entry"]
        Retrieve["Retrieve & Search"]
        Link["Link Entries"]
    end

    subgraph Search["🔍 Hybrid Search"]
        FTS["Keyword (FTS5)"]
        Semantic["Semantic (Vector)"]
        DateRange["Date Range"]
        RRF["Reciprocal Rank Fusion"]
    end

    subgraph GitHub["🐙 GitHub Integration"]
        Issues["Issues & Milestones"]
        PRs["Pull Requests"]
        Actions["GitHub Actions"]
        Kanban["Kanban Boards"]
        Insights["Repository Insights"]
    end

    subgraph Outputs["📊 Outputs"]
        Reports["Standups & Retrospectives"]
        Graphs["Knowledge Graphs"]
        Timeline["Project Timelines"]
    end

    Session --> Core
    Core --> Search
    Core <--> GitHub
    Search --> Outputs
    GitHub --> Outputs

Suggested Rule (add to AGENTS.md, GEMINI.md, etc.):

🛑 MANDATORY SESSION START ROUTINE

Execute BEFORE fulfilling any user request in a new session:

  1. TARGET: Infer repo_name from the active workspace context or user prompt. If the task is not associated with a specific project, fall back to the generic resource without a repo name (which defaults to the first registered workspace).
  2. FETCH: Use the MCP read_resource tool (Server: memory-journal-mcp) to read memory://briefing/{repo_name} (or memory://briefing if falling back).
    • RESTRICTION: Do NOT use execute_code for this step.
  3. RENDER TABLE: Extract userMessage and output it EXCLUSIVELY as a vertical Markdown Table (2 columns: Field and Value).
    • RESTRICTION: NO bulleted lists. NO truncation of arrays or lists.
    • REQUIRED MAP: Output all data comprehensively. Map these fields to the "Field" column:
      • Application / Project
      • Journal Entries
      • Team DB Entries
      • Latest Entry (Journal)
      • Latest Entry (Team)
      • GitHub (Include Repo, Branch, CI Status, Issues, PRs, Insights on separate lines in the Value column)
      • Milestone Progress
      • Template Resources (Output count only, not URLs)
      • Registered Workspaces (Output FULL list of project names)
      • Available Extensions (Rules, Skills, Workflows)

Tool Filtering

> [!IMPORTANT]
> All shortcuts and tool groups include Code Mode (mj_execute_code) by default for token-efficient operations. To exclude it, add -codemode to your filter: --tool-filter starter,-codemode

Control which tools are exposed via MEMORY_JOURNAL_MCP_TOOL_FILTER (or CLI: --tool-filter):

| Filter | Tools | Use Case |
| --- | --- | --- |
| full | 65 | All tools (default) |
| starter | ~11 | Core + search + codemode |
| essential | ~7 | Minimal footprint |
| readonly | ~15 | Disable all mutations |
| -github | 49 | Exclude a group |
| -github,-analytics | 47 | Exclude multiple groups |

Filter Syntax: shortcut or group or tool_name (whitelist mode) · -group (disable group) · -tool (disable tool) · +tool (re-enable after group disable)

Custom Selection: List individual tool names to create your own whitelist: --tool-filter "create_entry,search_entries,semantic_search"

Groups: core, search, analytics, relationships, io, admin, github, backup, team, codemode
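
The filter grammar above can be sketched as a tiny resolver (an illustration of the documented semantics, not the server's actual parser; the group contents and tool names below are placeholders):

```javascript
// Resolve a tool filter: bare names build a whitelist; -name disables a
// group or tool; +name re-enables a tool after a group disable.
function applyToolFilter(filter, allTools, groupMap) {
  const tokens = filter.split(",").map((t) => t.trim()).filter(Boolean);
  const expand = (name) => groupMap[name] ?? [name]; // group name or single tool
  const whitelist = tokens.filter((t) => !t.startsWith("-") && !t.startsWith("+"));
  // No whitelist tokens means "start from everything enabled".
  const enabled = new Set(whitelist.length ? whitelist.flatMap(expand) : allTools);
  for (const t of tokens) {
    if (t.startsWith("-")) expand(t.slice(1)).forEach((x) => enabled.delete(x));
    if (t.startsWith("+")) expand(t.slice(1)).forEach((x) => enabled.add(x));
  }
  return enabled;
}

// Example: disable the github group but keep PR reads (placeholder names):
const groupMap = { github: ["get_github_issues", "get_github_prs"] };
const enabled = applyToolFilter(
  "-github,+get_github_prs",
  ["create_entry", "get_github_issues", "get_github_prs"],
  groupMap
);
// enabled now holds create_entry and get_github_prs
```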

Complete tool filtering guide →


📋 Core Capabilities

🛠️ 65 MCP Tools (10 Groups)

| Group | Tools | Description |
| --- | --- | --- |
| codemode | 1 | Code Mode (sandboxed code execution) 🌟 Recommended |
| core | 6 | Entry CRUD, tags, test |
| search | 4 | Text search, date range, semantic, vector stats |
| analytics | 2 | Statistics, cross-project insights |
| relationships | 2 | Link entries, visualize graphs |
| io | 3 | JSON/Markdown export and file-level Markdown import/export interoperability |
| admin | 5 | Update, delete, rebuild/add to vector index, merge tags |
| github | 16 | Issues, PRs, context, Kanban, Milestones, Insights, issue lifecycle, Copilot Reviews |
| backup | 4 | Backup, list, restore, cleanup |
| team | 22 | CRUD, search, stats, relationships, IO (Markdown import/export), backup, vector search, cross-project insights (requires TEAM_DB_PATH) |

Complete tools reference →

🎯 17 Workflow Prompts

  • find-related - Discover connected entries via semantic similarity
  • prepare-standup - Daily standup summaries
  • prepare-retro - Sprint retrospectives
  • weekly-digest - Day-by-day weekly summaries
  • analyze-period - Deep period analysis with insights
  • goal-tracker - Milestone and achievement tracking
  • get-context-bundle - Project context with Git/GitHub/Kanban
  • get-recent-entries - Formatted recent entries
  • project-status-summary - GitHub Project status reports
  • pr-summary - Pull request journal activity summary
  • code-review-prep - Comprehensive PR review preparation
  • pr-retrospective - Completed PR analysis with learnings
  • actions-failure-digest - CI/CD failure analysis
  • project-milestone-tracker - Milestone progress tracking
  • confirm-briefing - Acknowledge session context to user
  • session-summary - Create a session summary entry with accomplishments, pending items, and next-session context
  • team-session-summary - Create a retrospective team session summary entry securely isolated to the team database

Complete prompts guide →

📡 38 Resources (25 Static + 13 Template)

Static Resources (appear in resource lists):

  • memory://briefing / memory://briefing/{repo} - Session initialization: compact context for AI agents (~300 tokens)
  • memory://instructions - Behavioral guidance: complete server instructions for AI agents
  • memory://recent - 10 most recent entries
  • memory://significant - Significant milestones and breakthroughs
  • memory://graph/recent - Live Mermaid diagram of recent relationships
  • memory://health - Server health & diagnostics
  • memory://graph/actions - CI/CD narrative graph
  • memory://actions/recent - Recent workflow runs
  • memory://tags - All tags with usage counts
  • memory://statistics - Journal statistics
  • memory://rules - User rules file content for agent awareness
  • memory://workflows - Available agent workflows summary
  • memory://skills - Agent skills index (names, paths, excerpts)
  • memory://github/status - GitHub repository status overview
  • memory://github/insights - Repository stars, forks, and 14-day traffic summary
  • memory://github/milestones - Open milestones with completion percentages
  • memory://team/recent - Recent team entries with author attribution
  • memory://team/statistics - Team entry counts, types, and author breakdown
  • memory://help - Tool group index with descriptions and tool counts
  • memory://help/gotchas - Field notes, edge cases, and critical usage patterns
  • memory://metrics/summary - Aggregate tool call metrics since server start (calls, errors, token estimates, duration) — HIGH priority
  • memory://metrics/tokens - Per-tool token usage breakdown sorted by output token cost — MEDIUM priority
  • memory://metrics/system - Process-level metrics: memory (MB), uptime (s), Node.js version, platform — MEDIUM priority
  • memory://metrics/users - Per-user call counts (populated when OAuth user identifiers are present) — LOW priority
  • memory://audit - Last 50 write/admin tool call entries from the JSONL audit log (requires AUDIT_LOG_PATH)

Template Resources (require parameters, fetch directly by URI):

  • memory://github/status/{repo} - Repository status targeted by repo
  • memory://github/insights/{repo} - Repository insights targeted by repo
  • memory://github/milestones/{repo} - Open milestones targeted by repo
  • memory://milestones/{repo}/{number} - Milestone detail targeted by repo
  • memory://projects/{number}/timeline - Project activity timeline
  • memory://issues/{issue_number}/entries - Entries linked to issue
  • memory://prs/{pr_number}/entries - Entries linked to PR
  • memory://prs/{pr_number}/timeline - Combined PR + journal timeline
  • memory://kanban/{project_number} - GitHub Project Kanban board
  • memory://kanban/{project_number}/diagram - Kanban Mermaid visualization
  • memory://milestones/{number} - Milestone detail with completion progress
  • memory://help/{group} - Per-group tool reference with parameters and annotations

Code Mode: Maximum Efficiency

Code Mode (mj_execute_code) dramatically reduces token usage (70–90%) and is included by default in all presets.

Code executes in a sandboxed VM context with multiple layers of security. All mj.* API calls execute against the journal within the sandbox, providing:

  • Static code validation — blocked patterns include require(), process, eval(), and filesystem access
  • Rate limiting — 60 executions per minute per client
  • Hard timeouts — configurable execution limit (default 30s)
  • Full API access — all 10 tool groups are available via mj.* (e.g., mj.core.createEntry(), mj.search.searchEntries(), mj.github.getGithubIssues(), mj.analytics.getStatistics())
  • Strict Readonly Contract — Calling any mutation method under --tool-filter readonly safely halts the sandbox to prevent execution, returning a structured { success: false, error: "..." } response to the agent instead of a raw MCP protocol exception.
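
An agent-side wrapper can act on that structured envelope directly. A hedged sketch (handleResult is a hypothetical helper, not part of the mj.* API):

```javascript
// Branch on the envelope's structured fields instead of parsing messages.
function handleResult(result) {
  if (result.success) return result;
  if (result.recoverable) {
    // Recoverable failures carry a remediation hint in `suggestion`.
    return { retry: true, hint: result.suggestion };
  }
  throw new Error(`[${result.code}/${result.category}] ${result.error}`);
}
```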

⚡ Code Mode Only (Maximum Token Savings)

Run with only Code Mode enabled — a single tool that provides access to all 65 tools' worth of capability through the mj.* API:

json
{
  "mcpServers": {
    "memory-journal-mcp": {
      "command": "memory-journal-mcp",
      "args": ["--tool-filter", "codemode"]
    }
  }
}

This exposes just mj_execute_code. The agent writes JavaScript against the typed mj.* SDK — composing operations across all 10 tool groups and returning exactly the data it needs — in one execution. This mirrors the Code Mode pattern pioneered by Cloudflare for their entire API: fixed token cost regardless of how many capabilities exist.
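
Over raw JSON-RPC, a Code Mode call might look like the following (a sketch: the "code" argument name and the script body are assumptions; mj.search.searchEntries is one of the API methods named above, and the tools/call shape is standard MCP):

```json
{
  "jsonrpc": "2.0",
  "id": 3,
  "method": "tools/call",
  "params": {
    "name": "mj_execute_code",
    "arguments": {
      "code": "const hits = await mj.search.searchEntries({ query: 'auth' }); return hits.slice(0, 3);"
    }
  }
}
```

The script composes search and slicing in one round trip and returns only the three entries the agent needs.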

Disabling Code Mode

If you prefer individual tool calls, exclude codemode:

json
{
  "args": ["--tool-filter", "starter,-codemode"]
}

🚀 Quick Start

Option 1: npm (Recommended)

bash
npm install -g memory-journal-mcp

Option 2: From Source

bash
git clone https://github.com/neverinfamous/memory-journal-mcp.git
cd memory-journal-mcp
npm install
npm run build

Add to MCP Config

Add this to your ~/.cursor/mcp.json, Claude Desktop config, or equivalent:

Basic Configuration

json
{
  "mcpServers": {
    "memory-journal-mcp": {
      "command": "memory-journal-mcp",
      "env": {
        "GITHUB_TOKEN": "ghp_your_token_here",
        "PROJECT_REGISTRY": "{\"my-repo\":{\"path\":\"/path/to/your/git/repo\",\"project_number\":1}}"
      }
    }
  }
}
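
Because PROJECT_REGISTRY holds JSON inside a JSON string value, the inner quotes must be escaped. One way to generate the value (the paths and project numbers are placeholders):

```javascript
// Serialize the registry object once, then paste the printed string as the
// PROJECT_REGISTRY environment value; your MCP config's JSON encoding
// supplies the backslash escapes when you paste it as a string value.
const registry = {
  "my-repo": { path: "/path/to/your/git/repo", project_number: 1 },
};
console.log(JSON.stringify(registry));
// {"my-repo":{"path":"/path/to/your/git/repo","project_number":1}}
```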

Advanced Configuration (Recommended)

This configuration showcases the full power of the server, including Multi-Project Routing, Team Collaboration, Copilot awareness, and Context Injections.

json
{
  "mcpServers": {
    "memory-journal-mcp": {
      "command": "memory-journal-mcp",
      "env": {
        "DB_PATH": "/path/to/your/memory_journal.db",
        "TEAM_DB_PATH": "/path/to/shared/team.db",
        "GITHUB_TOKEN": "ghp_your_token_here",
        "PROJECT_REGISTRY": "{\"my-repo\":{\"path\":\"/path/to/repo\",\"project_number\":1},\"other-repo\":{\"path\":\"/path/to/other\",\"project_number\":5}}",
        "AUTO_REBUILD_INDEX": "true",
        "MEMORY_JOURNAL_MCP_TOOL_FILTER": "codemode",
        "BRIEFING_ENTRY_COUNT": "3",
        "BRIEFING_INCLUDE_TEAM": "true",
        "BRIEFING_ISSUE_COUNT": "1",
        "BRIEFING_PR_COUNT": "1",
        "BRIEFING_PR_STATUS": "true",
        "BRIEFING_WORKFLOW_COUNT": "1",
        "BRIEFING_WORKFLOW_STATUS": "true",
        "BRIEFING_COPILOT_REVIEWS": "true",
        "RULES_FILE_PATH": "/path/to/your/RULES.md",
        "SKILLS_DIR_PATH": "/path/to/your/skills",
        "MEMORY_JOURNAL_WORKFLOW_SUMMARY": "/deploy: prod deployment | /audit: security scan"
      }
    }
  }
}

Variants (modify the config above):

| Variant | Change |
| --- | --- |
| Minimal (no GitHub) | Remove the env block entirely |
| npx (no install) | Replace "command" with "npx" and add "args": ["-y", "memory-journal-mcp"] |
| From source | Replace "command" with "node" and add "args": ["dist/cli.js"] |
| Code Mode only | Add "args": ["--tool-filter", "codemode"] (single tool, all capabilities) |
| Docker | Replace "command" with "docker" and use run -i --rm -v ./data:/app/data writenotenow/memory-journal-mcp:latest as args |
| Team collaboration | Add "TEAM_DB_PATH": "./team.db" to env |

Restart your MCP client and start journaling!

Option 3: HTTP/SSE Transport (Remote Access)

For remote access or web-based clients, run the server in HTTP mode:

bash
memory-journal-mcp --transport http --port 3000

To bind to all interfaces (required for containers):

bash
memory-journal-mcp --transport http --port 3000 --server-host 0.0.0.0

Endpoints:

| Endpoint | Description | Mode |
| --- | --- | --- |
| GET / | Server info and available endpoints | Both |
| POST /mcp | JSON-RPC requests (initialize, tools/call, etc.) | Both |
| GET /mcp | SSE stream for server-to-client notifications | Stateful |
| DELETE /mcp | Session termination | Stateful |
| GET /sse | Legacy SSE connection (MCP 2024-11-05) | Stateful |
| POST /messages | Legacy SSE message endpoint | Stateful |
| GET /health | Health check ({ status, timestamp }) | Both |
| GET /.well-known/oauth-protected-resource | RFC 9728 Protected Resource Metadata | Both |

Session Management: The server uses stateful sessions by default. Include the mcp-session-id header (returned from initialization) in subsequent requests.

  • OAuth 2.1 — RFC 9728/8414, JWT/JWKS, granular scopes (opt-in via --oauth-enabled)
  • 7 Security Headers — CSP, HSTS (opt-in), X-Frame-Options, and more
  • Rate Limiting — 100 req/min per IP · CORS — configurable multi-origin (exact-match) · 1MB body limit
  • Server Timeouts — Request (120s), keep-alive (65s), headers (66s) · 404 handler · Cross-protocol guard
  • Build Provenance · SBOM · Supply Chain Attestations · Non-root execution

Example with curl:

Initialize session (returns mcp-session-id header):

bash
curl -X POST http://localhost:3000/mcp \
  -H "Content-Type: application/json" \
  -H "Accept: application/json, text/event-stream" \
  -d '{"jsonrpc":"2.0","id":1,"method":"initialize","params":{"protocolVersion":"2025-03-26","capabilities":{},"clientInfo":{"name":"test","version":"1.0"}}}'

List tools (with session):

bash
curl -X POST http://localhost:3000/mcp \
  -H "Content-Type: application/json" \
  -H "Accept: application/json, text/event-stream" \
  -H "mcp-session-id: YOUR_SESSION_ID" \
  -d '{"jsonrpc":"2.0","id":2,"method":"tools/list","params":{}}'

Stateless Mode (Serverless)

For serverless deployments (Lambda, Workers, Vercel), use stateless mode:

bash
memory-journal-mcp --transport http --port 3000 --stateless

| Mode | Progress Notifications | Legacy SSE | Serverless |
| --- | --- | --- | --- |
| Stateful (default) | ✅ Yes | ✅ Yes | ⚠️ Complex |
| Stateless (--stateless) | ❌ No | ❌ No | ✅ Native |

Automated Scheduling (HTTP Only)

When running in HTTP/SSE mode, enable periodic maintenance jobs with CLI flags. These jobs run in-process on setInterval — no external cron needed.

Note: These flags are ignored for stdio transport because stdio sessions are short-lived (tied to your IDE session). For stdio, use OS-level scheduling (Task Scheduler, cron) or run the backup/cleanup tools manually.

bash
memory-journal-mcp --transport http --port 3000 \
  --backup-interval 60 --keep-backups 10 \
  --vacuum-interval 1440 \
  --rebuild-index-interval 720

| Flag | Default | Description |
| --- | --- | --- |
| --backup-interval <min> | 0 (off) | Create timestamped database backups and prune old ones automatically |
| --keep-backups <count> | 5 | Max backups retained during automated cleanup |
| --vacuum-interval <min> | 0 (off) | Run PRAGMA optimize and flush the database to disk |
| --rebuild-index-interval <min> | 0 (off) | Full vector index rebuild to maintain semantic search quality |

Each job is error-isolated — a failure in one job won't affect the others. Scheduler status (last run, result, next run) is visible via memory://health.
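
The error isolation can be pictured like this (an illustrative sketch, not the server's scheduler code):

```javascript
// Wrap each job so a throw is recorded, not propagated; with one timer per
// job, a failing backup tick can never cancel the vacuum or rebuild timers.
function isolated(name, fn, status) {
  return () => {
    try {
      fn();
      status[name] = { ok: true, lastRun: Date.now() };
    } catch (err) {
      status[name] = { ok: false, error: String(err), lastRun: Date.now() };
    }
  };
}

const status = {};
const backupTick = isolated("backup", () => { throw new Error("disk full"); }, status);
const vacuumTick = isolated("vacuum", () => { /* PRAGMA optimize... */ }, status);
// In the real server these run on setInterval; invoked directly here:
backupTick(); // failure recorded in status, not thrown
vacuumTick(); // unaffected by the backup failure
```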

GitHub Integration Configuration

The GitHub tools (get_github_issues, get_github_prs, etc.) auto-detect the repository from your git context when PROJECT_REGISTRY is configured or the MCP server is run inside a git repository.

| Environment Variable | Description |
| --- | --- |
| DB_PATH | Database file location (CLI: --db; default: ./memory_journal.db) |
| TEAM_DB_PATH | Team database file location (CLI: --team-db) |
| TEAM_AUTHOR | Override author name for team entries (default: git config user.name) |
| GITHUB_TOKEN | GitHub personal access token for API access |
| DEFAULT_PROJECT_NUMBER | Default GitHub Project number for auto-assignment when creating issues |
| PROJECT_REGISTRY | JSON map of repos to { path, project_number } for multi-project auto-detection and routing |
| AUTO_REBUILD_INDEX | Set to true to rebuild vector index on server startup |
| MCP_HOST | Server bind host (0.0.0.0 for containers, default: localhost) |
| MCP_AUTH_TOKEN | Bearer token for HTTP transport authentication (CLI: --auth-token) |
| MCP_CORS_ORIGIN | Allowed CORS origins for HTTP transport, comma-separated (default: *) |
| MCP_RATE_LIMIT_MAX | Max requests per minute per client IP, HTTP only (default: 100) |
| LOG_LEVEL | Log verbosity: error, warn, info, debug (default: info; CLI: --log-level) |
| MCP_ENABLE_HSTS | Enable HSTS security header on HTTP responses (CLI: --enable-hsts; default: false) |
| OAUTH_ENABLED | Set to true to enable OAuth 2.1 authentication (HTTP only) |
| OAUTH_ISSUER | OAuth issuer URL (e.g., https://auth.example.com/realms/mcp) |
| OAUTH_AUDIENCE | Expected JWT audience claim |
| OAUTH_JWKS_URI | JWKS endpoint for token signature verification |
| BRIEFING_ENTRY_COUNT | Journal entries in briefing (CLI: --briefing-entries; default: 3) |
| BRIEFING_INCLUDE_TEAM | Include team DB entries in briefing (true/false; default: false) |
| BRIEFING_ISSUE_COUNT | Issues to list in briefing; 0 = count only (default: 0) |
| BRIEFING_PR_COUNT | PRs to list in briefing; 0 = count only (default: 0) |
| BRIEFING_PR_STATUS | Show PR status breakdown (open/merged/closed; default: false) |
| BRIEFING_WORKFLOW_COUNT | Workflow runs to list in briefing; 0 = status only (default: 0) |
| BRIEFING_WORKFLOW_STATUS | Show workflow status breakdown in briefing (default: false) |
| BRIEFING_COPILOT_REVIEWS | Aggregate Copilot review state in briefing (default: false) |
| RULES_FILE_PATH | Path to user rules file for agent awareness (CLI: --rules-file) |
| SKILLS_DIR_PATH | Path to skills directory for agent awareness (CLI: --skills-dir) |
| MEMORY_JOURNAL_WORKFLOW_SUMMARY | Free-text workflow summary for memory://workflows (CLI: --workflow-summary) |
| INSTRUCTION_LEVEL | Briefing depth: essential, standard, full (CLI: --instruction-level; default: standard) |
| PROJECT_LINT_CMD | Project lint command for GitHub Commander validation gates (default: npm run lint) |
| PROJECT_TYPECHECK_CMD | Project typecheck command (default: npm run typecheck; empty = skip) |
| PROJECT_BUILD_CMD | Project build command (default: npm run build; empty = skip) |
| PROJECT_TEST_CMD | Project test command (default: npm run test) |
| PROJECT_E2E_CMD | Project E2E test command (default: empty = skip) |
| PROJECT_PACKAGE_MANAGER | Package manager override: npm, yarn, pnpm, bun (default: auto-detect from lockfile) |
| PROJECT_HAS_DOCKERFILE | Enable Docker audit steps (default: auto-detect) |
| COMMANDER_HITL_FILE_THRESHOLD | Human-in-the-loop checkpoint if changes touch > N files (default: 10) |
| COMMANDER_SECURITY_TOOLS | Override security tool auto-detection (comma-separated; default: auto-detect) |
| COMMANDER_BRANCH_PREFIX | Branch naming prefix for PRs (default: fix) |
| AUDIT_LOG_PATH | Path for the JSONL audit log of write/admin tool calls. Rotates at 10 MB (keeps 5 archives). Omit to disable audit logging. |
| AUDIT_REDACT | Set to true to omit tool arguments from audit log entries for privacy (default: false) |
| AUDIT_READS | Log read-scoped tool calls in addition to write/admin (CLI: --audit-reads; default: false) |
| AUDIT_LOG_MAX_SIZE | Maximum audit log file size in bytes before rotation (CLI: --audit-log-max-size; default: 10485760) |
| MCP_METRICS_ENABLED | Set to false to disable in-memory tool call metrics accumulation (default: true) |

Multi-Project Workflows: For agents to seamlessly support multiple projects, provide PROJECT_REGISTRY.

Dynamic Context Resolution & Auto-Detection

When executing GitHub tools (issues, PRs, context, etc.), the server resolves repository context in this order:

  1. Dynamic Project Routing: If the agent passes a repo string that matches a key in your PROJECT_REGISTRY, the server dynamically mounts the physical directory mapped to that project. It executes git commands locally and automatically infers the owner.
  2. Explicit Override: If the agent provides both owner and repo explicitly, those values override auto-detection for API calls.
  3. Missing Context: Without PROJECT_REGISTRY or explicit parameters, the server blocks execution and returns {requiresUserInput: true} to prompt the agent.
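
The precedence reads as a short decision chain. A minimal sketch, assuming the registry is a plain name-to-config map (function and field names are illustrative, not the server's code):

```javascript
// Resolve repository context in the documented order:
// registry match > explicit owner/repo > block and ask the user.
function resolveRepoContext(registry, { repo, owner } = {}) {
  // 1. Dynamic Project Routing: repo matches a PROJECT_REGISTRY key
  if (repo && registry[repo]) {
    return { source: "registry", path: registry[repo].path };
  }
  // 2. Explicit Override: agent supplied both owner and repo
  if (owner && repo) {
    return { source: "explicit", owner, repo };
  }
  // 3. Missing Context: block execution and prompt the agent
  return { requiresUserInput: true };
}
```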

Automatic Project Routing (Kanban / Issues)

When opening an issue or viewing/moving a Kanban card, the server needs a GitHub Project number. It determines this via:

  1. Honoring the raw project_number argument passed by the agent, if provided.
  2. Checking if the repo string precisely matches an entry in your PROJECT_REGISTRY, seamlessly mapping it to its pre-configured project_number.
  3. Falling back to the globally defined DEFAULT_PROJECT_NUMBER if set.
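
That fallback chain, sketched with illustrative names (not the server's code):

```javascript
// Resolve a GitHub Project number: explicit arg > registry match > default.
function resolveProjectNumber({ projectNumber, repo, registry = {}, defaultProjectNumber }) {
  if (projectNumber != null) return projectNumber;               // 1. raw argument
  const entry = repo ? registry[repo] : undefined;
  if (entry?.project_number != null) return entry.project_number; // 2. PROJECT_REGISTRY
  return defaultProjectNumber;                                    // 3. DEFAULT_PROJECT_NUMBER
}
```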

🔐 OAuth 2.1 Authentication

For production deployments, enable OAuth 2.1 authentication on the HTTP transport:

| Component | Description |
| --- | --- |
| Protected Resource Metadata | RFC 9728 /.well-known/oauth-protected-resource |
| Auth Server Discovery | RFC 8414 metadata discovery with caching |
| Token Validation | JWT validation with JWKS support |
| Scope Enforcement | Granular read, write, admin scopes |
| HTTP Transport | Streamable HTTP with OAuth middleware |

Supported Scopes:

| Scope | Tool Groups |
| --- | --- |
| read | core, search, analytics, relationships, io |
| write | github, team (+ all read groups) |
| admin | admin, backup, codemode (+ all write/read groups) |
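
The scope hierarchy above can be expressed as a small lookup (an illustrative sketch of the documented mapping; the function and constant names are assumptions):

```javascript
// Required scope per tool group, per the scope table.
const GROUP_SCOPES = {
  core: "read", search: "read", analytics: "read",
  relationships: "read", io: "read",
  github: "write", team: "write",
  admin: "admin", backup: "admin", codemode: "admin",
};
// Higher scopes imply lower ones: admin > write > read.
const IMPLIED = { admin: ["admin", "write", "read"], write: ["write", "read"], read: ["read"] };

// Does a token carrying `tokenScopes` satisfy the group's required scope?
function isAllowed(group, tokenScopes) {
  const required = GROUP_SCOPES[group];
  return tokenScopes.some((s) => (IMPLIED[s] ?? []).includes(required));
}
```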

Quick Start:

bash
memory-journal-mcp --transport http --port 3000 \
  --oauth-enabled \
  --oauth-issuer https://auth.example.com/realms/mcp \
  --oauth-audience memory-journal-mcp \
  --oauth-jwks-uri https://auth.example.com/realms/mcp/protocol/openid-connect/certs

Or via environment variables:

bash
export OAUTH_ENABLED=true
export OAUTH_ISSUER=https://auth.example.com/realms/mcp
export OAUTH_AUDIENCE=memory-journal-mcp
memory-journal-mcp --transport http --port 3000

Note: OAuth is opt-in. When not enabled, the server falls back to simple token authentication via MCP_AUTH_TOKEN environment variable, or runs without authentication.

🔄 Session Management

  1. Session start → agent reads memory://briefing (or memory://briefing/{repo}) and shows project context
  2. Session summary → use /session-summary to capture progress and next-session context
  3. Next session's briefing includes the previous summary — context flows seamlessly

🔧 Configuration

GitHub Integration (Optional)

bash
export GITHUB_TOKEN="your_token"              # For Projects/Issues/PRs

Scopes: repo, project, read:org (org-level project discovery only)

GitHub Management Capabilities

Memory Journal provides a hybrid approach to GitHub management:

| Capability Source | Purpose |
| --- | --- |
| MCP Server | Specialized features: Kanban visualization, Milestones, journal linking, project timelines |
| Agent (gh CLI) | Full GitHub mutations: create/close issues, create/merge PRs, manage releases |

MCP Server Tools (Read + Kanban + Milestones + Issue Lifecycle):

  • get_github_issues / get_github_issue - Query issues
  • get_github_prs / get_github_pr - Query pull requests
  • get_github_context - Full repository context
  • get_kanban_board / move_kanban_item - Kanban management
  • get_github_milestones / get_github_milestone - Milestone tracking with completion %
  • create_github_milestone / update_github_milestone / delete_github_milestone - Milestone CRUD
  • get_repo_insights - Repository traffic & analytics (stars, clones, views, referrers, popular paths)
  • create_github_issue_with_entry / close_github_issue_with_entry - Issue lifecycle with journal linking

Why this design? The MCP server focuses on value-added features that integrate journal entries with GitHub (Kanban views, Milestones, timeline resources, context linking). Standard GitHub mutations (create/close issues, merge PRs, manage releases) are handled directly by agents via gh CLI.

Complete GitHub integration guide →

🏗️ Architecture

Data Flow

mermaid
flowchart TB
    AI["🤖 AI Agent<br/>(Cursor, Windsurf, Claude)"]

    subgraph MCP["Memory Journal MCP Server"]
        Tools["🛠️ 65 Tools"]
        Resources["📡 38 Resources"]
        Prompts["💬 17 Prompts"]
    end

    subgraph Storage["Persistence Layer"]
        SQLite[("💾 SQLite<br/>Entries, Tags, Relationships")]
        Vector[("🔍 Vector Index<br/>Semantic Embeddings")]
        Backups["📦 Backups"]
    end

    subgraph External["External Integrations"]
        GitHub["🐙 GitHub API<br/>Issues, PRs, Actions"]
        Kanban["📋 Projects v2<br/>Kanban Boards"]
    end

    AI <-->|"MCP Protocol"| MCP
    Tools --> Storage
    Tools --> External
    Resources --> Storage
    Resources --> External

Stack

code
┌─────────────────────────────────────────────────────────────┐
│ MCP Server Layer (TypeScript)                               │
│  ┌─────────────────┐  ┌─────────────────┐  ┌─────────────┐  │
│  │ Tools (65)      │  │ Resources (38)  │  │ Prompts (17)│  │
│  │ with Annotations│  │ with Annotations│  │             │  │
│  └─────────────────┘  └─────────────────┘  └─────────────┘  │
├─────────────────────────────────────────────────────────────┤
│ Native SQLite Engine                                        │
│  ┌─────────────────┐  ┌─────────────────┐  ┌─────────────┐  │
│  │ better-sqlite3  │  │ sqlite-vec      │  │ transformers│  │
│  │ (High-Perf I/O) │  │ (Vector Index)  │  │ (Embeddings)│  │
│  └─────────────────┘  └─────────────────┘  └─────────────┘  │
├─────────────────────────────────────────────────────────────┤
│ SQLite Database with Hybrid Search                          │
│  ┌─────────────────────────────────────────────────────────┐│
│  │ entries + tags + relationships + embeddings + backups   ││
│  └─────────────────────────────────────────────────────────┘│
└─────────────────────────────────────────────────────────────┘

🔧 Technical Highlights

Performance & Portability

  • TypeScript + Native SQLite - High-performance better-sqlite3 with synchronous I/O
  • sqlite-vec - Vector similarity search via SQLite extension
  • @huggingface/transformers - ML embeddings in JavaScript
  • Lazy loading - ML models load on first use, not startup

Performance Benchmarks

Memory Journal is designed for extremely low overhead during AI task execution. We include a vitest bench suite to maintain these baseline guarantees:

  • Database Reads: Operations execute in fractions of a millisecond. calculateImportance is ~13-14x faster than retrieving 50 recent entries.
  • Vector Search Engine: Both search (~140-220 ops/sec) and indexing (~1600-1900+ ops/sec) are high-throughput via sqlite-vec with SQL-native KNN queries.
  • Core MCP Routines: getTools uses cached O(1) dispatch (~4800-7000x faster than get_recent_entries). create_entry and search_entries execute through the full MCP layer with sub-millisecond overhead.

To run the benchmarking suite locally:

bash
npm run bench

Testing

Extensively tested across two frameworks:

| Suite | Command | Covers |
| --- | --- | --- |
| Vitest (unit/integration) | npm test | Database, tools, resources, handlers, security, GitHub, vector search, codemode |
| Playwright (e2e) | npm run test:e2e | HTTP/SSE transport, auth, sessions, CORS, security headers, scheduler |

bash
npm test          # Unit + integration tests
npm run test:e2e  # End-to-end HTTP/SSE transport tests

Security

  • Deterministic error handling - Every tool returns structured {success, error, code, category, suggestion, recoverable} responses with actionable context — no raw exceptions, no silent failures, no misleading messages
  • Local-first - All data stored locally, no external API calls (except optional GitHub)
  • Input validation - Zod schemas, content size limits, SQL injection prevention
  • Path traversal protection - Backup filenames validated
  • MCP 2025-03-26 annotations - Behavioral hints (readOnlyHint, destructiveHint, etc.)
  • HTTP transport hardening - 7 security headers, configurable multi-origin CORS, 1MB body limit, built-in rate limiting (100 req/min), server timeouts, HSTS (opt-in), 30-min session timeout, 404 handler, cross-protocol guard
  • Token scrubbing - GitHub tokens and credentials automatically redacted from error logs
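
Token scrubbing can be as simple as a redaction filter over log messages. A minimal sketch, assuming the well-known GitHub token prefixes (the regex and function name are illustrative, not the server's implementation):

```javascript
// GitHub token prefixes: ghp_ (classic), gho_, ghu_, ghs_, ghr_, github_pat_.
// Replace anything matching those shapes before the message reaches a log.
const TOKEN_RE = /\b(?:gh[pousr]_[A-Za-z0-9]{10,}|github_pat_[A-Za-z0-9_]{10,})\b/g;

function scrubTokens(message) {
  return message.replace(TOKEN_RE, "[REDACTED]");
}
```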

Data & Privacy

  • Single SQLite file - You own your data
  • Portable - Move your .db file anywhere
  • Soft delete - Entries can be recovered
  • Auto-backup on restore - Never lose data accidentally

📚 Documentation & Resources


📄 License

MIT License - See LICENSE file for details.

🤝 Contributing

Built by developers, for developers. PRs welcome! See CONTRIBUTING.md for guidelines.


Migrating from v2.x? Your existing database is fully compatible. The TypeScript version uses the same schema and data format.
