Kindex


by jmcentire

Persistent knowledge graph for AI workflows with context tiers and .kin inheritance.


Python 3.10+ MIT License v0.16.1 PyPI Tests Claude Code Plugin

The memory layer Claude Code doesn't have.

Kindex does one thing. It knows what you know.

It's a persistent knowledge graph for AI-assisted workflows. It indexes your conversations, projects, and intellectual work so that Claude Code never starts a session blind. Available as a free Claude Code plugin (MCP server) or standalone CLI.

Memory plugins capture what happened. Kindex captures what it means and how it connects. Most memory tools are session archives with search. Kindex is a weighted knowledge graph that grows intelligence over time — understanding relationships, surfacing constraints, and managing exactly how much context to inject based on your available token budget.

Install as Claude Code Plugin

Three commands. Zero configuration.

```bash
pip install kindex[mcp]
claude mcp add --scope user --transport stdio kindex -- kin-mcp
kin init
```

Claude Code now has 31 native tools: search, add, context, show, ask, learn, link, list_nodes, status, suggest, graph_stats, graph_merge, dream, changelog, ingest, tag_start, tag_update, tag_resume, remind_create, remind_list, remind_snooze, remind_done, remind_check, remind_exec, mode_activate, mode_list, mode_show, mode_create, mode_export, mode_import, mode_seed.

Or add .mcp.json to any repo for project-scope access:

```json
{ "mcpServers": { "kindex": { "command": "kin-mcp" } } }
```

Install as CLI

```bash
pip install kindex
kin init
```

With LLM-powered extraction:

```bash
pip install kindex[llm]
```

With reminders (natural language time parsing):

```bash
pip install kindex[reminders]
```

With everything (LLM + vectors + MCP + reminders):

```bash
pip install kindex[all]
```

Why Kindex

Context-aware by design

Five context tiers auto-select based on available tokens. When other plugins dump everything into context, Kindex gives you 200 tokens of executive summary or 4000 tokens of deep context — whatever fits. Your plugin doesn't eat the context window.

| Tier | Budget | Use Case |
| --- | --- | --- |
| `full` | ~4000 tokens | Session start, deep work |
| `abridged` | ~1500 tokens | Mid-session reference |
| `summarized` | ~750 tokens | Quick orientation |
| `executive` | ~200 tokens | Post-compaction re-injection |
| `index` | ~100 tokens | Existence check only |
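The selection rule can be pictured as "richest tier that fits". A minimal sketch, assuming the budgets from the table above; the function name and fallback behavior are illustrative, not Kindex's actual API:

```python
# Hypothetical tier auto-selection by available token budget.
# Tier names and budgets come from the table above; the selection
# rule shown here is an illustration, not Kindex's real code.
TIERS = [            # (name, approximate token budget), richest first
    ("full", 4000),
    ("abridged", 1500),
    ("summarized", 750),
    ("executive", 200),
    ("index", 100),
]

def select_tier(available_tokens: int) -> str:
    """Pick the richest tier that fits the remaining context budget."""
    for name, budget in TIERS:
        if available_tokens >= budget:
            return name
    return "index"  # always fall back to the existence check

print(select_tier(5000))  # full
print(select_tier(900))   # summarized
```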

Knowledge graph, not log file

Nodes have types, weights, domains, and audiences. Edges carry provenance and decay over time. The graph understands what matters — not just what was said.
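Decay over time can be sketched as exponential down-weighting by age. The half-life below is an illustrative assumption; `kin decay`'s actual schedule may differ:

```python
import math  # not strictly needed here, but typical for decay variants

def decayed_weight(weight: float, age_days: float, half_life_days: float = 90.0) -> float:
    """Illustrative exponential decay: the weight halves every
    `half_life_days`. The real decay parameters may differ."""
    return weight * 0.5 ** (age_days / half_life_days)

print(round(decayed_weight(0.8, 90), 3))   # one half-life: 0.4
print(round(decayed_weight(0.8, 180), 3))  # two half-lives: 0.2
```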

Operational guardrails

Constraints block deploys. Directives encode preferences. Watches flag attention items. Checkpoints run pre-flight. No other memory plugin has this.

Cache-optimized LLM retrieval

Three-tier prompt architecture with Anthropic prompt caching. Stable knowledge (codebook) is cached at 10% cost. Query-relevant context is predicted via graph expansion and cached per-topic. Only the question pays full price. Transparent — kin ask just works better and cheaper.
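The request shape behind this can be sketched with Anthropic's documented `cache_control` markers. This builds the payload only (no API call); the variable names and prompt contents are illustrative, not Kindex's internals:

```python
# Sketch of the three-tier request using Anthropic prompt caching.
# Stable tiers are marked as cacheable prefixes; only the question
# pays full input price. Contents here are placeholders.
codebook = "NODE INDEX:\n- stigmergy (concept, w=0.9)"   # Tier 1: stable
topic_context = "Nodes relevant to 'weight decay': ..."  # Tier 2: per-topic
question = "How does weight decay work?"                 # Tier 3: tiny

payload = {
    "model": "claude-haiku-4-5-20251001",
    "max_tokens": 1024,
    "system": [
        # Cached prefixes: re-read at a fraction of input cost on later calls.
        {"type": "text", "text": codebook, "cache_control": {"type": "ephemeral"}},
        {"type": "text", "text": topic_context, "cache_control": {"type": "ephemeral"}},
    ],
    "messages": [{"role": "user", "content": question}],
}
```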

Team and org ready

.kin inheritance chains let a service repo inherit from a platform context, which inherits from an org voice. Private/team/org/public scoping with PII stripping on export. Enterprise-ready from day one.

In Practice

A 162-file fantasy novel vault — characters, locations, magic systems, plot outlines — ingested in one pass. Cross-referenced by content mentions. Searched in milliseconds.

```text
$ kin status
Nodes:     192
Edges:     11,802
Orphans:   3

$ time kin search "the Baker"
# Kindex: 10 results for "the Baker"

## [document] The Baker - Hessa's Profile and Message Broker System (w=0.70)
  → Thieves Guild, Five Marks, Thieves Guild Operations

## [person] Mia and The Baker (Hessa) -- Relationship (w=0.70)
  → Sebastian and Mia, Mia -- Motivations and Goals

0.142 total

$ kin graph stats
Nodes:      192
Edges:      11,802
Density:    0.3218
Components: 5
Avg degree: 122.94
```

192 nodes. 11,802 edges. 5 context tiers. Hybrid FTS5 + graph traversal in 142ms.
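The "hybrid" merge of the two ranked lists (FTS5 BM25 hits and graph-BFS hits) is Reciprocal Rank Fusion. A minimal sketch, using the conventional RRF constant k=60; Kindex's actual merge parameters may differ, and the node IDs are made up:

```python
# Minimal Reciprocal Rank Fusion: each list contributes 1/(k + rank)
# per item, so nodes ranked highly in several lists float to the top.
def rrf_merge(*rankings: list[str], k: int = 60) -> list[str]:
    scores: dict[str, float] = {}
    for ranking in rankings:
        for rank, node_id in enumerate(ranking, start=1):
            scores[node_id] = scores.get(node_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

fts_hits = ["baker-profile", "mia-baker", "thieves-guild"]
bfs_hits = ["mia-baker", "five-marks", "baker-profile"]
merged = rrf_merge(fts_hits, bfs_hits)
print(merged)  # nodes present in both lists rank first
```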

Getting Claude to Actually Use It

Installing the MCP plugin gives Claude the tools. But Claude won't use them proactively unless you tell it to. Kindex ships with a recommended CLAUDE.md block that turns passive tools into active habits:

```bash
# Print the recommended directives
kin setup-claude-md

# Or auto-append to your global CLAUDE.md
kin setup-claude-md --install
```

This adds session lifecycle rules (start/during/segment/end), explicit capture triggers (discoveries, decisions, key files, notable outputs), and search-before-add discipline. The difference between "Claude has a knowledge graph" and "Claude actively maintains a knowledge graph" is this block.

The SessionStart hook (kin setup-hooks) reinforces these directives at the start of every session with a "Session directives" block that reminds Claude to use kindex MCP tools throughout the session.

What gets captured

With the directives active, Claude will:

  • Search the graph before starting work and before adding nodes
  • Add discoveries, decisions, key files, notable outputs, and new terms as they emerge
  • Link related concepts when connections are found
  • Learn from long files and outputs via bulk extraction
  • Tag sessions to track work context across conversations
  • Remind with actions for deferred tasks (shell commands or headless Claude invocations)

Actionable Reminders

Reminders can carry shell commands and/or natural-language instructions. When due, the daemon executes them automatically — simple commands run directly, complex tasks launch headless claude -p. A Stop hook guard blocks Claude from exiting when actionable reminders are pending.

```bash
# Kill a cloud instance in 1 hour (but download results first)
kin remind create "Kill vast.ai instance" --at "in 1 hour" \
  --action "vastai destroy instance 12345" \
  --instructions "Download results from /workspace/ before killing"

# Manual trigger
kin remind exec --reminder-id <id>
```

Dream — Knowledge Consolidation

Kindex dreams. After each Claude Code session, a detached background process runs fuzzy deduplication, auto-applies pending suggestions, and strengthens edges between nodes that share domains. Like memory consolidation during sleep — replay, strengthen important paths, prune noise.

```bash
# See what would happen (no changes)
kin dream --dry-run

# Run full consolidation
kin dream

# Fast path: dedup + suggestions only
kin dream --lightweight

# Include LLM-powered cluster summarisation
kin dream --deep

# Fork and return immediately (used by Stop hook)
kin dream --detach --lightweight
```

Three triggers: manual CLI, periodic cron (step 11 of kin cron), and automatic detached subprocess on Claude Code session exit. File locking prevents concurrent cycles. The detached process uses start_new_session=True to survive Claude Code's exit.
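The fuzzy-dedup decision rule can be sketched with `difflib.SequenceMatcher` and the thresholds listed under "Dream" in the architecture section (0.95 merge, 0.85 suggest). The bucketing and merge mechanics are omitted, and the function name is illustrative:

```python
from difflib import SequenceMatcher

def dedup_verdict(title_a: str, title_b: str) -> str:
    """Illustrative dedup rule: near-identical titles merge automatically,
    close titles become review suggestions, the rest are left alone."""
    ratio = SequenceMatcher(None, title_a.lower(), title_b.lower()).ratio()
    if ratio >= 0.95:
        return "merge"    # near-identical: merge automatically
    if ratio >= 0.85:
        return "suggest"  # close: queue a suggestion for review
    return "keep"

print(dedup_verdict("Weight decay in edges", "Weight decay in edges."))
print(dedup_verdict("Weight decay", "Token budgets"))
```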

Conversation Modes

Modes are reusable conversation-priming artifacts that induce a processing mode in an AI session. Based on research showing that induced understanding outperforms direct instruction by 5.4x, and that 15 tokens of mode-setting capture 98.8% of achievable priming benefit.

Five built-in modes: collaborate, code, create, research, chat. Create custom modes from any session and export them for team sharing (PII-free).

```bash
# Seed default modes
kin mode seed

# Activate a mode — outputs the priming artifact
kin mode activate collaborate

# Create a custom mode
kin mode create debug-session \
  --primer "We're hunting a bug. Precision over speed..." \
  --boundary "Show your reasoning chain. Name assumptions." \
  --permissions "Speculate about root causes freely."

# Export for team sharing (PII-stripped)
kin mode export collaborate > collaborate.json

# Import a teammate's mode
kin mode import their-mode.json
```

Modes are not instructions — they're state inductions. A primer establishes how to think, a boundary defines what quality means, and permissions state what's allowed. The AI shifts processing mode rather than following a checklist.
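The three parts map directly onto the `kin mode create` flags above. A hypothetical sketch of assembling the priming artifact; the output layout is an illustration, not Kindex's actual format:

```python
# Hypothetical assembly of a mode's priming artifact from its three parts.
# Field names mirror the CLI flags above; the layout is illustrative.
def render_mode(name: str, primer: str, boundary: str, permissions: str) -> str:
    return "\n".join([
        f"[mode: {name}]",
        primer,       # how to think
        boundary,     # what quality means
        permissions,  # what is allowed
    ])

artifact = render_mode(
    "debug-session",
    "We're hunting a bug. Precision over speed.",
    "Show your reasoning chain. Name assumptions.",
    "Speculate about root causes freely.",
)
print(artifact)
```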

Quick Start

```bash
# Add knowledge (with optional tags)
kin add "Stigmergy is coordination through environmental traces" --tags biology,coordination

# Search with hybrid FTS5 + graph traversal
kin search stigmergy
kin search coordination --tags biology   # filter results by tag

# Ask questions (with automatic classification)
kin ask "How does weight decay work?"

# Get context for AI injection
kin context --topic stigmergy --level full

# List and filter by tags
kin list --tags python,ml              # nodes tagged with both
kin list --type concept --tags ai      # combine type and tag filters

# Track operational rules
kin add "Never break the API contract" --type constraint --trigger pre-deploy --action block

# Check status before deploy
kin status --trigger pre-deploy

# Ingest from all sources
kin ingest all

# Session tags — named work context handles
kin tag start auth-refactor --focus "OAuth2 flow" --remaining "tokens,tests"
kin tag segment --focus "Token storage" --summary "Flow design done"
kin tag resume auth-refactor   # context block for new session
kin tag end --summary "All done"

# Reminders — never forget, never nag
kin remind create "standup" --at "every weekday at 9am" --priority high
kin remind create "reply to Kevin" --at "in 30 minutes" --priority urgent
kin remind list
kin remind snooze --reminder-id <id> --duration 1h
kin remind done --reminder-id <id>
```

.kin/ Directory & Inheritance

Projects use .kin/ directories that encode their communication style, engineering standards, and values. Teams inherit from orgs. Repos inherit from teams. The knowledge graph carries the voice forward.

```text
~/.kindex/voices/acme.kin             # Org voice (downloadable, public)
    ^
    |  inherits
~/Code/platform/.kin/config           # Platform team context
    ^
    |  inherits
~/Code/payments-service/.kin/config   # Service-specific context
```

```yaml
# payments-service/.kin/config
name: payments-service
audience: team
domains: [payments, python]
inherits:
  - ../platform/.kin/config
```

The .kin/ directory is the standard location for all kindex project artifacts:

  • .kin/config — project metadata (voice, domains, audience, inheritance)
  • .kin/index.json — graph snapshot for git tracking

The payments service gets Acme's voice principles, the platform's engineering standards, AND its own domain context. Local values override ancestors. Lists merge with dedup. Parent directories auto-walk when no explicit inherits is set.
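The merge rules above (local scalars override, lists concatenate with dedup, nested maps deep-merge) can be sketched as follows. The function and key names are illustrative, not Kindex's resolver:

```python
# Sketch of .kin inheritance merging: the child wins on scalars,
# lists merge with duplicates removed, nested dicts merge recursively.
def merge_configs(ancestor: dict, child: dict) -> dict:
    merged = dict(ancestor)
    for key, value in child.items():
        if isinstance(value, list) and isinstance(merged.get(key), list):
            merged[key] = merged[key] + [v for v in value if v not in merged[key]]
        elif isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = merge_configs(merged[key], value)
        else:
            merged[key] = value  # local value overrides the ancestor
    return merged

platform = {"audience": "org", "domains": ["python", "platform"]}
service = {"audience": "team", "domains": ["payments", "python"]}
resolved = merge_configs(platform, service)
print(resolved)
# {'audience': 'team', 'domains': ['python', 'platform', 'payments']}
```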

Old-style .kin files (plain YAML) are auto-upgraded to .kin/config on first access.

See examples/kin-voices/ for ready-to-use voice templates.

Architecture

```text
SQLite + FTS5          <- primary store and full-text search
  nodes: id, title, content, type, weight, audience, domains, extra
  edges: from_id, to_id, type, weight, provenance
  fts5:  content synced via triggers

Retrieval pipeline:
  FTS5 BM25 --+
  Graph BFS --+-- RRF merge -- tier formatter -- context block
  (vectors) --+                   |
      |                   full | abridged | summarized | executive | index
      |
  Embedding providers (configurable):
      local (sentence-transformers) | openai | gemini

LLM cache tiers (kin ask):
  Tier 1: codebook (stable node index)     <- cached @ 10% cost
  Tier 2: query-relevant context           <- cached per-topic @ 10% cost
  Tier 3: user question                    <- full price, tiny

Reminders:
  reminders table (SQLite)    <- separate from knowledge graph
  Time parsing:  dateparser (NL) + dateutil.rrule (recurrence) + cronsim (cron)
  Channels:      system (macOS) | slack | email | claude (hook) | terminal
  Daemon:        launchd/cron adaptive interval -> check due -> notify -> auto-snooze
  Scheduling:    adaptive tiers (>7d=daily, >1d=hourly, >1h=10min, <1h=5min, none=disabled)
  Actions:       shell commands run directly | complex tasks launch claude -p
  Stop guard:    blocks session exit when actionable reminders pending

Dream (kin dream):
  Modes:         lightweight (<5s) | full (non-LLM) | deep (claude -p clusters)
  Triggers:      CLI | cron step 11 | Stop hook (detached, start_new_session=True)
  Dedup:         difflib.SequenceMatcher, 4-char title bucketing, 0.95 merge / 0.85 suggest
  Consolidation: suggestion auto-apply, domain edge strengthening, cluster summarisation
  Safety:        fcntl.flock exclusion, protected types skip, provenance tracking

Four integration paths:
  MCP plugin --> Claude calls tools natively (search, add, learn, remind, ...)
  CLI hooks  --> SessionStart / PreCompact / Stop lifecycle events
  Adapters   --> Entry-point discovery for custom ingestion sources
  Code       --> ctags + cscope + tree-sitter structural analysis
```

Node Types

Knowledge: concept, document, session, person, project, decision, question, artifact, skill

Operational: constraint (invariants), directive (soft rules), checkpoint (pre-flight), watch (attention flags)

Code Intelligence

Ingest repository structure with kin ingest code --directory .:

  • Module nodes (artifact) — one per source file with structural summary: classes, public functions, signatures, imports
  • Symbol nodes (concept) — one per class/interface/type with method signatures
  • Edges — imports (depends_on), inheritance (implements), containment (context_of), call graph (relates_to)
  • Three extraction tiers — ctags (100+ languages), cscope (C/C++ cross-refs), tree-sitter (AST call graphs)
  • Incremental — file hashing skips unchanged files on re-ingest

Code structure lives in the same graph as your decisions, watches, and constraints. Search finds both what calls a function and what broke last time someone changed it.

CLI Reference (49 commands)

Core

| Command | Description |
| --- | --- |
| `kin search <query>` | Hybrid FTS5 + graph search with RRF merging (`--tags`, `--mine`) |
| `kin context` | Formatted context block for AI injection (`--level`, `--tokens`) |
| `kin add <text>` | Quick capture with auto-extraction and linking (`--tags`, `--type`) |
| `kin show <id>` | Full node details with edges, provenance, and state |
| `kin list` | List nodes (`--type`, `--status`, `--tags`, `--audience`, `--mine`, `--limit`) |
| `kin ask <question>` | Question classification + LLM or context answer |

Knowledge Management

| Command | Description |
| --- | --- |
| `kin learn` | Extract knowledge from sessions and inbox |
| `kin link <a> <b>` | Create weighted edge between nodes |
| `kin alias <id> [add\|remove\|list]` | Manage AKA/synonyms for a node |
| `kin register <id> <path>` | Associate a file path with a node |
| `kin orphans` | Nodes with no connections |
| `kin trail <id>` | Temporal history and provenance chain |
| `kin decay` | Apply weight decay to stale nodes/edges |
| `kin recent` | Recently active nodes |
| `kin tag [action]` | Session tags: start, update, segment, pause, end, resume, list, show |
| `kin remind [action]` | Reminders: create, list, show, snooze, done, cancel, check, exec |
| `kin mode [action]` | Conversation modes: activate, list, show, create, export, import, seed |

Graph Analytics

| Command | Description |
| --- | --- |
| `kin graph [mode]` | Dashboard: stats, centrality, communities, bridges, trailheads |
| `kin suggest` | Bridge opportunity suggestions (`--accept`, `--reject`) |
| `kin skills [person]` | Skill profile and expertise for a person |
| `kin embed` | Index all nodes for vector similarity search |

Operational

| Command | Description |
| --- | --- |
| `kin status` | Graph health + operational summary (`--trigger`, `--owner`, `--mine`) |
| `kin set-audience <id> <scope>` | Set privacy scope (private/team/org/public) |
| `kin set-state <id> <key> <value>` | Set mutable state on directives/watches |
| `kin export` | Audience-aware graph export with PII stripping |
| `kin import <file>` | Import nodes/edges from JSON/JSONL (`--mode merge/replace`) |
| `kin sync-links` | Update node content with connection references |

Ingestion & External Sources

| Command | Description |
| --- | --- |
| `kin ingest <source>` | Ingest from: projects, sessions, files, commits, github, linear, code, all |
| `kin cron` | One-shot maintenance cycle (for crontab/launchd) |
| `kin dream` | Knowledge consolidation: dedup, suggestions, edge strengthening (`--deep`, `--detach`) |
| `kin watch` | Watch for new sessions and ingest them (`--interval`) |
| `kin analytics` | Archive session analytics and activity heatmap |
| `kin index` | Write `.kin/index.json` for git tracking |

Infrastructure

| Command | Description |
| --- | --- |
| `kin init` | Initialize data directory |
| `kin config [show\|get\|set]` | View or edit configuration |
| `kin setup-hooks` | Install lifecycle hooks into Claude Code |
| `kin setup-cron` | Install periodic maintenance (launchd/crontab) |
| `kin setup-claude-md` | Output/install recommended CLAUDE.md kindex directives |
| `kin stop-guard` | Stop hook guard for actionable reminders |
| `kin doctor` | Health check with graph enforcement (`--fix`) |
| `kin migrate` | Import markdown topics into SQLite |
| `kin budget` | LLM spend tracking |
| `kin whoami` | Show current user identity |
| `kin changelog` | What changed (`--since`, `--days`, `--actor`) |
| `kin log` | Recent activity log |
| `kin git-hook [install\|uninstall]` | Manage git hooks in a repository |
| `kin prime` | Generate context for SessionStart hook (`--codebook`) |
| `kin compact-hook` | Pre-compact knowledge capture |

Configuration

Config is layered like git — global defaults, then global config, then local config. Each layer deep-merges over the previous, so you only set what you want to override.

| Layer | Path | Purpose |
| --- | --- | --- |
| Global | `~/.config/kindex/kin.yaml` | User-wide defaults |
| Local | `.kin/config` or `kin.yaml` in cwd | Project-specific overrides |

Use kin config set --global llm.enabled true for global settings, or kin config set llm.model claude-sonnet-4-6 for project-local.

```yaml
data_dir: ~/.kindex

llm:
  enabled: false
  model: claude-haiku-4-5-20251001
  api_key_env: ANTHROPIC_API_KEY
  cache_control: true              # Prompt caching (90% savings on repeated prefixes)
  codebook_min_weight: 0.5         # Min node weight for codebook inclusion
  tier2_max_tokens: 4000           # Token budget for query-relevant context

embedding:
  provider: local                  # local, openai, or gemini
  # model: ""                      # empty = provider default
  # api_key_env: ""                # empty = provider default (OPENAI_API_KEY / GEMINI_API_KEY)
  # dimensions: 0                  # 0 = provider default (384 / 1536 / 3072)

budget:
  daily: 0.50
  weekly: 2.00
  monthly: 5.00

project_dirs:
  - ~/Code
  - ~/Personal

defaults:
  hops: 2
  min_weight: 0.1
  mode: bfs

reminders:
  enabled: true
  check_interval: 300            # 5 min base interval
  adaptive_scheduling: true      # adjust interval based on nearest reminder
  min_interval: 300              # floor for adaptive scheduling
  default_channels: [system]     # system, slack, email, claude, terminal
  snooze_duration: 900           # 15 min default snooze
  auto_snooze_timeout: 300       # auto-snooze after 5 min inaction
  idle_suppress_after: 600       # suppress if idle > 10 min
  channels:
    slack:
      enabled: false
      webhook_url: ""
    email:
      enabled: false
      smtp_host: ""
      to_addr: ""
```

Development

```bash
make dev          # install with dev + LLM dependencies
make test         # run 980 tests
make check        # lint + test combined
make clean        # remove build artifacts
```

License

MIT

<!-- mcp-name: io.github.jmcentire/kindex -->

