memory-distiller

by danxbuidl

Distill repeated user preferences, successful patterns, and durable working rules into reusable memory notes or prompt-ready context blocks. Use when a user wants to capture habits, preserve preferences, summarize lessons from prior work, or convert raw conversation/task outcomes into structured memory.

3.7k · Productivity & Workflow · Not scanned · March 23, 2026

Install

claude skill add --url github.com/openclaw/skills/tree/main/skills/danxbuidl/danxbuidl-memory-distiller

Documentation

Memory Distiller

Overview

Use this skill when the user wants to turn raw interaction history into stable, reusable memory. The goal is not to summarize everything. The goal is to keep only the parts that are durable enough to improve future work.

Read references/output-format.md when the user wants a structured output template, a prompt-ready context block, or a reusable memory profile format.

Read references/example-prompts.md when the user needs prompt examples, variation ideas, or help choosing the right invocation pattern.

Quick Start

If the user does not specify a format, default to this flow:

  1. extract candidate memories from the source material
  2. keep only durable and evidence-backed items
  3. rewrite them as future-facing rules
  4. return:
    • stable preferences
    • working rules
    • anti-patterns
    • one short reusable context block

If the user already has a memory document, switch into review mode instead of rebuilding everything from scratch.
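The default flow above can be sketched as a small data model. This is an illustrative sketch, not part of the skill's specification: the `MemoryCandidate` class, its field names, and the `distill` helper are all hypothetical names chosen for the example.

```python
from dataclasses import dataclass

@dataclass
class MemoryCandidate:
    text: str      # future-facing rule, e.g. "Prefer concise responses by default."
    category: str  # "preference" | "rule" | "anti-pattern"
    tag: str       # "confirmed" | "tentative" | "reject"

def distill(candidates: list[MemoryCandidate]) -> dict[str, list[str]]:
    """Keep only durable, evidence-backed items, grouped by category."""
    kept = [c for c in candidates if c.tag == "confirmed"]
    return {
        cat: [c.text for c in kept if c.category == cat]
        for cat in ("preference", "rule", "anti-pattern")
    }

candidates = [
    MemoryCandidate("Prefer concise responses by default.", "preference", "confirmed"),
    MemoryCandidate("Wanted a long poetic answer yesterday.", "preference", "reject"),
]
print(distill(candidates)["preference"])
# ['Prefer concise responses by default.']
```

The tentative tier is deliberately excluded here; a real implementation might carry tentative items forward in a separate, clearly labeled section.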

When To Use

Use this skill when the user asks to:

  • capture recurring preferences or habits
  • preserve successful working patterns
  • record constraints, defaults, or anti-patterns
  • turn task outcomes into future-facing rules
  • clean up or refine an existing memory/profile document
  • produce a compact context block for reuse in future prompts

Do not use this skill for:

  • one-off conversational summaries
  • temporary task state that will expire quickly
  • guesses about user preferences that are not supported by evidence
  • hidden or background memory injection into runtime code paths

Output Selection

Choose the narrowest output that matches the user's goal:

  • memory profile
    • use when the user wants a compact long-term preference document
  • cleaned memory list
    • use when the user already has notes and wants to remove weak items
  • prompt-ready context block
    • use when the user wants a short block to reuse in future prompts
  • review and rewrite report
    • use when the user wants to know what should be kept, rewritten, or removed

Read references/output-format.md before producing any structured output.

Core Rule

Only preserve information that looks durable.

Good candidates:

  • stable preferences
  • repeated defaults
  • persistent constraints
  • explicit dislikes
  • reusable procedures
  • recurring failure-avoidance rules

Weak candidates:

  • one-off requests
  • temporary deadlines
  • transient debugging state
  • personal guesses not explicitly supported by the source material

When a memory candidate is uncertain, mark it as tentative or exclude it.

Evidence Threshold

Prefer memories that are supported by one of these:

  • an explicit user statement
  • a repeated pattern across multiple examples
  • a successful workflow that clearly generalizes
  • a durable constraint that is unlikely to change soon

Prefer to exclude items that are supported only by:

  • one weak hint
  • a single accidental success
  • a temporary environment detail
  • a guess about personality or intent

Workflow

1. Gather source material

Start from the material the user provides or points to:

  • conversation excerpts
  • task outcomes
  • prior memory notes
  • preference documents
  • review summaries

If the source material is large, first compress it into candidate signals rather than copying everything forward.

2. Extract candidate memories

Look for statements that imply stable behavior, such as:

  • "always"
  • "prefer"
  • "do not"
  • "default to"
  • "use X when Y"
  • repeated successful patterns across multiple examples

Group candidates into a small set of categories:

  • preferences
  • defaults
  • constraints
  • anti-patterns
  • reusable procedures

When possible, tag each candidate mentally as one of:

  • confirmed
  • tentative
  • reject
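The stability cues listed above can be turned into a simple surface-level heuristic. This is a minimal sketch under the assumption that candidates arrive as plain transcript lines; the cue list and the `extract_candidates` name are illustrative, and a real pass would still need human or model judgment on each hit.

```python
import re

# Stability cues drawn from the list above; extend as needed.
STABILITY_CUES = re.compile(
    r"\b(always|prefer|do not|don't|default to|never)\b", re.IGNORECASE
)

def extract_candidates(transcript: str) -> list[str]:
    """Return lines that look like stable-behavior statements."""
    return [
        line.strip()
        for line in transcript.splitlines()
        if STABILITY_CUES.search(line)
    ]

sample = """Please keep answers concise.
I prefer JSON when I ask for structured output.
Can you fix this one bug today?"""
print(extract_candidates(sample))
# ['I prefer JSON when I ask for structured output.']
```

Note that keyword matching only surfaces candidates; categorizing them and tagging confirmed/tentative/reject remains a judgment step.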

3. Remove weak or noisy items

Drop any item that is:

  • purely situational
  • contradicted by newer evidence
  • too vague to be useful
  • likely to cause bad prompt injection if reused blindly

Prefer precision over recall. A small memory set with strong signal is better than a large noisy list.
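One way to operationalize the precision-over-recall filter is to reject items phrased around transient context. The weakness markers below are assumptions chosen for illustration; any real list would be tuned to the source material.

```python
# Phrases that suggest a situational, one-off, or guessed item.
WEAK_MARKERS = ("yesterday", "this time", "for now", "maybe", "just once")

def is_durable(item: str) -> bool:
    """Reject items whose wording signals transient or speculative context."""
    lowered = item.lower()
    return not any(marker in lowered for marker in WEAK_MARKERS)

items = [
    "Prefer checklist-style outputs for execution-heavy tasks.",
    "Yesterday the user wanted a long poetic answer.",
]
print([i for i in items if is_durable(i)])
# ['Prefer checklist-style outputs for execution-heavy tasks.']
```

This catches only wording-level weakness; contradiction by newer evidence and vagueness still require a comparison against the rest of the candidate set.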

4. Rewrite into future-facing rules

Rewrite valid items as clear, reusable guidance.

Prefer forms like:

  • "Prefer concise technical explanations."
  • "Use JSON output when the user asks for machine-readable results."
  • "Avoid storing one-off operational incidents as durable preferences."

Avoid forms like:

  • "The user once asked..."
  • "Yesterday they said..."
  • "Maybe they prefer..."

5. Produce the requested output

Choose the narrowest useful output for the user:

  • memory profile
  • cleaned memory list
  • prompt-ready context block
  • review of existing memory quality

If the user does not specify a format, default to:

  1. Stable preferences
  2. Working rules
  3. Anti-patterns
  4. A short reusable context block
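For the default format, the first three sections can double as the reusable context block. A minimal rendering sketch, assuming the distilled items are already plain strings (the function name and section titles are illustrative):

```python
def render_context_block(
    preferences: list[str], rules: list[str], anti_patterns: list[str]
) -> str:
    """Render stable preferences, working rules, and anti-patterns
    as a compact, prompt-ready block."""
    sections = [
        ("Stable preferences", preferences),
        ("Working rules", rules),
        ("Anti-patterns", anti_patterns),
    ]
    lines: list[str] = []
    for title, items in sections:
        lines.append(f"## {title}")
        lines.extend(f"- {item}" for item in items)
    return "\n".join(lines)

block = render_context_block(
    ["Prefer concise responses by default."],
    ["Use JSON when the user asks for structured output."],
    ["Avoid long background explanations unless requested."],
)
print(block)
```

Keeping the renderer this small reflects the guidance later in the document: the block must stay short enough to reuse without bloating future prompts.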

Examples

Example: conversation to profile

If the source says:

  • "Please keep answers concise."
  • "I prefer JSON when I ask for structured output."
  • "Do not add long background explanations unless I ask."

The distilled result should look like:

  • Prefer concise responses by default.
  • Use JSON when the user explicitly asks for structured output.
  • Avoid long background explanations unless requested.

Example: task outcomes to rules

If repeated successful tasks show:

  • good results when output is checklist-based
  • repeated failures when assumptions are not surfaced

The distilled result should look like:

  • Prefer checklist-style outputs for execution-heavy tasks.
  • Surface assumptions explicitly before committing to a plan.

Example: weak candidate to exclude

If the only evidence is:

  • "Yesterday the user wanted a long poetic answer."

Do not convert that into a durable preference unless there is more support.

Output Guidance

When producing memory content:

  • keep wording concise
  • keep claims evidence-based
  • prefer durable rules over narrative summaries
  • avoid hidden assumptions about the user
  • separate "confirmed" from "tentative" when needed

If a prompt-ready context block is requested, keep it short enough that it can realistically be reused without bloating future prompts.

Safety And Quality

  • Do not invent personal traits or preferences.
  • Do not retain sensitive details unless the user clearly wants them preserved.
  • Do not turn one failure into a permanent rule without evidence that it is recurring.
  • When in doubt, exclude the item or mark it tentative.
  • Prefer omission over noisy memory.
