.faf = Project DNA ✨ for AI-Context, On-Demand
by wolfe-jam
33+ MCP tools for .faf Project DNA. AI-ready context for Claude Desktop, VS Code, and any MCP host.
claude-faf-mcp
Tell AI what you're building, who it's for, and why it matters. 30 seconds. 🐘 It never forgets.
33 MCP tools. IANA-registered format (application/vnd.faf+yaml). 2,346 test executions per push.
The 3Ws — 3 Answers. That's It.
Every great product started with 3 answers to the 3Ws — Who, What, Why:
| | WHO is it for? | WHAT does it do? | WHY build it? |
|---|---|---|---|
| Uber | People who need a ride | Tap a button, car arrives | Taxis were broken |
| Airbnb | Travelers who can't afford hotels | Stay in someone's spare room | Millions of empty rooms exist |
| Slack | Teams drowning in email | Organized group messaging | Decisions buried in threads |
| Venmo | Friends splitting bills | Send money instantly | Someone always forgets to pay back |
Same pattern. Every product that works starts here. .faf captures it:

```yaml
human_context:
  who: "people who need a ride across town"
  what: "tap a button, car arrives in minutes"
  why: "taxis are slow, expensive, and hard to find"
```

30 seconds. Claude builds your project.faf from this. Every session after, AI starts smart.
The 6Ws — For Optimized AI
3Ws gets you started. For fully optimized AI, complete the set — Where, When, How:
```yaml
where: "mobile app, iOS and Android"    # where does it live?
when: "launch in 3 months"              # when is it shipping?
how: "GPS matching, real-time pricing"  # how does it work?
```
3Ws initiates the project with AI. 6Ws optimizes AI to 100%. Same YAML, same file. More examples → faf.one/ideas
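Assuming the 6Ws sit alongside the 3Ws under the same `human_context` key ("same YAML, same file"), a completed block might look like this, reusing the illustrative ride-hailing values from above:

```yaml
human_context:
  who: "people who need a ride across town"           # 3Ws
  what: "tap a button, car arrives in minutes"
  why: "taxis are slow, expensive, and hard to find"
  where: "mobile app, iOS and Android"                # +3 for the 6Ws
  when: "launch in 3 months"
  how: "GPS matching, real-time pricing"
```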
Quick Start
Copy and paste this to Claude:
> Install the FAF MCP server: run `npm install -g claude-faf-mcp`, then add this to my `claude_desktop_config.json` and restart Claude Desktop:

```json
{
  "mcpServers": {
    "faf": {
      "command": "npx",
      "args": ["-y", "claude-faf-mcp"]
    }
  }
}
```
Then tell Claude your 3Ws: "I'm building [what] for [who] because [why]"
How It Works
```
You → 3 answers → project.faf → AI reads it → every session → forever

project.faf ←── 8ms ──→ CLAUDE.md   (bi-sync, free)
project.faf ←── 8ms ──→ MEMORY.md   (tri-sync, Pro 🐘)
```
Claude does the rest. Zero-effort, right first time, fast, accurate, done. Language, framework, package manager, build tools — all auto-detected from your existing files. The human context is the part only you can give.
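To illustrate how lockfile-based auto-detection like this can work, here is a minimal sketch; `detectPackageManager` is a hypothetical helper, not part of the claude-faf-mcp API:

```typescript
// Illustrative sketch of stack auto-detection from lockfiles.
// detectPackageManager is a hypothetical helper for this README,
// not an actual export of claude-faf-mcp.
function detectPackageManager(files: string[]): string {
  if (files.includes("pnpm-lock.yaml")) return "pnpm";
  if (files.includes("yarn.lock")) return "yarn";
  if (files.includes("bun.lockb")) return "bun";
  if (files.includes("package-lock.json")) return "npm";
  return "unknown";
}

console.log(detectPackageManager(["package.json", "yarn.lock"])); // prints "yarn"
```

The same pattern extends to language and framework detection: look for `tsconfig.json`, `Cargo.toml`, `pyproject.toml`, and so on.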
Scoring: From Blind to Optimized
| Tier | Score | What it means |
|---|---|---|
| 🏆 Trophy | 100% | Gold Code — AI is optimized |
| 🥇 Gold | 99%+ | Near-perfect context |
| 🥈 Silver | 95%+ | Excellent |
| 🥉 Bronze | 85%+ | Production ready |
| 🟢 Green | 70%+ | Solid foundation |
| 🟡 Yellow | 55%+ | AI flipping coins |
| 🔴 Red | <55% | AI working blind |
| 🤍 White | 0% | No context at all |
At 55%, AI guesses half the time. At 100%, AI knows your project. Same compiler as faf-cli — same score everywhere.
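The tier boundaries in the table can be read as a simple threshold lookup. The sketch below is only an illustration of those boundaries; `tierFor` is hypothetical, and the real scoring compiler shared with faf-cli is not reproduced here:

```typescript
// Hypothetical helper mapping a 0-100 score onto the tier table above.
// Not the actual faf scoring compiler.
function tierFor(score: number): string {
  if (score >= 100) return "Trophy";
  if (score >= 99) return "Gold";
  if (score >= 95) return "Silver";
  if (score >= 85) return "Bronze";
  if (score >= 70) return "Green";
  if (score >= 55) return "Yellow";
  if (score > 0) return "Red";
  return "White"; // no context at all
}
```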
33 MCP Tools
All tools run standalone — zero CLI dependencies, 19ms average execution.
Create & Detect
| Tool | Purpose |
|---|---|
| `faf_init` | Initialize project DNA |
| `faf_auto` | Auto-detect stack and populate context |
| `faf_quick` | Lightning-fast creation (3ms) |
| `faf_readme` | Extract context from README (+25-35% boost) |
| `faf_formats` | Discover all formats in your project |
| `faf_git` | Extract context from any GitHub repo URL |
| `faf_human_add` | Add human context (the 6Ws) |
Validate & Score
| Tool | Purpose |
|---|---|
| `faf_score` | AI-readiness score (0-100%) with breakdown |
| `faf_check` | Validate .faf structure |
| `faf_doctor` | Diagnose and fix common issues |
| `faf_go` | Guided interview to Gold Code |
Sync & Persist
| Tool | Purpose |
|---|---|
| `faf_sync` | Sync .faf → CLAUDE.md |
| `faf_bi_sync` | Bi-directional .faf ↔ CLAUDE.md |
| `faf_tri_sync` | Tri-sync to MEMORY.md (Pro — 14-day free trial) |
| `faf_enhance` | Intelligent enhancement |
Export & Interop
| Tool | Purpose |
|---|---|
| `faf_agents` | Import/export AGENTS.md (OpenAI Codex) |
| `faf_cursor` | Import/export .cursorrules (Cursor IDE) |
| `faf_gemini` | Import/export GEMINI.md (Google Gemini) |
| `faf_conductor` | Import/export Conductor directory |
Read & Write
| Tool | Purpose |
|---|---|
| `faf_read` | Read any file |
| `faf_write` | Write any file |
| `faf_status` | Project status overview |
| `faf_debug` | Environment inspection |
| `faf_about` | What is .faf? |
🐘 Nelly Never Forgets (Pro)
bi-sync keeps .faf ↔ CLAUDE.md aligned. Free forever.
tri-sync adds MEMORY.md — your AI remembers across sessions. Feed Nelly, she never forgets.
```
bi-sync  = .faf ↔ CLAUDE.md              ← free forever
tri-sync = .faf ↔ CLAUDE.md ↔ MEMORY.md  ← Pro 🐘
```
$3/mo · $19/yr · $29/yr Global. 14-day free trial, no signup. Friends of FAF → faf.one/pro
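One plausible shape for a bi-directional sync is "newest modification time wins". This is purely an illustrative sketch of the idea, not how `faf_bi_sync` is actually implemented:

```typescript
// Illustrative newest-wins sync between two files. biSync is a
// hypothetical helper; faf_bi_sync's real strategy is not shown here.
import { statSync, copyFileSync } from "node:fs";

function biSync(pathA: string, pathB: string): string {
  const a = statSync(pathA).mtimeMs;
  const b = statSync(pathB).mtimeMs;
  if (a === b) return "already in sync";
  const [src, dst] = a > b ? [pathA, pathB] : [pathB, pathA];
  copyFileSync(src, dst); // newer content propagates to the older file
  return `${src} -> ${dst}`;
}
```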
The .FAF Position
```
Model      Context   Protocol
─────      ───────   ────────
Claude   →  .faf   →   MCP
Gemini   →  .faf   →   MCP
Codex    →  .faf   →   MCP
Any LLM  →  .faf   →   MCP
```
IANA-registered (application/vnd.faf+yaml). Works with any AI. Define once, use everywhere.
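For tooling that labels or serves .faf files, the registered media type can be resolved from the file extension. In the sketch below, only the `.faf` entry comes from the IANA registration; the other mappings are ordinary well-known types included for illustration, and `mediaTypeFor` is a hypothetical helper:

```typescript
// Resolve a media type from a file extension. Only ".faf" ->
// application/vnd.faf+yaml is from the IANA registration; the rest
// of the table is illustrative.
const MEDIA_TYPES: Record<string, string> = {
  ".faf": "application/vnd.faf+yaml",
  ".json": "application/json",
  ".yaml": "application/yaml",
};

function mediaTypeFor(filename: string): string {
  const dot = filename.lastIndexOf(".");
  const ext = dot >= 0 ? filename.slice(dot) : "";
  return MEDIA_TYPES[ext] ?? "application/octet-stream";
}
```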
Ecosystem
| Package | Platform | Registry |
|---|---|---|
| claude-faf-mcp (this) | Claude | npm |
| faf-cli | Universal CLI | npm + Homebrew |
| gemini-faf-mcp | Google Gemini | PyPI |
| grok-faf-mcp | xAI Grok | npm |
| rust-faf-mcp | Rust | crates.io |
| faf-wasm | Browser/Edge | npm |
| Chrome Extension | Browser | Chrome Web Store |
Same project.faf. Same scoring. Same result. Different execution layer.
Quality
391 tests · 12 suites · 6 platforms (ubuntu/macos/windows × Node 18/20)
Privacy
Everything runs locally. No data leaves your machine. No analytics, no telemetry, no tracking, no accounts. Privacy policy →
License
MIT — Free and open source
FAF Family
| faf-cli | npx faf-cli init — create .faf for any project |
| claude-faf-mcp | MCP server for Claude Desktop |
| gemini-faf-mcp | MCP server for Gemini CLI |
| grok-faf-mcp | MCP server for Grok |
| faf-mcp | MCP server for Cursor, Windsurf, Cline, VS Code |
| rust-faf-mcp | MCP server in Rust |
| faf-skills | 17 Claude Code skills |
| faf.one | Blog, downloads, docs |
| IANA Registration | application/vnd.faf+yaml |
format | driven 🏎️⚡️ wolfejam.dev