> What is Cokodo Agent? It provides `.agent/` project context, reference material, and collaboration info to MCP-compatible IDEs for AI-assisted development.
# AI Agent Collaboration Protocol

## What does cokodo-agent do?
cokodo-agent (`co`) is a CLI plus a convention directory, `.agent/`, that turns “how the AI collaborates, what it remembers, and which rules it codes by” into versionable, syncable, checkable project assets.
| Problem | Approach |
|---|---|
| AI “forgets” every new chat | Single entry point: `start-here.md`, `status.md`, MCP for on-demand context |
| Each IDE has its own rules | `co adapt` generates Cursor / Claude / Copilot / Gemini / Codex entry files once; the protocol stays in `.agent/` only |
| Spec and implementation drift | v1.9+ uses `co change` to manage “proposal → tasks → archive” under `project/changes/`, SDD-style like OpenSpec, with `co lint` checks |
In short: it ships the protocol, session state, and optional SDD change units inside the repo, wired together by Python and `co serve`. It does not replace the IDE; it fixes project-level context.
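To make “session state inside the repo” concrete, a `status.md` might carry something like the following. This is a hypothetical sketch; the fields and layout of cokodo's actual template will differ:

```markdown
# status.md — cross-session state (hypothetical sketch)

Active change: feature-x
Last session: refactored auth middleware; tests green
Next steps:
- wire the MCP context lookup
- update tech-stack.md
```

Because this file lives in `.agent/project/`, it is versioned with the repo, and a fresh chat can resume from it instead of starting cold.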
## cokodo-agent vs OpenSpec
The two can be used together; cokodo keeps SDD inside .agent/ and is oriented toward “one repo, one toolchain.”
| Aspect | OpenSpec | cokodo-agent |
|---|---|---|
| Focus | Align on “what to build” before coding (change units + living specs) | Align on “who the project is, where it stands” (protocol + cross-session state) |
| Changes/workflow | `changes/` → propose → apply → archive; slash commands | `co change new \| apply \| list \| archive`; same model as `status.md` and MCP |
| Living specs | `openspec/specs/` as first-class | `.agent/project/specs/` + optional merge on archive |
| MCP | No official MCP | `co serve` / `co serve --workspace`; IDE queries context, status, changes |
| Protocol upgrade | Manual template follow-up | `co diff` / `co sync`; `project/` is not overwritten |
| Multi-IDE | `AGENTS.md` + per-IDE slash commands | `co adapt cursor\|claude\|copilot\|gemini\|codex\|all` |
| Runtime | Node.js 20.19+ | Python 3.10+ |
| Cross-project | Single-repo oriented | `co ref` / `co collab` / global registration |
Relationship: OpenSpec excels at spec and change-unit expression and community workflow; cokodo-agent excels at session continuity, MCP, protocol versioning, and one flow for all IDEs. From v1.9, cokodo includes SDD change units; see OpenSpec comparison research and ADR-009 for tradeoffs. Thanks to OpenSpec (Fission-AI) for the inspiration.
## Common scenarios and `co` commands (concise)
### Install and version

```bash
pipx install cokodo-agent   # or: pip install cokodo-agent
# MCP is included; use `co serve` for IDE integration
co version
```
### New project setup

```bash
cd your-repo
co init                  # interactive: create .agent/
co init -y --tools all   # non-interactive + generate all IDE entry files
# then edit .agent/project/context.md and tech-stack.md
```
### Existing .agent/: only add or refresh IDE entry files

```bash
co adapt cursor   # or claude / copilot / gemini / codex / all
co detect         # see which IDE files already exist in the repo
```
### Align the protocol with the repo's .agent/

```bash
co diff      # compare with the remote/bundled protocol
co sync -y   # upgrade core/ etc.; keep project/
co lint      # compliance + changes structure (when using change units)
```
### One feature, spec-driven (SDD)

```bash
co change new feature-x                     # skeleton: proposal / specs / design / tasks
co change apply feature-x                   # set as active, update status.md
co change list                              # see progress; when done:
co change archive feature-x --merge-specs   # archive, optionally merge the specs snapshot
co change new fix-y --schema minimal        # small change: specs + tasks only
```
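As a rough sketch, a change unit created by `co change new` might lay out like this. The file names are inferred from the skeleton list above (proposal / specs / design / tasks) and are assumptions; check the tree `co change new` actually generates:

```text
.agent/project/changes/feature-x/
├── proposal.md   # why and what
├── design.md     # how
├── specs/        # spec deltas for this change
└── tasks.md      # implementation checklist
```

On archive, the unit moves out of the active set, and `--merge-specs` folds its spec snapshot into `.agent/project/specs/`.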
### Use MCP from the IDE (less typing, direct protocol access)

```bash
co serve               # stdio; configure in Cursor/Claude/Copilot/Gemini/Codex
co serve --workspace   # serve multiple project directories
# entry files from `co adapt` include MCP setup notes
```
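For IDEs that follow the common `mcpServers` JSON convention (Cursor and Claude Desktop both do; the file location and surrounding keys vary by IDE), a stdio registration for `co serve` might look like the fragment below. The server name `cokodo` is a placeholder, and the exact shape your IDE expects may differ — prefer the setup notes in the entry files generated by `co adapt`:

```json
{
  "mcpServers": {
    "cokodo": {
      "command": "co",
      "args": ["serve"]
    }
  }
}
```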
### Multi-repo context

```bash
co ref add ../other/.agent --name other
co collab add ../lib/.agent --name lib --role replica
co ref check
co collab status
```
### Other common commands

```bash
co status             # view or initialize status.md
co scaffold           # fill in missing project/ files
co context            # list context files by stack/task
co journal            # append to the session journal
co update-checksums   # maintainers: refresh manifest checksums
```
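To illustrate what a checksum pass over a manifest can look like, here is a generic, self-contained sketch. It is not cokodo's implementation: the `"files"` key and per-file SHA-256 hex digests are assumptions made for the example, and the real `manifest.json` schema may differ.

```python
# Generic sketch of manifest checksum verification (illustrative only;
# the "files" key and sha256 digests are assumptions, not cokodo's schema).
import hashlib
import tempfile
from pathlib import Path


def sha256_of(path: Path) -> str:
    """Hex SHA-256 digest of a file's bytes."""
    return hashlib.sha256(path.read_bytes()).hexdigest()


def verify_manifest(root: Path, manifest: dict) -> list[str]:
    """Return relative paths whose on-disk digest no longer matches."""
    stale = []
    for rel, expected in manifest.get("files", {}).items():
        if sha256_of(root / rel) != expected:
            stale.append(rel)
    return stale


if __name__ == "__main__":
    with tempfile.TemporaryDirectory() as tmp:
        root = Path(tmp)
        (root / "start-here.md").write_text("# Start here\n")
        manifest = {"files": {"start-here.md": sha256_of(root / "start-here.md")}}
        assert verify_manifest(root, manifest) == []        # everything matches
        (root / "start-here.md").write_text("edited\n")
        assert verify_manifest(root, manifest) == ["start-here.md"]  # drift detected
```

A tool like `co lint` can flag drift this way, while `co update-checksums` would be the step that rewrites the expected digests after an intentional edit.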
## Further documentation

| Document | Description |
|---|---|
| Usage guide (English) | Full command reference |
| Usage guide (Chinese) | The most complete reference for commands and options |
| Protocol / CLI version config | How config and `co version` / `co lint` use versions |
| cokodo-agent/README.md | Package and development notes |
| OpenSpec comparison research | Point-by-point comparison and ADR |
## Agent Protocol overview (.agent/)

The protocol separates engine from instance: `core/` holds generic rules, `project/` holds project state and context. Removing `.agent/` should not affect the build, and copying it into another project lets you reuse it.
```text
.agent/
├── start-here.md    # AI reads this first
├── manifest.json
├── core/            # governance engine
├── project/         # context, status, changes, specs…
├── adapters/
└── scripts/
```
## Repository structure

```text
agent_protocol/
├── .agent/         # protocol reference implementation
├── cokodo-agent/   # CLI source (PyPI package)
└── docs/           # usage guides and SOP
```
## Acknowledgements

- OpenSpec: spec-driven development and the change-unit workflow. This repo's v1.9 `co change` and `project/changes/` are inspired by it, implemented in Python with native MCP integration. See `.agent/project/research/openspec-analysis.md`.

Protocol: 3.2.1 | CLI: see `cokodo-agent`