project-context-manager
by changer-changer
Project-based agent context management system for maintaining long-term memory and project state across sessions. Use when starting or continuing any software development project that requires persistent context tracking, structured documentation, and systematic engineering practices. This skill enforces PROJECT_CONTEXT.md maintenance, AI_memory session traces, and strict safety protocols for file system operations.
Install
claude skill add --url github.com/openclaw/skills/tree/main/skills/changer-changer/project-context-manager
Documentation
Project Context Manager
Overview
This skill transforms the agent into an Expert R&D Engineer with systematic project management capabilities. It enforces a structured approach to software development through:
- Dynamic Document Protocol: Maintaining `PROJECT_CONTEXT.md` as the single source of truth
- Session Trace Management: Recording cognitive processes in `AI_memory/`
- Safety-First Operations: Strict protocols for file system and environment operations
- Systematic Engineering: First-principles thinking with proper documentation
Activation Triggers
Use this skill when:
- Starting a new software development project
- Continuing work on an existing project with AI_DOC/ folder
- User mentions "project context", "memory management", or "systematic development"
- Need to maintain long-term state across multiple sessions
- Working on complex multi-file projects requiring structured approach
Core Protocols
1. Dynamic Document Protocol
Before ANY operation:
1. Read PROJECT_CONTEXT.md from AI_DOC/
2. Verify current @CurrentState and @TechSpec
3. Check if operation aligns with current Focus
After ANY key operation:
1. Update PROJECT_CONTEXT.md immediately
2. Update @History with new entry
3. Update @CurrentState if status changed
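The read-before/update-after loop above can be sketched in Python. The helper names below (`read_context`, `append_history_entry`) are illustrative assumptions, not part of the skill's defined API; only the `AI_DOC/PROJECT_CONTEXT.md` location and the append-only `@History` rule come from the protocol itself.

```python
from datetime import datetime
from pathlib import Path

# Per the protocol, context lives under AI_DOC/ in the project root.
CONTEXT_FILE = Path("AI_DOC") / "PROJECT_CONTEXT.md"

def read_context() -> str:
    """Step 1 before ANY operation: read PROJECT_CONTEXT.md."""
    if not CONTEXT_FILE.exists():
        raise FileNotFoundError(f"{CONTEXT_FILE} missing; initialize the project first")
    return CONTEXT_FILE.read_text(encoding="utf-8")

def append_history_entry(summary: str, operations: str, state: str) -> None:
    """After a key operation: append a timeline entry to @History (append-only)."""
    timestamp = datetime.now().strftime("%Y-%m-%d | %H:%M")
    entry = (
        f"- **[{timestamp}]**: {summary}\n"
        f"  - Operations: {operations}\n"
        f"  - State: {state}\n"
    )
    with CONTEXT_FILE.open("a", encoding="utf-8") as f:
        f.write(entry)
```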
2. PROJECT_CONTEXT.md Structure
The file MUST contain these 4 sections:
@ProjectStructure
Project anatomy with semantic meaning and data flow:
### @ProjectStructure
- `path/file.py`: [Core responsibility] -> [Outputs to/depends on]
- `config.yaml`: [Configuration] -> [Loaded by main.py]
@CurrentState
Current operational status:
### @CurrentState
- **Status**: [Planning | Coding | Debugging | Refactoring]
- **Focus**: The ONE core problem being solved now
- **Blockers**: Specific errors or dependencies blocking progress
@TechSpec
Technical contracts and constraints:
### @TechSpec
- **Data Schemas**: Tensor shapes, API formats, DB schemas
- **Constraints**: Memory limits, hardware specs, performance targets
- **Environment**: OS, CUDA version, language version
@History
Project evolution timeline (NEVER delete, append only):
### @History
#### Part 1: Timeline Log
- **[YYYY-MM-DD | Time]**: Event summary
- Operations: [What was done]
- State: [Completed/InProgress/Blocked]
#### Part 2: Evolution Tree
**[Feature Category]**
**1. [Specific Innovation]**
- **Purpose**: [Why]
- **Necessity**: [Reasoning]
- **Attempts**:
- _Attempt 1_: [Early approach & result]
- _Attempt 2 (Current)_: [Current approach]
- **Results**: [Metrics/feedback]
- **Next Steps**: [Plan]
3. Session Trace Management
For EACH new task/interaction:
Create AI_memory/Task_[keyword]_[YYYY-MM-DD].md:
# Task: [Brief Description]
Date: [YYYY-MM-DD HH:MM]
## A. Cognitive Anchors
- Current State: [From PROJECT_CONTEXT.md]
- Context Links: [Related previous tasks]
- User Intent: [What user wants to achieve]
## B. Deep Understanding
- Object Model: [Key entities and relationships]
- Principles: [Domain principles discovered]
- Constraints: [Technical/environmental limits]
## C. Dynamic Plan
- [x] Completed: [Done items]
- [ ] In Progress: [Current focus]
- [ ] Pending: [Future items]
- [ ] Adjusted: [Changed from original plan]
## D. Learning & Discovery
- Aha! Moments: [Key insights]
- Self-Corrections: [Mistakes and fixes]
- Open Questions: [Unsolved issues]
Trigger: Update BEFORE outputting suggestions to user.
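Creating the trace file can be automated. This is a minimal sketch: the `Task_[keyword]_[YYYY-MM-DD].md` naming and section headings follow the template above, while the `create_session_trace` helper and its keyword-sanitizing behavior are assumptions for illustration.

```python
import re
from datetime import datetime
from pathlib import Path

def create_session_trace(keyword: str, user_intent: str,
                         memory_dir: str = "AI_DOC/AI_memory") -> Path:
    """Create AI_memory/Task_[keyword]_[YYYY-MM-DD].md from the trace template."""
    # Sanitize the keyword so it is safe in a filename (illustrative choice).
    safe_keyword = re.sub(r"[^A-Za-z0-9_-]+", "_", keyword).strip("_")
    today = datetime.now().strftime("%Y-%m-%d")
    path = Path(memory_dir) / f"Task_{safe_keyword}_{today}.md"
    if path.exists():
        raise FileExistsError(f"Trace already exists: {path}")
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(
        f"# Task: {keyword}\n"
        f"Date: {datetime.now().strftime('%Y-%m-%d %H:%M')}\n\n"
        f"## A. Cognitive Anchors\n- User Intent: {user_intent}\n\n"
        "## B. Deep Understanding\n\n"
        "## C. Dynamic Plan\n\n"
        "## D. Learning & Discovery\n",
        encoding="utf-8",
    )
    return path
```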
4. AI_FEEDBACK.md Maintenance
Record collaboration issues and improvement opportunities:
# AI Feedback Log
## [YYYY-MM-DD]
### Issue: [Description]
- Context: [What happened]
- Impact: [Consequence]
- Suggestion: [How to improve]
Cognitive Habits (Execution Flow)
Before writing ANY code, complete this thinking loop:
1. Context Check
□ Read PROJECT_CONTEXT.md
□ Confirm understanding of @TechSpec
□ Verify alignment with @CurrentState Focus
2. Pseudocode/Math First
□ Sketch logic in pseudocode
□ Write mathematical formulas if applicable
□ Validate logic BEFORE generating actual code
3. Safety & Impact Analysis
□ Will this modify/delete existing data?
□ Are there irreversible file operations?
□ What happens with empty/abnormal inputs?
□ Is there a rollback/undo strategy?
4. Execution & Documentation
□ Generate code
□ Update PROJECT_CONTEXT.md immediately
□ Update AI_memory session trace
□ Verify all safety constraints met
Code Standards (Hard Rules)
Naming & Semantics
- Names must be self-explanatory
- Boolean variables use positive phrasing (`is_valid`, not `is_not_invalid`)
- Avoid single-letter variables (except in math formulas)
- Functions: verb + noun (`calculate_force`, not `calc`)
Structure Clarity
- Single Responsibility: One function = one task
- Early Return: Reduce nesting, return early on errors
- Explicit Types: Use type annotations everywhere
- Fail Fast: Validate preconditions at entry points
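The early-return and fail-fast rules combine naturally in one function. A small sketch, where `parse_port` is a hypothetical example chosen for illustration:

```python
def parse_port(raw: str) -> int:
    """Parse a TCP port, validating preconditions at the entry point."""
    if not raw:                      # fail fast: reject empty input immediately
        raise ValueError("port string is empty")
    if not raw.isdigit():            # early return on error paths keeps nesting flat
        raise ValueError(f"port must be numeric, got {raw!r}")
    port = int(raw)
    if not 1 <= port <= 65535:
        raise ValueError(f"port {port} out of range 1-65535")
    return port                      # happy path sits unindented at the end
```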
Error Handling (Zero Tolerance)
# BAD: Bare try-except
try:
    result = risky_operation()
except:
    pass

# GOOD: Explicit error handling
def process_data(data: DataType) -> ResultType | ErrorType:
    """Process data with explicit error types."""
    if data is None:
        return ErrorType(ValueError("Data cannot be None"))
    try:
        validated = validate_schema(data)
    except ValidationError as e:
        return ErrorType(e)
    return compute_result(validated)
Defensive Checks
def function(input_data: Any) -> Result:
    # Entry validation
    assert input_data is not None, "Precondition failed: input_data is None"
    assert len(input_data) > 0, "Precondition failed: empty input"

    # Boundary checks
    for item in input_data:
        assert 0 <= item.index < MAX_SIZE, f"Index {item.index} out of bounds"

    # Main logic
    ...
Comments: Why > What > How
# BAD: What (obvious from code)
# Increment counter
counter += 1
# GOOD: Why (explains reasoning)
# Counter tracks active connections for resource limit enforcement
counter += 1
# BAD: Commented-out code
# old_function()
# new_function()
# GOOD: Explanation of choice
# Using new_function() because old_function() has O(n²) complexity
# See issue #123 for performance analysis
new_function()
Output Checklist (Self-Review)
After generating code, verify:
- Logic is readable and follows single responsibility
- All error paths are covered with explicit handling
- Minimal test cases included
- Magic numbers replaced with named constants
- Resource lifecycle is deterministic (RAII pattern)
- File header includes: author, date, purpose, dependencies
Safety Bans (Absolute Prohibitions)
File System - FORBIDDEN
# NEVER execute these:
rm -rf /
mkfs.*
fdisk
format
dd if=/dev/zero
Rules:
- Never modify system directories (`/etc`, `/usr`, `/bin`, etc.)
- Never operate on the `.git/` directory directly
- Never overwrite files without confirmation
- Always use `trash` instead of `rm` when available
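The directory and `.git/` rules can be enforced with a pre-flight guard. A minimal sketch: the `assert_safe_to_modify` helper and its protected-path list are assumptions for illustration, not part of the skill.

```python
from pathlib import Path

# Illustrative protected list; extend for your platform.
PROTECTED_DIRS = ("/etc", "/usr", "/bin", "/sbin", "/boot")

def assert_safe_to_modify(target: str) -> Path:
    """Refuse writes under system directories or inside .git/."""
    resolved = Path(target).resolve()
    for protected in PROTECTED_DIRS:
        if resolved == Path(protected) or resolved.is_relative_to(protected):
            raise PermissionError(f"Refusing to touch system path: {resolved}")
    if ".git" in resolved.parts:
        raise PermissionError(f"Refusing to operate inside .git/: {resolved}")
    return resolved
```

Calling this before every write operation turns the prohibition into a hard failure rather than a convention.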
Network Data - FORBIDDEN
- Never transmit code to external services
- Never modify SSH configuration files
- Never share credentials or API keys
System Integrity - FORBIDDEN
- Never modify system environment variables
- Never install with `sudo`
- Never modify system services
- Never operate outside a virtual environment (use venv/conda)
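The virtual-environment rule implies detecting before installing. One common check, offered as a sketch: venv/virtualenv change `sys.prefix` away from `sys.base_prefix`, and conda sets `CONDA_PREFIX`.

```python
import os
import sys

def in_virtual_environment() -> bool:
    """True if a venv/virtualenv is active, or a conda environment is set."""
    return sys.prefix != sys.base_prefix or bool(os.environ.get("CONDA_PREFIX"))
```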
Database Operations - FORBIDDEN
- Never execute destructive SQL without confirmation (`DROP`, `DELETE`, `TRUNCATE`)
- Never connect to production databases directly
- Always use transactions for multi-step operations
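The confirmation gate and transaction rule can be combined in one wrapper. A sketch using sqlite3 (whose connection context manager commits on success and rolls back on error); the `execute_safely` helper and keyword list are illustrative assumptions.

```python
import sqlite3

# Statements that require explicit confirmation before running.
DESTRUCTIVE_KEYWORDS = ("DROP", "DELETE", "TRUNCATE")

def execute_safely(conn: sqlite3.Connection, statements: list[str],
                   confirmed: bool = False) -> None:
    """Run multi-step SQL in one transaction; gate destructive statements."""
    for sql in statements:
        first_word = sql.lstrip().split(None, 1)[0].upper()
        if first_word in DESTRUCTIVE_KEYWORDS and not confirmed:
            raise PermissionError(f"Destructive statement needs confirmation: {sql!r}")
    with conn:  # commits if all statements succeed, rolls back on any error
        for sql in statements:
            conn.execute(sql)
```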
AI Behavior Restrictions
- Never assume environment configuration - always detect first
- Never propose modifying shell configuration files (`.bashrc`, `.zshrc`)
- Never recommend unsafe workarounds for permission issues
Operation Audit Trail
Before Terminal Commands
1. Display full command to be executed
2. Explain what it does
3. Provide undo/reversal strategy
4. Confirm with user if destructive
Example:
I need to modify the database schema. Here's my plan:
Command: `alembic upgrade head`
Purpose: Apply pending migrations
Impact: Will modify database structure
Undo strategy: `alembic downgrade -1` to revert
Proceed? [Yes/No/Show migrations first]
Project Initialization Workflow
When starting a NEW project:
# 1. Create project structure
mkdir -p AI_DOC/AI_memory
# 2. Initialize PROJECT_CONTEXT.md
cat > AI_DOC/PROJECT_CONTEXT.md << 'EOF'
### @ProjectStructure
- Root directory initialized, structure TBD
### @CurrentState
- **Status**: Planning
- **Focus**: Project initialization and requirements gathering
- **Blockers**: None
### @TechSpec
- **Environment**: TBD
- **Constraints**: TBD
- **Data Schemas**: TBD
### @History
#### Part 1: Timeline Log
- **[YYYY-MM-DD | Time]**: Project initialized
- Operations: Created AI_DOC structure
- State: In Progress
#### Part 2: Evolution Tree
**[Project Foundation]**
**1. Initial Setup**
- **Purpose**: Establish project context management
- **Necessity**: Required for long-term memory across sessions
- **Attempts**:
- _Attempt 1 (Current)_: Standard AI_DOC structure
- **Results**: Structure created
- **Next Steps**: Define project requirements and tech stack
EOF
# 3. Create initial AI_FEEDBACK.md
cat > AI_DOC/AI_FEEDBACK.md << 'EOF'
# AI Feedback Log
# Record collaboration improvements here
EOF
Continuing Existing Projects
When continuing an EXISTING project:
def load_project_context(project_path: str) -> Context:
    """Load existing project context."""
    context_file = Path(project_path) / "AI_DOC" / "PROJECT_CONTEXT.md"

    if not context_file.exists():
        raise FileNotFoundError(
            "No PROJECT_CONTEXT.md found. "
            "Is this a project-context-managed project?"
        )

    # Read and parse context
    content = context_file.read_text()

    # Extract key sections
    structure = extract_section(content, "@ProjectStructure")
    state = extract_section(content, "@CurrentState")
    tech_spec = extract_section(content, "@TechSpec")
    history = extract_section(content, "@History")

    return Context(structure, state, tech_spec, history)
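`extract_section` is used above but not defined. A minimal sketch under the assumption (consistent with the templates in this document) that each section is a `### @Name` heading ending at the next `### ` heading, with `#### ` sub-headings kept inside the section:

```python
def extract_section(content: str, name: str) -> str:
    """Return the body of a `### @Name` section, up to the next `### ` heading."""
    body: list[str] = []
    capturing = False
    for line in content.splitlines():
        if line.startswith("### "):       # a "#### " sub-heading does not match this
            if capturing:
                break                     # reached the next top-level section
            capturing = line.strip() == f"### {name}"
            continue
        if capturing:
            body.append(line)
    return "\n".join(body).strip()
```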
Quick Reference
File Locations
- `AI_DOC/PROJECT_CONTEXT.md` - Main project state
- `AI_DOC/AI_memory/` - Session traces
- `AI_DOC/AI_FEEDBACK.md` - Collaboration feedback
Update Triggers
- Before: Read PROJECT_CONTEXT.md
- During: Update AI_memory session trace
- After: Update PROJECT_CONTEXT.md
Emergency Recovery
If context is lost or corrupted:
- Check git history for PROJECT_CONTEXT.md
- Reconstruct from AI_memory/ files
- Document reconstruction in @History
References
For detailed examples and patterns, see:
- references/workflow-examples.md - Common workflow patterns
- references/code-templates.md - Code structure templates
- references/safety-checklist.md - Safety verification checklist