SKILL: ai-cli-orchestrator (Multi AI CLI Orchestrator)
by cnatom
Version: 2.0.0 (2026-03-16)
Installation

```shell
claude skill add --url github.com/openclaw/skills/tree/main/skills/cnatom/ai-cli-orchestrator
```

Documentation
Status: Stable
Expertise: CLI Automation, Error Recovery, Tool Chain Management
1. Description
ai-cli-orchestrator is a meta-skill that integrates multiple AI CLI tools (such as Gemini CLI, Cursor Agent, Claude Code) to build a highly available automation workflow. It intelligently identifies the AI toolchain in the current environment, allocates the optimal tool based on task type, and achieves seamless task context transfer with automatic fallback when the primary tool encounters rate limits, API failures, or logical bottlenecks.
2. Trigger Scenarios
- Complex Coding Tasks: Large-scale refactoring across files and modules, where a single AI's reasoning hits a bottleneck.
- High Stability Requirements: CI/CD or automation scripts whose tasks must not be interrupted by API fluctuations of a single AI service.
- Domain-Specific Optimization: Leveraging the strengths of different AIs (e.g., Gemini's long context, Claude's rigorous code logic).
- Resource Limits: The primary tool hits token or rate limits and a switch to a backup is needed.
3. Core Workflow
3.1 Discovery Phase
- Auto-Scan: Scan the system PATH to detect installed AI CLI tools (`gemini`, `cursor-agent`, `claude`, etc.).
- Availability Check: Run `tool --version` or a simple echo test to verify API key validity.
- Environment Sync: Read `.ai-config.yaml` or `.env` from the project root for permission config.
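The discovery step above can be sketched as follows. The candidate tool names and the `--version` probe come from this section; the 5-second probe timeout is an assumption:

```python
import shutil
import subprocess

def discover_tools(candidates=("gemini", "cursor-agent", "claude")):
    """Return the subset of candidate AI CLIs that are on PATH and respond to --version."""
    available = {}
    for name in candidates:
        path = shutil.which(name)
        if path is None:
            continue  # not installed; skip
        try:
            # Cheap availability probe; a failing probe usually means a broken
            # install or a missing/invalid API key.
            result = subprocess.run([name, "--version"],
                                    capture_output=True, text=True, timeout=5)
            available[name] = {"path": path, "ok": result.returncode == 0}
        except (subprocess.TimeoutExpired, OSError):
            available[name] = {"path": path, "ok": False}
    return available
```

A tool that is on PATH but fails the probe is kept with `ok: False`, so the configuration wizard can still list it and explain why it is unusable.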
3.2 User Configuration
1. Auto-Scan Available AI CLI
🤖 AI Assistant Initialization
Detected AI CLI tools:
✅ gemini - Installed
❌ cursor-agent - Not detected
✅ claude - Installed
Select tools to enable (multi-select):
[1] gemini
[2] cursor-agent
[3] claude
[4] Add custom...
2. Add Custom AI CLI
Enter command name: kimi
Enter test command: kimi --version
Enter description: Moonshot AI
3. Set Priority
Priority (lower number = higher priority):
1. gemini
2. claude
4. Select Strategy
Choose AI response strategy:
[1] AI CLI First
- When receiving questions, automatically use AI CLI to search for answers first
[2] Direct Response
- Use model capabilities directly
[3] Hybrid Mode
- Simple questions answered directly, complex questions use AI CLI
3.3 Task Dispatching Phase
- Intent Recognition: Analyze user input (Research, Code, or Debug?).
- Priority Matching: Select preferred tool based on priority matrix.
- Session Management:
- Check for associated Session ID.
- For continuous tasks, try to inject intermediate outputs (diff or thought chain) as context to the new tool.
3.4 Monitoring & Fallback Phase
- Real-time Monitoring: Monitor CLI stderr and exit codes.
- Failure Detection:
- A non-zero exit code combined with messages such as "rate limit", "overloaded", or "auth error".
- Output fails local validation three times consecutively.
- State Handover: Start backup tool, automatically retry failed instruction.
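The monitor/fallback loop described above could be sketched like this; the `-p` prompt flag is an assumption about how each CLI accepts input, and the failure markers mirror the detection rules:

```python
import subprocess

# Failure signatures from the detection rules above.
FAILURE_MARKERS = ("rate limit", "overloaded", "auth error")

def run_with_fallback(tools, prompt, timeout=120):
    """Try each tool in priority order; fall back on failure signatures,
    stalls, or missing binaries, retrying the same instruction."""
    for tool in tools:
        try:
            result = subprocess.run([tool, "-p", prompt],  # '-p' flag is an assumption
                                    capture_output=True, text=True, timeout=timeout)
        except (subprocess.TimeoutExpired, OSError):
            continue  # stalled or missing binary: hand over to the next tool
        stderr = result.stderr.lower()
        if result.returncode != 0 and any(m in stderr for m in FAILURE_MARKERS):
            continue  # known transient failure: retry elsewhere
        if result.returncode == 0:
            return tool, result.stdout
    raise RuntimeError("all configured tools failed")
```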
4. Configuration Example
Create `.ai-cli-orchestrator.yaml` in the project root:

```yaml
version: "2.0"

settings:
  default_strategy: "balanced"   # options: speed, quality, economy
  auto_fallback: true
  max_retries: 2

tools:
  gemini:
    priority: 1
    alias: "gemini"
    capabilities: ["long-context", "multimodal", "fast-search"]
  cursor-agent:
    priority: 2
    alias: "cursor"
    capabilities: ["codebase-indexing", "surgical-edit"]
  claude-code:
    priority: 3
    alias: "claude"
    capabilities: ["logic-reasoning", "unit-testing"]

strategies:
  balanced:
    primary: "gemini"
    secondary: "cursor-agent"
    emergency: "claude-code"
```
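Resolving a strategy into an ordered fallback chain could look like the sketch below. The config is shown as an already-parsed dict mirroring the YAML above; loading the file itself would typically use PyYAML:

```python
# Parsed form of the .ai-cli-orchestrator.yaml example.
CONFIG = {
    "settings": {"default_strategy": "balanced", "auto_fallback": True, "max_retries": 2},
    "tools": {
        "gemini":       {"priority": 1, "alias": "gemini"},
        "cursor-agent": {"priority": 2, "alias": "cursor"},
        "claude-code":  {"priority": 3, "alias": "claude"},
    },
    "strategies": {
        "balanced": {"primary": "gemini", "secondary": "cursor-agent", "emergency": "claude-code"},
    },
}

def fallback_chain(config, strategy=None):
    """Return tool aliases in the order the orchestrator should try them."""
    name = strategy or config["settings"]["default_strategy"]
    roles = config["strategies"][name]
    # primary -> secondary -> emergency, translated to each tool's shell alias
    return [config["tools"][roles[r]]["alias"] for r in ("primary", "secondary", "emergency")]
```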
5. Error Handling
| Error Type | Detection | Response |
|---|---|---|
| Rate Limit | 429 Too Many Requests | Record offset, switch to next tool, delay 30s then reset. |
| Logic Loop | Same File Edit 3 times | Force interrupt, output context, request higher-level tool. |
| Auth Failed | 401 Unauthorized | Try local backup .env; if failed, skip and notify user. |
| Network Timeout | ETIMEDOUT | Retry once; if still fails, switch to offline mode or backup CLI. |
| Command Not Found | command not found | Skip this tool, switch to next available tool. |
| Stalled > 30s | Timeout | Force interrupt, switch tool and retry. |
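The detection column of the table above can be turned into a small classifier; the action names are shorthand labels for the responses in the table, not commands from the skill:

```python
import re

# Detection patterns and response labels, mirroring the error-handling table.
ERROR_RULES = [
    (r"429|rate limit",    "switch_tool_then_cooldown"),
    (r"401|unauthorized",  "try_backup_env_or_skip"),
    (r"ETIMEDOUT",         "retry_once_then_switch"),
    (r"command not found", "skip_tool"),
]

def classify_error(stderr: str) -> str:
    """Map raw stderr text to one of the responses in the error-handling table."""
    for pattern, action in ERROR_RULES:
        if re.search(pattern, stderr, re.IGNORECASE):
            return action
    return "unknown"
```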
6. Session Management
6.1 Task Metadata
Each task associates:
- TaskID (unique identifier)
- File snapshots (task-related files)
- Command history (executed commands)
- Last summary
6.2 Session Switching Rules
| Scenario | Action |
|---|---|
| Same task | Keep long conversation, don't create new session |
| Different task | Create new session |
| Return to previous task | Switch to corresponding session |
6.3 Context Recovery
When switching back to old task:
- Read task summary
- Load key history fragments
- Quickly restore state
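The metadata, switching rules, and context-recovery steps from Section 6 can be sketched as one small store; the field names and the five-fragment limit are assumptions:

```python
import time

class SessionStore:
    """Keeps one session per TaskID and applies the switching rules above."""

    def __init__(self):
        self.sessions = {}   # task_id -> {"history": [...], "summary": str, "updated": float}
        self.current = None

    def switch(self, task_id: str) -> dict:
        if task_id == self.current:              # same task: keep the long conversation
            return self.sessions[task_id]
        session = self.sessions.setdefault(      # new task: create; old task: resume
            task_id, {"history": [], "summary": "", "updated": time.time()}
        )
        self.current = task_id
        return session

    def recover_context(self, task_id: str, max_fragments: int = 5) -> str:
        """Summary plus the most recent history fragments, for quick state restoration."""
        s = self.sessions.get(task_id, {"history": [], "summary": ""})
        return "\n".join([s["summary"], *s["history"][-max_fragments:]]).strip()
```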
7. AI CLI Priority
| Priority | Tool | Purpose | Fallback |
|---|---|---|---|
| 1 | gemini | Primary Q&A/Search | Auto-switch to 2 |
| 2 | cursor-agent | Code tasks | Auto-switch to 3 |
| 3 | claude-code | Emergency fallback | Error and notify user |
8. Best Practices
- Atomic Operations: Execute single-intent tasks to accurately transfer "last successful state" during fallback.
- Shared Context: When switching tools, always pass the `git diff` or the latest `summary.md` to the takeover tool.
- Protect Credentials: Never leak API keys from environment variables into logs or AI prompts.
- Verification is King: Always verify with local tools such as `npm test` or `ruff`, regardless of which AI tool produced the code.
- Regular Maintenance: Run updates monthly to keep all CLI tools on their latest versions.
9. Available Commands
- `ai-cli-orchestrator init`: Interactively configure the toolchain and priorities.
- `ai-cli-orchestrator run "<task>"`: Execute a task according to the strategy and manage its lifecycle.
- `ai-cli-orchestrator status`: View an availability report for all AI services.
- `ai-cli-orchestrator session switch <id>`: Manually migrate data between AI sessions.
10. Extensibility
Support integrating new AI CLIs by writing simple adapters. Just provide:
- `detect()`: How to find the tool.
- `execute(prompt, context)`: How to call it and capture output.
- `parse_error()`: How to parse its tool-specific error types.
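A minimal adapter sketch for this contract. The `kimi` CLI matches the custom-tool example from Section 3.2, but its `-p` prompt flag is an assumption:

```python
from typing import Protocol

class CLIAdapter(Protocol):
    """The three-method adapter contract from the extensibility section."""
    def detect(self) -> bool: ...
    def execute(self, prompt: str, context: str) -> str: ...
    def parse_error(self, stderr: str) -> str: ...

class KimiAdapter:
    """Example adapter for a hypothetical 'kimi' CLI."""

    def detect(self) -> bool:
        import shutil
        return shutil.which("kimi") is not None

    def execute(self, prompt: str, context: str) -> str:
        import subprocess
        # The '-p' prompt flag is an assumption about the kimi CLI.
        out = subprocess.run(["kimi", "-p", f"{context}\n{prompt}"],
                             capture_output=True, text=True)
        return out.stdout

    def parse_error(self, stderr: str) -> str:
        return "rate_limit" if "rate limit" in stderr.lower() else "unknown"
```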
11. Security & Credentials
Why We Need to Read Config Files
This skill requires reading shell and project configuration files to:
- Scan for installed AI CLI tools in PATH
- Verify API keys/credentials are valid
- Read project-specific AI configs (`.ai-config.yaml`, `.env`)
Credential Protection
- Local Processing Only: All credential checks happen locally on your machine
- No Data Exfiltration: Credentials are never sent to external servers
- Minimal Access: Only reads necessary config files, never writes or modifies them
- Sandboxed Execution: AI CLI tools run in isolated processes
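One way to enforce the "credentials never reach logs or prompts" guarantee is to redact known secret values before any text leaves the orchestrator; the suffix list below is an assumption about naming conventions:

```python
import os

# Environment variable name suffixes treated as secrets (an assumption).
SECRET_SUFFIXES = ("_API_KEY", "_TOKEN", "_SECRET")

def redact(text: str) -> str:
    """Mask credential values from the environment before text reaches logs or prompts."""
    for name, value in os.environ.items():
        if name.endswith(SECRET_SUFFIXES) and value:
            text = text.replace(value, f"<{name}:redacted>")
    return text
```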
Best Practices
- Always verify which AI CLIs have access to your credentials
- Use environment-specific API keys (dev vs production)
- Regularly audit installed AI CLI tools
12. Version History
- v2.0.0 (2026-03-16) - Major update: initialization config, execution strategy, session management, automatic fallback