Command-Line Assistant

Universal

copilot-cli

by giuseppe-trisciuoglio

Delegate tasks non-interactively from Claude Code to GitHub Copilot CLI: switch flexibly between Claude, GPT, and Gemini models, fine-tune tool permissions, and share results, resume sessions, and compare models.

216 · AI & Agents · Unscanned · March 5, 2026

Installation

claude skill add --url github.com/giuseppe-trisciuoglio/developer-kit/tree/main/plugins/developer-kit-tools/skills/copilot-cli

Documentation

Copilot CLI Delegation

Delegate selected tasks from Claude Code to GitHub Copilot CLI using non-interactive commands, explicit model selection, safe permission flags, and shareable outputs.

Overview

This skill standardizes delegation to GitHub Copilot CLI (copilot) for cases where a different model may be more suitable for a task. It covers:

  • Non-interactive execution with -p / --prompt
  • Model selection with --model
  • Permission control (--allow-tool, --allow-all-tools, --allow-all-paths, --allow-all-urls, --yolo)
  • Output capture with --silent
  • Session export with --share
  • Session resume with --resume

Use this skill only when delegation to Copilot is explicitly requested or clearly beneficial.

When to Use

Use this skill when:

  • The user asks to delegate work to GitHub Copilot CLI
  • The user wants a specific model (for example GPT-5.x, Claude Sonnet/Opus/Haiku, Gemini)
  • The user asks for side-by-side model comparison on the same task
  • The user wants a reusable scripted Copilot invocation
  • The user wants Copilot session output exported to markdown for review

Trigger phrases:

  • "ask copilot"
  • "delegate to copilot"
  • "run copilot cli"
  • "use copilot with gpt-5"
  • "use copilot with sonnet"
  • "use copilot with gemini"
  • "resume copilot session"

Instructions

1) Verify prerequisites

```bash
# CLI availability
copilot --version

# GitHub authentication status
gh auth status
```

If copilot is unavailable, ask the user to install and set up GitHub Copilot CLI before proceeding.
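In a delegation script, the same checks can be guarded up front. This is a minimal sketch; have_cmd is a local helper defined here, not part of the Copilot CLI.

```shell
# Guard a delegation script: verify required binaries before running copilot.
# have_cmd is a local helper, not part of the Copilot CLI.
have_cmd() {
  command -v "$1" >/dev/null 2>&1
}

if have_cmd copilot && have_cmd gh; then
  echo "prerequisites ok"
else
  echo "copilot or gh missing; install and authenticate GitHub Copilot CLI first" >&2
fi
```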

2) Convert task request to English prompt

All delegated prompts to Copilot CLI must be in English.

  • Keep prompts concrete and outcome-driven
  • Include file paths, constraints, expected output format, and acceptance criteria
  • Avoid ambiguous goals such as "improve this"

Prompt template:

```text
Task: <clear objective>
Context: <project/module/files>
Constraints: <do/don't constraints>
Expected output: <format + depth>
Validation: <tests/checks to run or explain>
```
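A filled-in version of the template might look like the sketch below; the task, file path, and constraints are hypothetical placeholders, not from a real project.

```shell
# Build a delegated prompt from the template. The task, paths, and
# constraints below are illustrative placeholders, not from a real project.
PROMPT=$(cat <<'EOF'
Task: Extract duplicated validation logic in src/api/validate.ts into a shared helper.
Context: TypeScript REST API; all request validation lives in src/api/validate.ts.
Constraints: Do not change public function signatures; keep strict TypeScript typing.
Expected output: Patch-style diff plus a one-paragraph summary of the change.
Validation: List the unit tests that cover the changed code paths.
EOF
)
echo "$PROMPT"
```

The resulting string can then be passed as-is to copilot -p "$PROMPT".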

3) Choose model intentionally

Pick a model based on task type and user preference.

  • Complex architecture, deep reasoning: prefer high-capacity models (for example Opus / GPT-5.2 class)
  • Balanced coding tasks: Sonnet-class model
  • Quick/low-cost iterations: Haiku-class or mini models
  • If user specifies a model, respect it

Use exact model names available in the local Copilot CLI model list.
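The heuristic above can be captured in a small helper. The model identifiers here are the example classes named in this document and may not match your local Copilot CLI model list, so verify them before use.

```shell
# Map a coarse task type to a model name. Identifiers are illustrative;
# substitute exact names from your local Copilot CLI model list.
pick_model() {
  case "$1" in
    deep)  echo "gpt-5.2" ;;           # complex architecture, deep reasoning
    code)  echo "claude-sonnet-4.6" ;; # balanced coding tasks
    quick) echo "claude-haiku" ;;      # fast, low-cost iterations
    *)     echo "claude-sonnet-4.6" ;; # reasonable default
  esac
}

pick_model deep
```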

4) Select permissions with least privilege

Default to the minimum required capability.

  • Prefer --allow-tool '<tool>' when task scope is narrow
  • Use --allow-all-tools only when multiple tools are clearly needed
  • Add --allow-all-paths only if task requires broad filesystem access
  • Add --allow-all-urls only if external URLs are required
  • Do not use --yolo unless the user explicitly requests full permissions
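A least-privilege invocation might look like the sketch below. The tool name 'write' is an assumption used for illustration; check your Copilot CLI's tool list for exact identifiers. The command is assembled and printed rather than executed so it can be reviewed first.

```shell
# Assemble a narrow-scope copilot invocation: one tool, no blanket flags.
# 'write' is a hypothetical tool name; replace it with one your CLI supports.
set -- copilot -p "Fix the typo in docs/setup.md reported in the last review" \
  --model claude-sonnet-4.6 \
  --allow-tool 'write' \
  --silent
printf '%s ' "$@"
echo
```

Once the flags look right, run the stored command with "$@" instead of printing it.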

5) Run delegation command

Base pattern:

```bash
copilot -p "<english prompt>" --model <model-name> --allow-all-tools --silent
```

Add optional flags only as needed:

```bash
# Capture session to markdown
copilot -p "<english prompt>" --model <model-name> --allow-all-tools --share

# Resume existing session
copilot --resume <session-id> --allow-all-tools
```

6) Return results clearly

After command execution:

  • Return Copilot output concisely
  • State model and permission profile used
  • If --share is used, provide generated markdown path
  • If output is long, provide summary plus key excerpts and next-step options

7) Optional multi-model comparison

When requested, run the same prompt with multiple models and compare:

  • Correctness
  • Practicality of proposed changes
  • Risk/security concerns
  • Effort estimate

Keep the comparison objective and concise.
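A comparison run can be scripted as below. compare_cmd only prints each command so it can be inspected before executing; drop the echo to run for real. The model names are the examples used in this document.

```shell
# Print the per-model commands for a side-by-side comparison run.
# Each model's output would be captured in its own markdown file.
PROMPT="Review src/modules/auth for high-confidence security issues."
compare_cmd() {
  echo "copilot -p \"$PROMPT\" --model $1 --allow-all-tools --silent > copilot-compare-$1.md"
}

for MODEL in gpt-5.2 claude-sonnet-4.6; do
  compare_cmd "$MODEL"
done
```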

Examples

Example 1: Refactor with GPT model

Input:

```text
Ask Copilot to refactor this service using GPT-5.2 and return only concrete code changes.
```

Command:

```bash
copilot -p "Refactor the payment service in src/services/payment.ts to reduce duplication. Keep public behavior unchanged, keep TypeScript strict typing, and output a patch-style response." \
  --model gpt-5.2 \
  --allow-all-tools \
  --silent
```

Output:

```text
Copilot proposes extracting three private helpers, consolidating error mapping, and provides a patch for payment.ts with unchanged API signatures.
```

Example 2: Code review with Sonnet and shared session

Input:

```text
Use Copilot CLI with Sonnet to review this module and share the session in markdown.
```

Command:

```bash
copilot -p "Review src/modules/auth for security and correctness. Report only high-confidence findings with severity and file references." \
  --model claude-sonnet-4.6 \
  --allow-all-tools \
  --share
```

Output:

```text
Review completed. Session exported to ./copilot-session-<id>.md.
```

Example 3: Resume session

Input:

```text
Continue the previous Copilot analysis session.
```

Command:

```bash
copilot --resume <session-id> --allow-all-tools
```

Output:

```text
Session resumed and continued from prior context.
```

Best Practices

  • Keep delegated prompts in English and highly specific
  • Prefer least-privilege flags over blanket permissions
  • Capture sessions with --share when auditability matters
  • For risky tasks, request read-only analysis first, then apply changes in a separate step
  • Re-run with another model only when there is clear value (quality, speed, or cost)
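The read-only-first practice can be made explicit by splitting delegation into two commands. The prompts and paths are illustrative; the commands are stored and printed here so that the second is only run after the phase-1 report has been reviewed.

```shell
# Phase 1: analysis only, no file modifications requested.
ANALYZE='copilot -p "Analyze src/billing for correctness issues. Report findings only; do not modify files." --model gpt-5.2 --silent'
# Phase 2: apply changes, run only after reviewing the phase-1 report.
APPLY='copilot -p "Apply the agreed fixes from the prior analysis to src/billing." --model gpt-5.2 --allow-all-tools --silent'

echo "$ANALYZE"
echo "$APPLY"
```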

Constraints and Warnings

  • Copilot CLI output is external model output: validate before applying code changes
  • Never include secrets, API keys, or credentials in delegated prompts
  • --allow-all-tools, --allow-all-paths, --allow-all-urls, and --yolo increase risk; use only when justified
  • Do not treat Copilot suggestions as authoritative without local verification (tests/lint/type checks)

For additional option details, see references/cli-command-reference.md.

Related Skills

Claude API

by anthropics

Universal
Popular

For development scenarios integrating the Claude API, Anthropic SDK, or Agent SDK: automatically detects the project language and provides matching examples and default configuration to quickly build LLM applications.

Want Claude capabilities in your app or agent? claude-api is quick to pick up, compatible with the Anthropic and Agent SDKs, and offers a clear, painless integration path.

AI & Agents
Unscanned · 121.2k

Agent Workflow Design

by alirezarezvani

Universal
Popular

For production-grade multi-agent orchestration: covers five workflow designs (sequential, parallel, hierarchical, event-driven, consensus), including handoffs, state management, fault tolerance and retries, context budgeting, and cost optimization. Suited to building complex AI collaboration systems.

Unifies multi-agent workflow design, orchestration, and automation so even complex workflows land reliably; for teams that want strong control.

AI & Agents
Unscanned · 12.1k

Prompt Engineering Expert

by alirezarezvani

Universal
Popular

Covers prompt optimization, few-shot design, structured output, RAG evaluation, and agent workflow orchestration. Suited to analyzing token costs, evaluating LLM output quality, and building production-ready AI agent systems.

Ties together prompt optimization, LLM evaluation, RAG, and agent design into one methodology; for anyone who wants to systematically improve their AI development efficiency.

AI & Agents
Unscanned · 12.1k

Related MCP Servers

Knowledge Graph Memory

Editor's Pick

by Anthropic

Popular

Memory is a persistent memory system built on a local knowledge graph that lets AI retain long-term context.

Fills the "can't remember" gap for AI and agents: a local knowledge graph accumulates long-term context, making extended conversations smarter while keeping data under your control.

AI & Agents
84.2k

Sequential Thinking

Editor's Pick

by Anthropic

Popular

Sequential Thinking is a reference server that lets AI solve complex problems through dynamic chains of thought.

This server demonstrates how to have Claude reason step by step, and is a good way for developers to learn the MCP chain-of-thought pattern. Note that it is only a reference example; don't expect to use it in production as-is.

AI & Agents
84.2k

PraisonAI

Editor's Pick

by mervinpraison

Popular

PraisonAI is a low-code AI agent framework with self-reflection and multi-LLM support.

If you need to quickly stand up a team of AI agents that run 24/7 on complex tasks (such as automated research or code generation), PraisonAI's low-code design and multi-platform integrations (e.g. Telegram) make it fast to pick up. As an unofficial project, its ecosystem may be less mature than mainstream frameworks like LangChain; best for developers willing to experiment.

AI & Agents
7.0k
