Shell GPT
by ckchzh
A command-line productivity tool powered by AI large language models such as GPT-5. Tags: shell gpt, python, chatgpt, cheat-sheet, cli, commands.
Installation
claude skill add --url github.com/openclaw/skills/tree/main/skills/ckchzh/shell-ai
Documentation
Shell AI
Terminal-first AI toolkit for configuring, benchmarking, comparing, prompting, evaluating, and fine-tuning AI models — all from the command line.
Why Shell AI?
- Works entirely offline — your data never leaves your machine
- Full AI workflow: configure → prompt → evaluate → benchmark → compare → optimize
- Fine-tuning tracking, cost analysis, and usage monitoring built in
- Export to JSON, CSV, or plain text anytime
- Automatic history and activity logging with timestamps
Getting Started
```shell
# See all available commands
shell-ai help

# Check current health status
shell-ai status

# View summary statistics
shell-ai stats

# Show recent activity
shell-ai recent
```
Commands
| Command | What it does |
|---|---|
| `shell-ai configure <input>` | Configure AI model settings (or view recent configs with no args) |
| `shell-ai benchmark <input>` | Benchmark model performance (or view recent benchmarks) |
| `shell-ai compare <input>` | Compare models or outputs side-by-side (or view recent comparisons) |
| `shell-ai prompt <input>` | Store and manage prompts (or view recent prompts) |
| `shell-ai evaluate <input>` | Evaluate model outputs for quality (or view recent evaluations) |
| `shell-ai fine-tune <input>` | Track fine-tuning jobs and parameters (or view recent fine-tunes) |
| `shell-ai analyze <input>` | Analyze model behavior or outputs (or view recent analyses) |
| `shell-ai cost <input>` | Track API costs and token usage (or view recent cost entries) |
| `shell-ai usage <input>` | Monitor usage patterns and quotas (or view recent usage logs) |
| `shell-ai optimize <input>` | Record optimization strategies (or view recent optimizations) |
| `shell-ai test <input>` | Log test runs and results (or view recent tests) |
| `shell-ai report <input>` | Generate reports on AI activity (or view recent reports) |
| `shell-ai stats` | Show summary statistics across all data categories |
| `shell-ai export <fmt>` | Export all data in a format: json, csv, or txt |
| `shell-ai search <term>` | Search across all log entries for a keyword |
| `shell-ai recent` | Show the 20 most recent activity entries |
| `shell-ai status` | Health check: version, disk usage, entry counts |
| `shell-ai help` | Show the full help message |
| `shell-ai version` | Print the current version (v2.0.0) |
Each AI command works in two modes:
- With arguments: saves the input with a timestamp to `<command>.log` and logs it to history
- Without arguments: displays the 20 most recent entries for that command
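As a sketch of this dual-mode pattern, here is a hypothetical helper that mimics the documented behavior (the function name `ai_command` and its internals are illustrative, not shell-ai's actual implementation):

```shell
# Minimal sketch of the "with args: log / without args: show recent" pattern.
ai_command() {
  local cmd=$1; shift
  local dir="${SHELL_AI_DIR:-$HOME/.local/share/shell-ai}"
  mkdir -p "$dir"
  if (( $# > 0 )); then
    # With arguments: append a timestamped, pipe-delimited entry to <command>.log
    printf '%s|%s\n' "$(date '+%Y-%m-%d %H:%M')" "$*" >> "$dir/$cmd.log"
  else
    # Without arguments: show the 20 most recent entries for that command
    tail -n 20 "$dir/$cmd.log" 2>/dev/null
  fi
}
```

For example, `ai_command prompt "Summarize this"` would store an entry, while `ai_command prompt` lists recent ones.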
Data Storage
All data is stored locally at `~/.local/share/shell-ai/`:
- `configure.log`, `benchmark.log`, `prompt.log`, etc. — one log file per command
- `history.log` — unified activity log with timestamps
- `export.json`, `export.csv`, `export.txt` — generated export files
Data format: each entry is stored as `YYYY-MM-DD HH:MM|<value>` (pipe-delimited).
Set the `SHELL_AI_DIR` environment variable to change the data directory.
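Because entries are plain pipe-delimited text, they can be split with shell parameter expansion alone (the sample entry below is fabricated for illustration):

```shell
# Split a log entry of the documented form "YYYY-MM-DD HH:MM|<value>"
entry='2024-01-15 09:30|model=gpt-4 temperature=0.7'
timestamp=${entry%%|*}   # text before the first pipe
value=${entry#*|}        # text after the first pipe
echo "$timestamp"        # 2024-01-15 09:30
echo "$value"            # model=gpt-4 temperature=0.7
```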
Requirements
- Bash 4+ (uses `set -euo pipefail`)
- Standard UNIX utilities: `wc`, `du`, `grep`, `tail`, `sed`, `date`, `cat`, `basename`
- No external dependencies or network access required
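A quick way to confirm the Bash requirement before installing, using the standard `BASH_VERSINFO` array (nothing shell-ai-specific):

```shell
# BASH_VERSINFO[0] holds the running shell's major version
if (( BASH_VERSINFO[0] >= 4 )); then
  echo "Bash ${BASH_VERSION} is new enough"
else
  echo "shell-ai needs Bash 4+" >&2
fi
```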
When to Use
- Configuring AI models — use `configure` to save model parameters, API key references, and default settings
- Benchmarking and comparing models — run `benchmark` and `compare` to track performance across different models or prompts
- Managing prompts and evaluations — store prompts with `prompt`, then evaluate output quality with `evaluate`
- Tracking costs and usage — monitor API spend with `cost` and usage patterns with `usage` to stay within budget
- Optimizing and fine-tuning — log fine-tuning experiments with `fine-tune` and optimization strategies with `optimize`
Examples
```shell
# Configure a model
shell-ai configure "model=gpt-4 temperature=0.7 max_tokens=2048"

# Store and evaluate a prompt
shell-ai prompt "Summarize the following article in 3 bullet points"
shell-ai evaluate "gpt-4 summary: accuracy=9/10 coherence=8/10"

# Benchmark and compare (single quotes keep the $ in prices literal)
shell-ai benchmark 'gpt-4 latency=1.2s tokens/sec=45 cost=$0.03'
shell-ai compare "gpt-4 vs claude-3: gpt-4 faster, claude more detailed"

# Track costs and fine-tuning
shell-ai cost '2024-01 total: $47.20 (gpt-4: $32, claude: $15.20)'
shell-ai fine-tune "job-abc123: 500 samples, 3 epochs, loss=0.42"

# Export everything as CSV, then search
shell-ai export csv
shell-ai search "gpt-4"

# Check overall health
shell-ai status
shell-ai stats
```
Output
All commands write human-readable output to stdout. Redirect to a file for scripting:

```shell
shell-ai stats > report.txt
shell-ai export json
```
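Since the on-disk logs are plain text, they also compose directly with standard tools. A sketch over fabricated sample entries (the directory and contents are illustrative only):

```shell
dir=$(mktemp -d)   # stand-in for ~/.local/share/shell-ai
printf '2024-01-01 10:00|gpt-4 latency=1.2s\n'    >  "$dir/benchmark.log"
printf '2024-01-02 11:00|claude-3 latency=0.9s\n' >> "$dir/benchmark.log"

# Roughly what `shell-ai search` does: grep across every per-command log
grep -h 'gpt-4' "$dir"/*.log
```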
Powered by BytesAgain | bytesagain.com | hello@bytesagain.com