Disk Manager

disk

by BytesAgain

Monitor disk usage, find space hogs, and get cleanup suggestions. Use when checking space, finding large files, monitoring partitions.

4.2k · Productivity & Workflow · Not scanned · March 23, 2026

Installation

claude skill add --url github.com/openclaw/skills/tree/main/skills/bytesagain1/disk

Documentation

Disk

A sysops toolkit for logging and reviewing disk-related operations — scans, monitoring readings, reports, alerts, usage snapshots, health checks, fixes, cleanups, backups, restores, benchmarks, and comparisons — all from the command line with full history tracking.

Commands

Command                   Description
disk scan <input>         Record and review disk scan entries (run without args to see recent)
disk monitor <input>      Record and review monitoring entries
disk report <input>       Record and review report entries
disk alert <input>        Record and review alert entries
disk top <input>          Record and review top-usage entries
disk usage <input>        Record and review usage entries
disk check <input>        Record and review health check entries
disk fix <input>          Record and review fix entries
disk cleanup <input>      Record and review cleanup entries
disk backup <input>       Record and review backup entries
disk restore <input>      Record and review restore entries
disk log <input>          Record and review log entries
disk benchmark <input>    Record and review benchmark entries
disk compare <input>      Record and review comparison entries
disk stats                Show summary statistics across all log files
disk export <fmt>         Export all data in JSON, CSV, or TXT format
disk search <term>        Search across all logged entries
disk recent               Show the 20 most recent activity entries
disk status               Health check — version, data dir, entry count, disk usage
disk help                 Show usage info and all available commands
disk version              Print version string

Each data command (scan, monitor, report, etc.) works in two modes:

  • With arguments: Logs the input with a timestamp and saves to the corresponding .log file
  • Without arguments: Displays the 20 most recent entries from that command's log

Data Storage

All data is stored locally in ~/.local/share/disk/. Each command writes to its own log file (e.g., scan.log, monitor.log, cleanup.log). A unified history.log tracks all activity across commands with timestamps.

  • Log format: YYYY-MM-DD HH:MM|<input>
  • History format: MM-DD HH:MM <command>: <input>
  • No external database or network access required
  • Set DISK_DIR environment variable to change data directory (default: ~/.local/share/disk/)
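Given these two formats, a single data-command invocation reduces in effect to two appends. The following sketch reproduces both lines; it is illustrative only (the variable names are hypothetical, not taken from the actual script):

```shell
#!/usr/bin/env bash
# Sketch: what logging one "disk scan" entry amounts to.
DISK_DIR="${DISK_DIR:-$HOME/.local/share/disk}"
mkdir -p "$DISK_DIR"

cmd="scan"
input="/dev/sda1: 78% used, 45GB free"

# Per-command log: YYYY-MM-DD HH:MM|<input>
printf '%s|%s\n' "$(date '+%Y-%m-%d %H:%M')" "$input" >> "$DISK_DIR/$cmd.log"

# Unified history: MM-DD HH:MM <command>: <input>
printf '%s %s: %s\n' "$(date '+%m-%d %H:%M')" "$cmd" "$input" >> "$DISK_DIR/history.log"
```

Because both files are plain line-oriented text, `grep`, `tail`, and `wc -l` are all it takes to implement search, recent, and stats on top of them.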

Requirements

  • Bash 4+ (uses set -euo pipefail)
  • Standard POSIX utilities: date, wc, du, head, tail, grep, basename, cat
  • No root privileges needed
  • No API keys or external dependencies

When to Use

  1. Tracking disk usage over time — Use disk usage and disk monitor to log periodic disk space readings, building a historical record you can search and export later
  2. Recording cleanup and maintenance actions — Use disk cleanup and disk fix to keep a timestamped log of what was cleaned, freed, or repaired on which servers
  3. Documenting backup and restore operations — Use disk backup and disk restore to maintain an audit trail of when backups were made and restores performed
  4. Benchmarking and comparing disk performance — Use disk benchmark and disk compare to log I/O test results across different drives or configurations
  5. Generating exportable reports — Use disk export json to produce a structured file of all logged activity for capacity planning, compliance, or team handoff
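For item 1, the reading itself has to come from somewhere; a common pattern is to build a one-line summary from `df` and feed it to `disk usage` (the `awk` field extraction below is a hypothetical sketch, not part of the tool):

```shell
# Build a one-line usage summary for the root filesystem,
# e.g. "/dev/sda1: 78% used, 12345678 blocks free".
msg="$(df -P / | awk 'NR==2 {print $1": "$5" used, "$4" blocks free"}')"
echo "$msg"
# Then log it (requires the disk CLI on PATH):
#   disk usage "$msg"
```

Dropped into a cron entry, this turns `disk usage` into a searchable, exportable time series of space readings.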

Examples

Log a disk scan and review history

bash
# Record a scan result
disk scan "/dev/sda1: 78% used, 45GB free"

# View recent scan entries
disk scan

Monitor and alert workflow

bash
# Log a monitoring observation
disk monitor "Server-01 /var at 92% — approaching threshold"

# Log an alert
disk alert "CRITICAL: /data partition at 98% on prod-db-03"

# Search for all entries mentioning a server
disk search "prod-db-03"

Cleanup and fix tracking

bash
# Log a cleanup action
disk cleanup "Removed 12GB of old Docker images on build-server"

# Log a fix
disk fix "Extended /var/log LVM volume by 10GB"

# View recent activity
disk recent

Export and statistics

bash
# Summary stats across all log files
disk stats

# Export everything as JSON
disk export json

# Export as CSV for spreadsheet analysis
disk export csv

# Health check
disk status

Backup and restore documentation

bash
# Log a backup
disk backup "Full backup of /home to s3://backups/2025-03-18"

# Log a restore
disk restore "Restored /etc/nginx from backup-2025-03-15.tar.gz"

# Log a benchmark
disk benchmark "fio sequential read: 520 MB/s on /dev/nvme0n1"

How It Works

Disk uses a simple case-dispatch architecture in a single Bash script. Each command maps to a log file under ~/.local/share/disk/. When called with arguments, the input is appended with a timestamp. When called without arguments, the last 20 lines of that log are displayed. The stats command aggregates counts across all logs, export serializes everything into your chosen format, and search greps across all log files for a given term.
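The dispatch pattern described above can be sketched as follows. This is not the actual script — the function and variable names are hypothetical — but the command names and the two-mode behavior match the documentation:

```shell
#!/usr/bin/env bash
# Sketch of the case-dispatch architecture: one function handles
# every data command, in log mode (with args) or review mode (without).
set -euo pipefail
DISK_DIR="${DISK_DIR:-$HOME/.local/share/disk}"
mkdir -p "$DISK_DIR"

log_or_show() {   # $1 = command name; remaining args = entry text
  local file="$DISK_DIR/$1.log"
  shift
  if [ "$#" -gt 0 ]; then
    # Log mode: append a timestamped entry.
    printf '%s|%s\n' "$(date '+%Y-%m-%d %H:%M')" "$*" >> "$file"
  else
    # Review mode: show the 20 most recent entries.
    if [ -f "$file" ]; then tail -n 20 "$file"; fi
  fi
}

case "${1:-help}" in
  scan|monitor|report|alert|top|usage|check|fix|cleanup|backup|restore|log|benchmark|compare)
    cmd="$1"; shift
    log_or_show "$cmd" "$@" ;;
  *)
    echo "usage: disk <command> [input]" ;;
esac
```

Keeping every command behind one generic handler is what makes the tool so uniform: adding a new data command is one more word in the `case` pattern.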

Support


Powered by BytesAgain | bytesagain.com | hello@bytesagain.com
