LineCount

by BytesAgain

Count source lines by language, exclude comments, and compare codebase sizes. Use when measuring codebase size, comparing projects, reporting LOC stats.

3.7k · Coding & Debugging · Not scanned · March 23, 2026

Installation

claude skill add --url github.com/openclaw/skills/tree/main/skills/bytesagain3/linecount

Documentation

LineCount

A data toolkit for ingesting, transforming, querying, filtering, aggregating, and managing structured data entries. Each command logs timestamped records to local files, supports viewing recent entries, and provides full export/search/stats capabilities.

Commands

Core Data Operations

| Command | Description |
| --- | --- |
| `linecount ingest <input>` | Ingest a new data entry (or view recent ingests with no args) |
| `linecount transform <input>` | Record a transform operation on data |
| `linecount query <input>` | Log a query against stored data |
| `linecount filter <input>` | Apply and record a filter operation |
| `linecount aggregate <input>` | Record an aggregation step |
| `linecount visualize <input>` | Log a visualization task |
| `linecount export <input>` | Log an export operation entry |
| `linecount sample <input>` | Record a sampling operation |
| `linecount schema <input>` | Log a schema definition or change |
| `linecount validate <input>` | Record a validation check |
| `linecount pipeline <input>` | Log a pipeline execution step |
| `linecount profile <input>` | Record a data profiling result |

Utility Commands

| Command | Description |
| --- | --- |
| `linecount stats` | Show summary statistics across all log files |
| `linecount export <fmt>` | Export all data in `json`, `csv`, or `txt` format |
| `linecount search <term>` | Search all entries for a keyword (case-insensitive) |
| `linecount recent` | Show the 20 most recent activity log entries |
| `linecount status` | Health check: version, entry count, disk usage, last activity |
| `linecount help` | Display the full command reference |
| `linecount version` | Print the current version (v2.0.0) |

How It Works

Every core command accepts free-text input. When called with arguments, LineCount:

  1. Timestamps the entry (YYYY-MM-DD HH:MM)
  2. Appends it to the command-specific log file (e.g. ingest.log, transform.log)
  3. Records the action in a central history.log
  4. Reports the saved entry and running total

When called with no arguments, each command displays the 20 most recent entries from its log file.
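The four-step flow above can be sketched in a few lines of Bash. This is an illustrative sketch, not the tool's actual implementation: the function name `log_entry`, the exact messages, and the demo call at the end are all assumptions.

```shell
#!/usr/bin/env bash
set -euo pipefail

DATA_DIR="${HOME}/.local/share/linecount"
mkdir -p "$DATA_DIR"

# Illustrative sketch of the per-command logging flow (not the real script).
log_entry() {
  local cmd="$1" input="${2:-}"
  local ts log="$DATA_DIR/$cmd.log"
  ts="$(date '+%Y-%m-%d %H:%M')"

  if [[ -z "$input" ]]; then
    # No args: show the 20 most recent entries for this command
    tail -n 20 "$log" 2>/dev/null || echo "No entries yet for '$cmd'"
    return
  fi

  # Steps 1-2: timestamp the entry and append it to the command-specific log
  printf '%s|%s\n' "$ts" "$input" >> "$log"
  # Step 3: record the action in the central history log
  printf '%s|%s|%s\n' "$ts" "$cmd" "$input" >> "$DATA_DIR/history.log"
  # Step 4: report the saved entry and running total
  echo "Saved: $input ($(wc -l < "$log") total entries)"
}

log_entry ingest "demo batch: 100 rows"
```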

Data Storage

All data is stored locally in plain-text log files:

```
~/.local/share/linecount/
├── ingest.log        # Ingested data entries
├── transform.log     # Transform operations
├── query.log         # Query records
├── filter.log        # Filter operations
├── aggregate.log     # Aggregation steps
├── visualize.log     # Visualization tasks
├── export.log        # Export operation entries
├── sample.log        # Sampling records
├── schema.log        # Schema definitions
├── validate.log      # Validation checks
├── pipeline.log      # Pipeline execution steps
├── profile.log       # Profiling results
├── history.log       # Central activity log
└── export.{json,csv,txt}  # Exported snapshots
```

Each log uses a pipe-delimited format: `timestamp|value`.
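Because entries follow this `timestamp|value` shape, the logs can be sliced with standard Unix tools. A self-contained sketch on sample data (the entries here are made up for illustration):

```shell
# Build a sample log in the pipe-delimited format described above
LOG=$(mktemp)
printf '2024-03-18 09:15|user_events batch loaded\n' >> "$LOG"
printf '2024-03-18 10:02|normalized timestamps\n'    >> "$LOG"
printf '2024-03-19 08:30|schema check passed\n'      >> "$LOG"

# Values only: everything after the first pipe
cut -d'|' -f2- "$LOG"

# Entries per day: group on the date portion of the timestamp
awk -F'|' '{ split($1, t, " "); c[t[1]]++ }
           END { for (d in c) print d, c[d] }' "$LOG"
```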

Requirements

  • Bash 4.0+ with `set -euo pipefail`
  • Standard Unix utilities: `wc`, `du`, `grep`, `tail`, `date`, `sed`
  • No external dependencies; pure Bash

When to Use

  1. Tracking data pipeline steps — log each ingest, transform, filter, and aggregate as you process data through a multi-step pipeline
  2. Building an audit trail — record every query and validation run so you can trace what happened and when
  3. Profiling and sampling datasets — quickly log schema snapshots, sample outputs, and profiling results for later review
  4. Exporting operational records — dump all logged activity to JSON, CSV, or plain text for reporting or ingestion into other tools
  5. Monitoring data processing health — use status and stats to check entry counts, disk usage, and last-activity timestamps at a glance

Examples

```bash
# Ingest a new data record
linecount ingest "user_events batch 2024-03-18 — 4200 rows loaded"

# Record a transformation step
linecount transform "normalized timestamps to UTC, removed duplicates"

# Log a filter operation
linecount filter "country=US AND age>=18"

# View aggregation history (no args = show recent)
linecount aggregate

# Run a validation and record the result
linecount validate "schema check passed — 0 null columns"

# Search all logs for a keyword
linecount search "duplicates"

# Export everything to CSV
linecount export csv

# Check overall health
linecount status

# View summary stats across all log types
linecount stats
```

Configuration

To change the storage location, set the `DATA_DIR` variable in the script or modify the default path. Default: `~/.local/share/linecount/`
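If editing the script is undesirable, an environment-variable fallback is one common pattern. The `LINECOUNT_DATA_DIR` name below is an assumption for illustration, not a documented option of this tool:

```shell
# Hypothetical override: honor LINECOUNT_DATA_DIR if set, else use the default
DATA_DIR="${LINECOUNT_DATA_DIR:-$HOME/.local/share/linecount}"
mkdir -p "$DATA_DIR"
echo "Logging to: $DATA_DIR"
```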


Powered by BytesAgain | bytesagain.com | hello@bytesagain.com
