# Onchain Analyzer

by BytesAgain

Analyze wallet on-chain activity with transaction history and behavior profiling. Use when investigating wallets, tracing transfers, or profiling activity.
## Installation

```shell
claude skill add --url github.com/openclaw/skills/tree/main/skills/bytesagain1/onchain-analyzer
```

## Documentation
An AI and prompt engineering assistant CLI. Despite the name, this tool is focused on helping you craft, optimize, and evaluate prompts for large language models. It provides commands for generating prompts, building prompt chains, comparing AI models, estimating token costs, and following safety guidelines.
All operations are logged with timestamps for auditing and stored locally in flat files.
## Commands

| Command | Description |
|---|---|
| `onchain-analyzer prompt <role> [task] [format]` | Generate a structured prompt with role, task, and output format |
| `onchain-analyzer system <role>` | Generate a system prompt for a given expert role |
| `onchain-analyzer chain` | Display a 4-step prompt chain: Understand → Plan → Execute → Verify |
| `onchain-analyzer template` | List prompt template patterns: Zero-shot, Few-shot, Chain-of-thought, Role-play |
| `onchain-analyzer compare` | Compare major AI models (GPT-4 vs Claude vs Gemini) |
| `onchain-analyzer cost [tokens]` | Estimate cost for a given number of tokens (default: 1000) |
| `onchain-analyzer optimize` | Show prompt optimization tips and best practices |
| `onchain-analyzer evaluate` | Evaluate output quality across accuracy, relevance, completeness, and tone |
| `onchain-analyzer safety` | Display AI safety guidelines (no harmful content, no personal data, cite sources) |
| `onchain-analyzer tools` | List popular AI tools: ChatGPT, Claude, Gemini, Perplexity, Midjourney |
| `onchain-analyzer help` | Show the built-in help message |
| `onchain-analyzer version` | Print the current version |
## Data Storage

All data is stored in the directory defined by the `ONCHAIN_ANALYZER_DIR` environment variable. If not set, it defaults to `~/.local/share/onchain-analyzer/`.

Files created in the data directory:

- `data.log` — Main data log file (currently unused but created on init)
- `history.log` — Audit trail of every command executed, with timestamps
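A minimal logging helper in the spirit of that audit trail might look like the sketch below. The `timestamp | command` line layout and the `log_history` name are assumptions for illustration; the tool's actual entry format is not documented. The demo writes to a throwaway temp directory rather than the real data directory.

```shell
#!/usr/bin/env bash
set -euo pipefail

# For this demo, point the data directory at a throwaway temp dir
# (normally ONCHAIN_ANALYZER_DIR or the XDG default would be used).
DATA_DIR="$(mktemp -d)"

# log_history: append a timestamped entry for the command that was run.
# The "timestamp | command" layout here is an assumption, not documented.
log_history() {
  printf '%s | %s\n' "$(date '+%Y-%m-%d %H:%M:%S')" "$*" >> "$DATA_DIR/history.log"
}

log_history "cost 5000"
cat "$DATA_DIR/history.log"
```

Appending with `>>` keeps the audit trail cumulative across invocations, which matches the description of `history.log` as a log of every command executed.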
## Requirements

- bash 4.0 or later (uses `set -euo pipefail`)
- python3 — used by the `cost` command for token cost calculation (standard library only, no pip packages)
- Standard POSIX utilities — `date`, `cat`, `echo`, `mkdir`
- No external API keys or network access required
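A quick preflight along these lines can confirm the requirements above are met before installing; the script itself is illustrative and not part of the tool.

```shell
#!/usr/bin/env bash
# Preflight: verify bash >= 4.0, python3, and the core POSIX utilities.
set -euo pipefail

# BASH_VERSINFO[0] holds the major version of the running bash.
if (( BASH_VERSINFO[0] < 4 )); then
  echo "bash 4.0+ required (found ${BASH_VERSION})" >&2
  exit 1
fi

for cmd in python3 date cat echo mkdir; do
  command -v "$cmd" >/dev/null 2>&1 || { echo "missing: $cmd" >&2; exit 1; }
done

echo "all requirements satisfied"
```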
## When to Use

- Crafting prompts for LLMs — use `prompt` and `system` to quickly scaffold well-structured prompts with role assignments and task definitions
- Learning prompt engineering patterns — use `template` to see common patterns (zero-shot, few-shot, chain-of-thought, role-play) and `chain` for multi-step reasoning workflows
- Estimating API costs — use `cost` to calculate approximate spend before sending large batches of tokens to an API
- Comparing AI models — use `compare` for a quick reference on how GPT-4, Claude, and Gemini stack up in benchmarks
- Ensuring responsible AI use — use `safety` to review guardrails before deploying prompts in production environments
## Examples

```shell
# Generate a prompt for a data analyst role
onchain-analyzer prompt "data analyst" "summarize sales data" "markdown table"
#=> Role: data analyst
#=> Task: summarize sales data
#=> Format: markdown table

# Create a system prompt for an expert role
onchain-analyzer system "cybersecurity researcher"
#=> You are an expert cybersecurity researcher. Be precise, helpful, and concise.

# View prompt chain methodology
onchain-analyzer chain
#=> Step 1: Understand | Step 2: Plan | Step 3: Execute | Step 4: Verify

# Estimate cost for 5000 tokens
onchain-analyzer cost 5000
#=> Tokens: ~5000 | Cost: ~$0.1500

# List available prompt templates
onchain-analyzer template
#=> 1. Zero-shot | 2. Few-shot | 3. Chain-of-thought | 4. Role-play
```
## Configuration

Set the `ONCHAIN_ANALYZER_DIR` environment variable to change the data directory:

```shell
export ONCHAIN_ANALYZER_DIR="/path/to/custom/dir"
```

If unset, the tool respects `XDG_DATA_HOME` (defaulting to `~/.local/share/onchain-analyzer/`).
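That resolution order can be sketched as below; this is a reconstruction of the described behavior, not the tool's actual source, and `resolve_data_dir` is a hypothetical helper name.

```shell
#!/usr/bin/env bash
set -euo pipefail

# Resolution order: ONCHAIN_ANALYZER_DIR first, then XDG_DATA_HOME,
# then the ~/.local/share default.
resolve_data_dir() {
  if [ -n "${ONCHAIN_ANALYZER_DIR:-}" ]; then
    printf '%s\n' "$ONCHAIN_ANALYZER_DIR"
  else
    printf '%s\n' "${XDG_DATA_HOME:-$HOME/.local/share}/onchain-analyzer"
  fi
}

ONCHAIN_ANALYZER_DIR="/path/to/custom/dir" resolve_data_dir  # prints /path/to/custom/dir
resolve_data_dir  # prints the XDG-derived default when the override is unset
```

The `${XDG_DATA_HOME:-$HOME/.local/share}` expansion is the standard shell idiom for honoring the XDG Base Directory default.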
## How It Works

- On every invocation, the tool ensures the data directory exists (`mkdir -p`)
- The first argument selects the command via a `case` dispatch
- Each command performs its action and appends an entry to `history.log` for auditing
- The `cost` command uses an inline Python snippet to compute `tokens × $0.00003`
- All output goes to stdout for easy piping and redirection
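Putting those pieces together, the dispatch-and-cost flow might look roughly like this. It is a sketch reconstructed from the description above, not the actual script; only the documented `tokens × $0.00003` formula and the 1000-token default are taken from the tool's docs.

```shell
#!/usr/bin/env bash
set -euo pipefail

# The first argument selects the command via a case dispatch.
cmd="${1:-help}"

case "$cmd" in
  cost)
    tokens="${2:-1000}"   # default token count per the docs
    # Inline Python computes tokens x $0.00003, matching the documented formula.
    python3 -c "t = int('$tokens'); print(f'Tokens: ~{t} | Cost: ~\${t * 0.00003:.4f}')"
    ;;
  *)
    echo "usage: onchain-analyzer <command> [args]"
    ;;
esac
```

Invoked as `bash sketch.sh cost 5000`, this prints `Tokens: ~5000 | Cost: ~$0.1500`, matching the Examples section.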
Powered by BytesAgain | bytesagain.com | hello@bytesagain.com