Onchain Analyzer

by BytesAgain

Analyze wallet on-chain activity with transaction history and behavior profiling. Use when investigating wallets, tracing transfers, profiling activity.

3.9k | Data & Storage | Not scanned | March 23, 2026

Installation

claude skill add --url github.com/openclaw/skills/tree/main/skills/bytesagain1/onchain-analyzer

Documentation

Onchain Analyzer

An AI and prompt engineering assistant CLI. Despite the name, this tool is focused on helping you craft, optimize, and evaluate prompts for large language models. It provides commands for generating prompts, building prompt chains, comparing AI models, estimating token costs, and following safety guidelines.

All operations are logged with timestamps for auditing and stored locally in flat files.

Commands

| Command | Description |
| --- | --- |
| onchain-analyzer prompt &lt;role&gt; [task] [format] | Generate a structured prompt with role, task, and output format |
| onchain-analyzer system &lt;role&gt; | Generate a system prompt for a given expert role |
| onchain-analyzer chain | Display a 4-step prompt chain: Understand → Plan → Execute → Verify |
| onchain-analyzer template | List prompt template patterns: Zero-shot, Few-shot, Chain-of-thought, Role-play |
| onchain-analyzer compare | Compare major AI models (GPT-4 vs Claude vs Gemini) |
| onchain-analyzer cost [tokens] | Estimate cost for a given number of tokens (default: 1000) |
| onchain-analyzer optimize | Show prompt optimization tips and best practices |
| onchain-analyzer evaluate | Evaluate output quality across accuracy, relevance, completeness, and tone |
| onchain-analyzer safety | Display AI safety guidelines (no harmful content, no personal data, cite sources) |
| onchain-analyzer tools | List popular AI tools: ChatGPT, Claude, Gemini, Perplexity, Midjourney |
| onchain-analyzer help | Show the built-in help message |
| onchain-analyzer version | Print the current version |

Data Storage

All data is stored in the directory defined by the ONCHAIN_ANALYZER_DIR environment variable. If not set, it defaults to ~/.local/share/onchain-analyzer/.

Files created in the data directory:

  • data.log — Main data log file (currently unused but created on init)
  • history.log — Audit trail of every command executed with timestamps
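The exact entry layout of history.log is not documented; a minimal sketch of how each invocation might be appended, assuming an ISO-8601 timestamp prefix (both the timestamp format and the entry layout are assumptions):

```bash
#!/usr/bin/env bash
set -euo pipefail

# Resolve the data directory as documented: ONCHAIN_ANALYZER_DIR first,
# then XDG_DATA_HOME, then the ~/.local/share default.
DATA_DIR="${ONCHAIN_ANALYZER_DIR:-${XDG_DATA_HOME:-$HOME/.local/share}/onchain-analyzer}"
mkdir -p "$DATA_DIR"

# Append one timestamped audit entry per command (entry format is an assumption).
log_history() {
  printf '%s %s\n' "$(date -u '+%Y-%m-%dT%H:%M:%SZ')" "$*" >> "$DATA_DIR/history.log"
}

log_history "prompt data analyst"
```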

Requirements

  • bash 4.0 or later (uses set -euo pipefail)
  • python3 — used by the cost command for token cost calculation (standard library only, no pip packages)
  • Standard POSIX utilities: date, cat, echo, mkdir
  • No external API keys or network access required
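These requirements can be checked up front; a small preflight sketch (check_requirements is an illustrative name, not part of the tool):

```bash
#!/usr/bin/env bash
set -euo pipefail

# Check the documented requirements: bash 4.0+, python3, and POSIX utilities.
check_requirements() {
  if (( BASH_VERSINFO[0] < 4 )); then
    echo "bash 4.0+ required (found ${BASH_VERSION})" >&2
    return 1
  fi
  local tool
  for tool in python3 date cat mkdir; do
    command -v "$tool" >/dev/null || { echo "missing: $tool" >&2; return 1; }
  done
  echo "requirements ok"
}

check_requirements
```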

When to Use

  1. Crafting prompts for LLMs — Use prompt and system to quickly scaffold well-structured prompts with role assignments and task definitions
  2. Learning prompt engineering patterns — Use template to see common patterns (zero-shot, few-shot, chain-of-thought, role-play) and chain for multi-step reasoning workflows
  3. Estimating API costs — Use cost to calculate approximate spend before sending large batches of tokens to an API
  4. Comparing AI models — Use compare to get a quick reference of how GPT-4, Claude, and Gemini stack up in benchmarks
  5. Ensuring responsible AI use — Use safety to review guardrails before deploying prompts in production environments

Examples

```bash
# Generate a prompt for a data analyst role
onchain-analyzer prompt "data analyst" "summarize sales data" "markdown table"
#=> Role: data analyst
#=>   Task: summarize sales data
#=>   Format: markdown table

# Create a system prompt for an expert role
onchain-analyzer system "cybersecurity researcher"
#=> You are an expert cybersecurity researcher. Be precise, helpful, and concise.

# View prompt chain methodology
onchain-analyzer chain
#=> Step 1: Understand | Step 2: Plan | Step 3: Execute | Step 4: Verify

# Estimate cost for 5000 tokens
onchain-analyzer cost 5000
#=> Tokens: ~5000 | Cost: ~$0.1500

# List available prompt templates
onchain-analyzer template
#=> 1. Zero-shot | 2. Few-shot | 3. Chain-of-thought | 4. Role-play
```

Configuration

Set the ONCHAIN_ANALYZER_DIR environment variable to change the data directory:

```bash
export ONCHAIN_ANALYZER_DIR="/path/to/custom/dir"
```

If unset, the tool respects XDG_DATA_HOME (defaulting to ~/.local/share/onchain-analyzer/).
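That precedence can be written as a single fallback chain; a sketch (resolve_data_dir is an illustrative name, not a function the tool exposes):

```bash
#!/usr/bin/env bash
set -euo pipefail

# Precedence: ONCHAIN_ANALYZER_DIR, then $XDG_DATA_HOME/onchain-analyzer,
# then ~/.local/share/onchain-analyzer.
resolve_data_dir() {
  if [ -n "${ONCHAIN_ANALYZER_DIR:-}" ]; then
    printf '%s\n' "$ONCHAIN_ANALYZER_DIR"
  else
    printf '%s\n' "${XDG_DATA_HOME:-$HOME/.local/share}/onchain-analyzer"
  fi
}

ONCHAIN_ANALYZER_DIR=/path/to/custom/dir resolve_data_dir
#=> /path/to/custom/dir
```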

How It Works

  1. On every invocation, the tool ensures the data directory exists (mkdir -p)
  2. The first argument selects the command via a case dispatch
  3. Each command performs its action and appends an entry to history.log for auditing
  4. The cost command uses an inline Python snippet to compute tokens × $0.00003
  5. All output goes to stdout for easy piping and redirection
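Steps 2 and 4 above can be sketched together as a case dispatch wrapping an inline python3 call (dispatch is an illustrative name; the real script's internals may differ):

```bash
#!/usr/bin/env bash
set -euo pipefail

# Step 2: the first argument selects the command via a case dispatch.
dispatch() {
  local cmd="${1:-help}"
  case "$cmd" in
    cost)
      local tokens="${2:-1000}"
      # Step 4: inline Python computes tokens x $0.00003.
      python3 -c 'import sys; t = int(sys.argv[1]); print(f"Tokens: ~{t} | Cost: ~${t * 0.00003:.4f}")' "$tokens"
      ;;
    chain)
      echo "Step 1: Understand | Step 2: Plan | Step 3: Execute | Step 4: Verify"
      ;;
    *)
      echo "Usage: onchain-analyzer <command> [args]"
      ;;
  esac
}

dispatch cost 5000
#=> Tokens: ~5000 | Cost: ~$0.1500
```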

Powered by BytesAgain | bytesagain.com | hello@bytesagain.com
