llmbooster

by danlct27

A 4-step thinking framework that boosts LLM output quality. Enforces structured reasoning (Plan → Draft → Self-Critique → Refine) to improve low-end LLM responses. No LLM endpoint is needed; the LLM follows the framework itself. Triggered by "detailed analysis", "in-depth analysis", "use booster", or the /booster command.


Installation

claude skill add --url github.com/openclaw/skills/tree/main/skills/danlct27/llmbooster

Documentation

LLMBooster Skill

A Thinking Framework, Not an Automation Tool

LLMBooster is a 4-step thinking framework that improves LLM output quality through structured reasoning. No LLM endpoint is needed; the LLM follows the framework itself.

Core Philosophy

Problem with low-end LLMs: they jump to conclusions, miss details, and lack self-review.

Booster's solution: enforce a structured thinking process.

```
Plan → Draft → Self-Critique → Refine
```

Trigger Conditions

  • User says "use booster", "booster", or "/booster"
  • User requests "detailed analysis", "in-depth analysis", or "help me analyze"
  • User requests "improve quality"
  • User asks for evaluation, comparison, or decision support
  • User requests code review or technical documentation
  • User asks complex questions (lengthy tasks, multi-step problems)

How It Works

The LLM executes the framework itself; no Python calls are needed:

  1. LLM reads prompts/plan.md → creates a structured plan
  2. LLM reads prompts/draft.md → writes a complete draft
  3. LLM reads prompts/self_critique.md → reviews the draft for issues
  4. LLM reads prompts/refine.md → polishes the final output
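The sequential step execution above can be sketched in Python. This is a hypothetical helper, not code shipped with the skill; only the prompts/ layout is taken from the list above:

```python
from pathlib import Path

# Pipeline order of the framework steps (Plan → Draft → Self-Critique → Refine).
STEPS = ["plan", "draft", "self_critique", "refine"]

def load_framework_prompts(prompts_dir: str = "prompts") -> dict:
    """Read each step's prompt template in pipeline order.

    Returns a dict mapping step name -> prompt text, assuming the
    prompts/<step>.md layout described above.
    """
    prompts = {}
    for step in STEPS:
        path = Path(prompts_dir) / f"{step}.md"
        prompts[step] = path.read_text(encoding="utf-8")
    return prompts
```

The dict preserves pipeline order, so iterating over it yields the steps in the sequence the framework prescribes.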

Command Handling

When the user enters a /booster command, execute:

```bash
cd ~/.openclaw/workspace/skills/llmbooster && python3 -c "
from config_loader import ConfigLoader
from state_manager import SkillStateManager
from cli_handler import CLICommandHandler

loader = ConfigLoader()
config = loader.load('config.schema.json')
state_mgr = SkillStateManager(config)
cli = CLICommandHandler(state_mgr)
result = cli.handle('/booster status')
print(result.message)
"
```

CLI Commands

| Command | Description |
|---|---|
| `/booster enable` | Enable LLMBooster |
| `/booster disable` | Disable LLMBooster |
| `/booster status` | Show current status |
| `/booster stats` | Show usage statistics |
| `/booster depth <1-4>` | Set thinking depth |
| `/booster help` | Show help |
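A minimal sketch of how the command table above could be dispatched. This is hypothetical illustration only; in the skill itself, routing is handled by cli_handler.py:

```python
# Hypothetical dispatcher mirroring the command table above.
def handle_command(cmd: str, state: dict) -> str:
    """Parse a /booster command string and mutate `state` accordingly."""
    parts = cmd.split()
    if parts[:1] != ["/booster"]:
        return "Unknown command"
    action = parts[1] if len(parts) > 1 else "help"
    if action == "enable":
        state["enabled"] = True
        return "LLMBooster enabled"
    if action == "disable":
        state["enabled"] = False
        return "LLMBooster disabled"
    if action == "status":
        return "enabled" if state.get("enabled") else "disabled"
    if action == "depth" and len(parts) > 2 and parts[2] in {"1", "2", "3", "4"}:
        state["depth"] = int(parts[2])
        return f"Thinking depth set to {parts[2]}"
    return "Usage: /booster enable|disable|status|stats|depth <1-4>|help"
```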

Thinking Depth

| Depth | Steps | Quality | Speed | Use Case |
|---|---|---|---|---|
| 1 | Plan | ★★☆☆☆ | Fastest | Quick analysis, brainstorming |
| 2 | Plan → Draft | ★★★☆☆ | Fast | General tasks, simple Q&A |
| 3 | + Self-Critique | ★★★★☆ | Medium | Code review, technical docs |
| 4 | Full pipeline | ★★★★★ | Slowest | Important docs, complex analysis |
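The depth setting simply truncates the pipeline: depth N runs the first N steps. A hypothetical sketch of that mapping (names are illustrative, not the skill's actual API):

```python
# The full pipeline, in order; thinking depth N executes the first N steps.
PIPELINE = ["Plan", "Draft", "Self-Critique", "Refine"]

def steps_for_depth(depth: int) -> list:
    """Return the pipeline steps executed at a given thinking depth (1-4)."""
    if not 1 <= depth <= 4:
        raise ValueError("thinking depth must be between 1 and 4")
    return PIPELINE[:depth]
```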

Visual Feedback

When executing, Booster displays:

```
🚀 **Booster Pipeline Started**: Analyzing task...
────────────────────────────────────────
🚀 Booster [█░░░░] Step 1/4: **Plan**
✅ Plan completed (2.3s)

🚀 Booster [██░░░] Step 2/4: **Draft**
✅ Draft completed (5.1s)

🚀 Booster [███░░] Step 3/4: **Self-Critique**
✅ Self-Critique completed (1.8s)

🚀 Booster [████░] Step 4/4: **Refine**
✅ Refine completed (3.2s)
────────────────────────────────────────
✅ **Booster Complete** - 4 steps, 12.4s total
```
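A step line like the ones above could be rendered as follows. This is a hypothetical sketch of what stream_handler.py might do, assuming a 5-cell bar with one cell filled per step:

```python
def progress_bar(filled: int, width: int = 5) -> str:
    """Render a bar like [██░░░] with `filled` solid cells out of `width`."""
    filled = max(0, min(filled, width))
    return "[" + "█" * filled + "░" * (width - filled) + "]"

def step_banner(step: int, total: int, name: str) -> str:
    """Format one step line in the style of the output shown above."""
    return f"🚀 Booster {progress_bar(step)} Step {step}/{total}: **{name}**"
```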

Prompt Templates

All templates live in the prompts/ directory:

  • plan.md - Step 1: Create structured plan
  • draft.md - Step 2: Write complete draft
  • self_critique.md - Step 3: Review and list improvements
  • refine.md - Step 4: Apply improvements

Why It Works

| Low-End LLM Problem | Booster Solution |
|---|---|
| Jumps to conclusions | Plan step forces structured thinking |
| Misses details | Draft step requires complete coverage |
| No self-review | Self-Critique step finds issues |
| Rough output | Refine step polishes the final result |

Usage Statistics

```bash
/booster stats
# 📊 **Booster Statistics**
# ───────────────────────
# Status: enabled
# Thinking Depth: 4
# Tasks Processed: 5
# Last Used: 2026-03-22T09:30:00
```
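A minimal sketch of the state behind `/booster stats`, based only on the fields shown in the sample output above (state_manager.py's actual internals may differ):

```python
import datetime
from dataclasses import dataclass
from typing import Optional

@dataclass
class BoosterState:
    """Hypothetical model of the enable flag, depth, and usage counters."""
    enabled: bool = True
    thinking_depth: int = 4
    tasks_processed: int = 0
    last_used: Optional[str] = None

    def record_task(self) -> None:
        """Bump the task counter and stamp the current time."""
        self.tasks_processed += 1
        self.last_used = datetime.datetime.now().isoformat(timespec="seconds")

    def stats_report(self) -> str:
        """Render a statistics block in the style of `/booster stats`."""
        return (
            "📊 **Booster Statistics**\n"
            f"Status: {'enabled' if self.enabled else 'disabled'}\n"
            f"Thinking Depth: {self.thinking_depth}\n"
            f"Tasks Processed: {self.tasks_processed}\n"
            f"Last Used: {self.last_used}"
        )
```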

Files

| File | Purpose |
|---|---|
| SKILL.md | Skill definition + trigger conditions |
| README.md | Documentation |
| booster.py | Core module + helpers |
| cli_handler.py | CLI command processing |
| state_manager.py | State + statistics |
| stream_handler.py | Visual feedback |
| config_loader.py | Config loading |
| prompts/*.md | Step prompt templates |