VerifiMind PEAS - RefleXion Trinity
A Multi-Agent AI validation methodology based on the X-Z-CS RefleXion Trinity, enabling more ethical and secure application development.
VerifiMind™ PEAS
Prompt Engineering Agents Standardization
A Validation-First Methodology for Ethical and Secure Application Development
Transform your vision into validated, ethical, secure applications through systematic multi-model AI orchestration — from concept to deployment, with human-centered wisdom validation.
Evolution of PEAS: v1.x (2024): Prompt Engineering Application Synthesis (Zenodo DOI — immutable) · v2.x–v4.x (2025): Prompt Engineering & AI Standardization (Genesis Methodology era) · v5.x (2026): Prompt Engineering Agents Standardization — current canonical — AI Council vote 3/4 APPROVE (Apr 10, 2026)
MCP Server: Production Deployed
v0.5.12 "Polar Integration" — 2,389+ verified engagement hours | 1,876+ unique endpoints (IP-based; verified users tracked via EA registration UUID) | 90.7% Value Confirmation Rate | Pioneer Tier ($9/mo, Polar MoR) | Polar Integration (PolarClient + Adapter + Webhook, Standard Webhooks HMAC) | Legal Pages v2.0 (Privacy + T&C with Polar MoR, 14-day refund) | Coordination Layer Phase 1 (3 Pioneer-gated tools) | UUID Tracer (GCP log analytics bridge) | BYOK Anthropic Claude 4 (claude-opus-4-6, claude-sonnet-4-6) | MACP v2.2 "Identity" | 312 tests, 52.76% coverage | 74+ weekly COO reports. Health Check | Changelog | Register as Early Adopter | Pioneer Tier
VerifiMind PEAS is now live and accessible across multiple platforms:
| Platform | Type | Access | Status |
|---|---|---|---|
| GCP Cloud Run | Production API | verifimind.ysenseai.org | ✅ LIVE |
| Official MCP Registry | Registry Listing | registry.modelcontextprotocol.io | ✅ LISTED |
| Smithery.ai | Native MCP | Install for Claude Desktop | ⚠️ SUNSET (zero impact — self-hosted) |
| Landing Page | Showcase | verifimind.io | ✅ LIVE |
| Hugging Face | Interactive Demo | YSenseAI/verifimind-peas | ✅ LIVE |
| MACP Research Assistant | Showcase App | macpresearch.ysenseai.org | ✅ LIVE |
Quick Start
Important: Use the `streamable-http` transport (not `http-sse`) and always include the trailing slash: `/mcp/`.
📖 Full Multi-Client Setup & Troubleshooting Guide
Claude Code (Terminal command — recommended):
claude mcp add -s user verifimind -- npx -y mcp-remote https://verifimind.ysenseai.org/mcp/
Claude Desktop (edit the config file — macOS: `~/Library/Application Support/Claude/claude_desktop_config.json` | Windows: `%APPDATA%\Claude\claude_desktop_config.json`):
```json
{
  "mcpServers": {
    "verifimind": {
      "command": "npx",
      "args": ["-y", "mcp-remote", "https://verifimind.ysenseai.org/mcp/"]
    }
  }
}
```
Cursor / VS Code Copilot (`.cursor/mcp.json` or `.vscode/mcp.json`):
```json
{
  "servers": {
    "verifimind": {
      "url": "https://verifimind.ysenseai.org/mcp/",
      "transport": "streamable-http"
    }
  }
}
```
ChatGPT Codex CLI (`~/.codex/config.toml`):
```toml
[mcp_servers.verifimind]
url = "https://verifimind.ysenseai.org/mcp/"
transport = "streamable_http"
```
⚠️ Codex CLI v0.98.0 has a known bug with streamable-http. See Troubleshooting Guide for workaround.
OpenAI Agents SDK (Python):
```python
from agents.mcp import MCPServerStreamableHttp

server = MCPServerStreamableHttp(name="VerifiMind", params={"url": "https://verifimind.ysenseai.org/mcp/"})
```
Common Mistakes
Based on production log analysis (February 2026), these are the most frequent connection errors new users encounter:
| Mistake | What Happens | Fix |
|---|---|---|
| Visiting the URL in a browser | `406 Not Acceptable` | This is an API, not a website. Use an MCP client (Claude Desktop, Cursor, etc.) |
| Missing trailing slash (`/mcp`) | `405 Method Not Allowed` | Always use `/mcp/` with the trailing slash |
| Using GET instead of POST | `400 Bad Request` | The MCP protocol requires POST requests with a JSON-RPC body |
| Using `http-sse` transport | Connection fails | Use the `streamable-http` transport (not `http-sse`) |
| Connecting to the Smithery proxy | Sunset completed (March 1, 2026) | Use the direct URL: `https://verifimind.ysenseai.org/mcp/` |
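The GET-vs-POST mistake above comes up often enough to be worth a concrete illustration. The sketch below builds the kind of JSON-RPC 2.0 body an MCP client POSTs to the server; the endpoint URL is from the Quick Start, while the `build_initialize_request` helper name and the `protocolVersion`/`clientInfo` values are illustrative assumptions, not values the server mandates.

```python
import json

# The endpoint from the Quick Start above — trailing slash required.
MCP_URL = "https://verifimind.ysenseai.org/mcp/"

def build_initialize_request(request_id: int = 1) -> dict:
    """Build a JSON-RPC 2.0 body for the MCP 'initialize' handshake."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "initialize",
        "params": {
            "protocolVersion": "2025-03-26",   # illustrative version string
            "capabilities": {},
            "clientInfo": {"name": "demo-client", "version": "0.1.0"},
        },
    }

body = json.dumps(build_initialize_request())
# POST this body with Content-Type: application/json to MCP_URL.
# A plain GET (or a browser visit) cannot carry this body, which is
# why it fails as described in the table above.
```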
💡 Quick test: run `curl https://verifimind.ysenseai.org/health` — if you see `"status": "healthy"`, the server is up. Then configure your MCP client using the Quick Start instructions above.
⚠️ Smithery.ai Sunset Complete: Smithery.ai's legacy architecture was sunset on March 1, 2026. If you were previously connecting via `server.smithery.ai`, switch to the direct URL `https://verifimind.ysenseai.org/mcp/`. All Quick Start instructions above already use the direct URL. Zero impact — VerifiMind PEAS has been fully self-hosted on GCP Cloud Run since v0.5.0.
API Keys & BYOK (v0.4.5+)
| Platform | API Key Required | Notes |
|---|---|---|
| GCP Server / MCP Registry | ❌ No (default) | Server-side configured, ready to use |
| GCP Server (BYOK) | ✅ Optional | Pass api_key + llm_provider per tool call to use your own key |
| HuggingFace Demo | ❌ No | Server-side configured |
| Smithery | N/A | Sunset completed (March 1, 2026) — use GCP Server instead |
v0.4.5 BYOK Live — You can now override the default provider on any individual tool call by passing `api_key` and `llm_provider` parameters. The server auto-detects the key format (e.g., `gsk_` → Groq, `sk-ant-` → Anthropic, `sk-` → OpenAI). If no key is provided, the server uses its default Gemini/Groq configuration. Triple-validated by Manus AI (6/6), Claude Code (6/6), and CI (175 tests). PR #55
Supported BYOK Providers: Gemini, Groq, OpenAI, Anthropic, Mistral, Ollama, Perplexity
Get FREE API Keys: Google AI Studio | Groq Console
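The prefix-based auto-detection described above can be sketched in a few lines. The prefix→provider mapping (`gsk_`, `sk-ant-`, `sk-`, `AIza`) is taken from this README; the function name, the match order, and the `"unknown"` fallback are illustrative assumptions about the server's internals.

```python
def detect_provider(api_key: str) -> str:
    """Guess the LLM provider from a BYOK key prefix (sketch)."""
    # Longest-prefix-first: "sk-ant-" must be checked before "sk-",
    # otherwise every Anthropic key would be misread as OpenAI.
    prefixes = [
        ("gsk_", "groq"),
        ("sk-ant-", "anthropic"),
        ("sk-", "openai"),
        ("AIza", "gemini"),
    ]
    for prefix, provider in prefixes:
        if api_key.startswith(prefix):
            return provider
    return "unknown"  # assumed fallback; server would use its default
```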
MCP Tools (10 Total)
The VerifiMind MCP server exposes 10 tools organized into two categories: Core Validation (4 tools) and Template Management (6 tools, added in v0.4.0).
| Tool | Category | Description |
|---|---|---|
| `consult_agent_x` | Core | Innovation & Strategy analysis (Gemini FREE) |
| `consult_agent_z` | Core | Ethics & Safety review with VETO power |
| `consult_agent_cs` | Core | Security & Feasibility validation |
| `run_full_trinity` | Core | Complete X → Z → CS validation pipeline |
| `list_prompt_templates` | Template | List/filter templates by agent, category, tags |
| `get_prompt_template` | Template | Retrieve template by ID with full content |
| `export_prompt_template` | Template | Export to Markdown or JSON format |
| `register_custom_template` | Template | Create custom prompt templates |
| `import_template_from_url` | Template | Import from GitHub Gist or raw URL |
| `get_template_statistics` | Template | Registry statistics and usage data |
Template Library (6 Libraries, 19 Templates)
v0.4.0 introduced the Unified Prompt Template system with pre-built, versioned YAML templates aligned to Genesis Methodology phases.
| Library | Agent | Genesis Phase | Templates |
|---|---|---|---|
| `startup_validation` | X | Phase 1: Conceptualization | 3 |
| `market_research` | X | Phase 1: Conceptualization | 3 |
| `ethics_review` | Z | Phase 2: Critical Scrutiny | 3 |
| `security_audit` | CS | Phase 3: External Validation | 3 |
| `technical_review` | CS | Phase 3: External Validation | 3 |
| `trinity_synthesis` | ALL | Phase 4: Synthesis | 4 |
Templates support custom variables with type validation, export to Markdown/JSON, import from URL, and version control with changelogs. Users can also register custom templates at runtime.
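The "custom variables with type validation" behavior might look roughly like the sketch below. The schema shape (`{name: type_name}`), the function name, and the supported type names are assumptions for illustration — the actual template system's API is documented elsewhere.

```python
def validate_variables(variables: dict, schema: dict) -> list:
    """Check user-supplied template variables against a type schema (sketch).

    `schema` maps variable name -> type name, e.g. {"idea": "string"}.
    Returns a list of human-readable error strings (empty list = valid).
    """
    type_map = {"string": str, "integer": int, "boolean": bool}
    errors = []
    for name, type_name in schema.items():
        if name not in variables:
            errors.append(f"missing variable: {name}")
        elif not isinstance(variables[name], type_map[type_name]):
            errors.append(f"{name}: expected {type_name}")
    return errors
```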
Security Features (v0.3.5+)
All MCP tools include input sanitization to protect against prompt injection, XSS, null byte injection, and input length abuse. The system detects 15+ prompt injection patterns and logs suspicious activity without blocking legitimate requests.
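The flag-don't-block behavior described above can be sketched as follows. The README specifies null-byte stripping, a length cap, and 15+ injection patterns; the two sample patterns, the cap value, and the function shape below are illustrative assumptions, not the server's actual pattern list.

```python
import re

# Two sample patterns standing in for the 15+ the server detects (assumption).
INJECTION_PATTERNS = [
    r"ignore (all )?previous instructions",  # prompt injection
    r"<script\b",                            # XSS payload
]
MAX_INPUT_LENGTH = 10_000  # illustrative cap

def sanitize_input(text: str):
    """Strip null bytes, enforce a length cap, and flag (not block)
    suspicious patterns, so legitimate requests still go through."""
    flags = []
    cleaned = text.replace("\x00", "")  # null byte injection defense
    if len(cleaned) > MAX_INPUT_LENGTH:
        cleaned = cleaned[:MAX_INPUT_LENGTH]
        flags.append("input truncated")
    for pattern in INJECTION_PATTERNS:
        if re.search(pattern, cleaned, re.IGNORECASE):
            flags.append(f"suspicious pattern: {pattern}")
    return cleaned, flags  # flags are logged server-side, request proceeds
```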
CI/CD Pipeline
Automated testing and security scanning runs on every push to main via GitHub Actions:
- Unit tests and integration tests (Python 3.11)
- Security scanning with Bandit (static analysis) and Safety (dependency audit)
- Coverage reporting with configurable thresholds
📊 Verified Service Metrics
Cross-validated by FLYWHEEL TEAM (T/CTO — Manus AI, AY/COO — Antigravity) against raw GCP Cloud Run logs. Scrapers excluded. Conservative rounding applied.
Phase 47 Ground Truth Correction (Mar 15, 2026): COO AY's forensic audit identified duplicate session counting in earlier reports. Metrics corrected: 4,000+ → 2,100+ engagement hours, 84.5% → 63.7% VCR, 1,088 → 1,281 users (user count increased after bot session deduplication). All metrics below reflect the forensically verified Ground Truth baseline. We believe honest self-correction builds stronger credibility than inflated numbers.
Phase 74 Update (Apr 10, 2026): Metrics updated to Report 074 (COO AY). 2,389+ hours, 90.7% VCR, 1,876+ unique endpoints (IP-based; verified users tracked via EA registration UUID), 312 tests. Pioneer Tier live via Polar. Coordination Layer Phase 1 deployed.
| Metric | Value | Methodology |
|---|---|---|
| Verified Engagement Hours | 2,389+ (all-time) | Session duration: first-to-last request per user per day. Scrapers excluded. Forensic standard v2.5 applied. |
| Value Confirmation Rate | 90.7% (Phase 74) | Sessions where user sends follow-up prompt (proof of value received). Forensically verified after deduplication. |
| Total Unique Endpoints | 1,876+ | Unique IP-based endpoints across all platforms (bot sessions deduplicated) |
| MCP Integration Rate | 43.1% Verified | 43.1% Verified / 36.1% Automated / 20.8% Unclassified (AY Report 056 baseline) |
| MCP Tools Available | 10 Scholar + 3 Pioneer | Scholar: 4 core + 6 template. Pioneer ($9/mo): 3 coordination tools |
| Multi-Model Providers | X=Gemini, Z=Groq, CS=Groq | Per-agent provider routing for optimal structured output |
| BYOK Live | Per-tool-call override | Pass own API key + provider on any call (v0.4.5+). Anthropic Claude 4 supported (v0.5.9+) |
| Trinity Quality | _overall_quality: "full" | All 3 agents returning real inference (v0.4.4+) |
| SessionContext | _session_id tracing | 8-char correlation ID per Trinity run (v0.5.0+) |
| COO Weekly Reports | 74+ automated | GCP log-based weekly analytics reports (AY/Antigravity, forensic standard v2.5) |
| Test Coverage | 312 tests, 52.76% | Comprehensive test suite with CI/CD pipeline |
| Pioneer Tier | $9/month | Polar MoR, 14-day refund, coordination tools |
| EA Registration | Register | Consent-first Z-Protocol design with Privacy Policy v2.0 + T&C v2.0 |
Adoption Trajectory (Flying Hours ✈️)
Data through W12 (March 22, 2026). Phase 55 forensic standard applied. W13+ collection ongoing for v0.6.0 pricing decisions.
| Week | Period | Weekly Hours | Cumulative Hours | Users | VCR |
|---|---|---|---|---|---|
| W02 | Jan 06–12 | 38.0h | 38h | 21 | — |
| W03 | Jan 13–19 | 115.3h | 153h | 55 | — |
| W04 | Jan 20–26 | 262.4h | 416h | 96 | — |
| W05 | Jan 27–Feb 02 | 309.6h | 725h | 105 | — |
| W06 | Feb 03–09 | 425.4h | 1,151h | 117 | — |
| W07 | Feb 10–16 | 409.0h | 1,198h | 172 | — |
| W08 | Feb 16–22 | 404.8h | 1,556h | 143 | — |
| W09 | Feb 23–Mar 1 | 198.5h | 1,755h | 46 | 90.7% |
| W10 | Mar 02–08 | 1,647.5h | 3,402h | 260 | 88.5% |
| W11 | Mar 09–15 | 393.0h | 3,795h | 130 | 84.5% |
⚠️ Phase 47 Correction Note: The cumulative hours above reflect pre-audit figures from weekly COO reports. After Phase 47 forensic deduplication (Report 056) and Phase 74 update (Report 074), the verified all-time total is 2,389+ hours with 1,876+ unique endpoints (IP-based; verified users tracked via EA registration UUID) and 90.7% VCR. The weekly breakdown will be retroactively corrected in a future report.
Traffic Classification Breakdown
| Category | Share | Description |
|---|---|---|
| Verified | 43.1% | Confirmed human or MCP client engagement |
| Automated | 36.1% | Programmatic API consumers, CI pipelines |
| Unclassified | 20.8% | Unknown User-Agent patterns |
| Scraper | — | Excluded from all metrics |
Source: AY COO Report 074 (Phase 74, April 2026). Scrapers excluded. Owner/Bot excluded. Forensic deduplication applied. Forensic standard v2.5.
Client Integration by User-Agent
| Client | Share | Description |
|---|---|---|
| Node.js | 65.3% | MCP clients via Claude Code, VS Code, Cursor |
| Python SDK | 20.3% | Python-based MCP integrations |
| Browser | 8.5% | Direct web visitors |
| Claude/Anthropic | 4.5% | Claude Desktop / Claude Code native clients |
| Other | 1.4% | Miscellaneous clients |
Key Insight: Over 85% of all traffic is machine-to-machine MCP integration, confirming VerifiMind PEAS is used as an integrated tool in developer workflows — not merely visited as a web demo. This traffic is invisible to traditional web analytics platforms like SimilarWeb.
Data Source: GCP Cloud Run HTTP Load Balancer logs. Audit classification via User-Agent analysis. Owner traffic excluded. Scraper traffic excluded via conservative classification. Full methodology documented in internal COO AY reports (41 weekly reports). Last updated: 2026-03-15.
🌟 What is VerifiMind-PEAS?
VerifiMind-PEAS is a methodology framework, not a code generation platform.
We provide a systematic approach to multi-model AI validation that ensures your applications are:
- ✅ Validated through diverse AI perspectives
- ✅ Ethical with built-in wisdom validation
- ✅ Secure with systematic vulnerability assessment
- ✅ Human-centered with you as the orchestrator
What We Provide
Core Methodology:
- ✅ Genesis Methodology: Systematic 5-step validation process
- ✅ X-Z-CS RefleXion Trinity: Specialized AI agents (Innovation, Ethics, Security)
- ✅ Genesis Master Prompts: Stateful memory system for project continuity
- ✅ Comprehensive Documentation: Guides, tutorials, case studies
Integration Support:
- ✅ Works with any LLM (Claude, GPT, Gemini, Kimi, Grok, Qwen, etc.)
- ✅ Integration guides for Claude Code, Cursor, and generic LLMs
- ✅ No installation required - just read and apply!
What We Do NOT Provide
We are NOT:
- ❌ A code generation platform
- ❌ A web interface for application scaffolding
- ❌ A no-code platform integration
- ❌ An automated deployment system
We ARE:
- ✅ A methodology you apply with your existing AI tools
- ✅ A framework for systematic validation
- ✅ A community of practice for ethical AI development
🎯 Latest Achievements
v0.5.12 — Polar Integration + Legal v2.0 (April 8, 2026)
Full Polar payment infrastructure deployed: PolarClient for customer state API, PolarAdapter with 5-minute TTL cache, webhook endpoint with Standard Webhooks HMAC verification (6 subscription events). Legal pages v2.0 — Privacy Policy and Terms & Conditions rewritten with Polar as Merchant of Record, service tier table (Scholar/EA/PILOT/Pioneer at $9/mo), 14-day refund policy, Malaysia governing law — now served as styled HTML pages at /privacy and /terms. UUID Tracer patch ensures Pioneer UUIDs reach GCP Log Explorer stdout for AY analytics ingestion (UUID Bridge Threat resolved). 312 tests, 52.76% coverage. PR #123, PR #124, PR #125
v0.5.11 — Coordination Foundation (April 7, 2026)
Three MACP v2.2 coordination tools deployed as Pioneer-gated premium tier: coordination_handoff_create, coordination_handoff_read, coordination_team_status. Tier-gate middleware (check_tier()) separates Scholar (free Trinity tools) from Pioneer (paid coordination tools). Phase 1 uses PIONEER_ACCESS_KEYS env var; Phase 2 Polar API wiring targets v0.5.13. 308 tests. PR #122
v0.5.10 — Trinity Verified (April 5, 2026)
Trinity pipeline verified end-to-end with real multi-model inference. 600-second timeout for long-running validation. Z Guardian max_tokens enforcement. BYOK Anthropic Claude 4 family (claude-opus-4-6, claude-sonnet-4-6). Two-tier Pilot/EA registration with invite codes. Prior reasoning compression fixes token overflow. Phase 73 metrics: 2,389+ verified engagement hours, 90.7% VCR. 290 tests. PR #120
v0.5.6 — Gateway: Early Adopter Registration (March 23, 2026)
The v0.5.6 "Gateway" release deploys the Early Adopter Registration Gateway at verifimind.ysenseai.org/register with Z-Protocol consent-first design. Privacy Policy v1.0 and Terms of Service v1.0 reviewed and approved by T (CTO). Opt-Out System with UUID-based data deletion at /optout. Firestore as EA data store (native to GCP, free tier covers EA volume). Phase 55 metrics (Report 062): 2,250+ verified engagement hours, 96.0% VCR, 1,480+ users, 290 tests. W12 fully closed. DFSC 2026 campaign live on Mystartr. Landing page (verifimind.io) updated with EA Registration CTA and Mystartr campaign section. Wiki Early Adopter Program page created by RNA (CSO). PR #99
v0.5.5 — Trinity Quality Baseline (March 13, 2026)
The v0.5.5 release fixes a critical schema regression that broke all run_full_trinity calls in v0.5.4, and establishes the quality baseline for v0.6.0 development. Root cause: The founder_summary field was assigned as a post-construction Python attribute on a Pydantic BaseModel, which Pydantic rejects at runtime. Fix: declared as a proper Optional[dict] field in TrinitySynthesis. Three regression tests added to guard against this class of bug. Individual agent calls (consult_agent_x/z/cs) were unaffected throughout. 208 tests passing.
v0.5.4 — X Agent Bias Fix + Founder Summary (March 12, 2026)
Critical fix for the X Agent evaluation bias that was rejecting non-MACP-aligned business concepts. Root cause: Dimension 6 ("Does this increase MACP v2.0 adoption?") and Dimension 7 (hardcoded LangChain/CrewAI/AutoGen comparison) made X evaluate every concept against VerifiMind's internal roadmap — irrelevant for 99% of public users (recipe apps, tutoring marketplaces, home bakeries). X Agent v4.3 now evaluates any business idea on its own merits with dynamic market_competition identifying actual competitors in the concept's own domain. Added founder_summary — a plain-language synthesis layer (verdict, score, what works, things to address, next steps) readable by first-time entrepreneurs. Added research_prompts — 2-3 specific Perplexity/Grok queries per concept for deeper market validation, bridging to XV integration.
v0.5.3 — Token Ceiling Monitor (March 10, 2026)
Added Token Ceiling Monitor for passive Z Agent response tracking. _z_token_monitor field in every run_full_trinity response includes risk_level (LOW/MEDIUM/HIGH/CRITICAL), utilization %, and truncated flag. Server-side WARNING logs on HIGH/CRITICAL events. Implemented AY COO 404 retention fix — catch-all 404 handler returns actionable JSON with correct MCP endpoint and troubleshooting guidance, targeting the 70% drop-off from misconfigured MCP clients. Added Smithery /.well-known/mcp/server-card.json static card enabling MCP scanner bypass. Tests: 208 passing.
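The monitor's classification logic might be sketched like this. The README specifies the four risk levels, the utilization percentage, and the truncated flag; the threshold boundaries and the 8,192 default ceiling used below are assumptions (the ceiling is mentioned in the v0.5.2 notes, the thresholds are not documented).

```python
def z_token_monitor(tokens_used: int, ceiling: int = 8192) -> dict:
    """Classify Z Agent token utilization into risk levels (sketch).

    Returns a dict shaped like the _z_token_monitor field described above.
    Threshold values are illustrative assumptions.
    """
    utilization = tokens_used / ceiling * 100
    if utilization >= 95:
        risk = "CRITICAL"
    elif utilization >= 85:
        risk = "HIGH"
    elif utilization >= 70:
        risk = "MEDIUM"
    else:
        risk = "LOW"
    return {
        "risk_level": risk,                    # LOW/MEDIUM/HIGH/CRITICAL
        "utilization": round(utilization, 1),  # percent of ceiling used
        "truncated": tokens_used >= ceiling,   # response hit the ceiling
    }
```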
v0.5.2 — Genesis v4.2 "Sentinel-Verified" (March 9, 2026)
Genesis v4.2 introduces forced citation patterns — Z Guardian and CS Security now cite specific framework codes in every reasoning step (frameworks_cited[] per step, max 5 per step). Full framework names appear once in applicable_frameworks by tier at the end of each response. Achieved ~45.8% token headroom below the 8,192 context ceiling. L (GODEL) Blind Test #3 passed 11/11 correct identifications. MACP Protocol upgraded to v2.2 "Identity" — establishing clear distinction between human orchestrators (Alton) and AI-generated entities (L). FLYWHEEL TEAM formalized at v1.3. MCP Registry published as io.github.creator35lwb-web/verifimind-genesis v2.2.0. PR #77–78
v0.5.1 — Z-Protocol v1.1 + CS Security v1.1 "Sentinel" (March 7, 2026)
Both sentinel specifications from T (CTO) shipped ahead of roadmap. Z-Protocol v1.1 "Sentinel": 21 frameworks, 4-tier jurisdictional architecture (International → EU → US → ASEAN), scoring_breakdown per dimension, applicable_frameworks by tier, jurisdiction_detected, compliance_timeline. CS Security v1.1 "Sentinel": 6-stage pipeline, 12-dimension evaluation, OWASP Agentic AI threat model, stage + standards_cited[] per step, macp_security_assessment. Trinity baseline established at 8.7/10 PROCEED. PR #71–75
v0.5.0 — Foundation: Unbreakable Engine (March 1, 2026)
The v0.5.0 Foundation release makes everything after it possible. SessionContext tracing adds an 8-character _session_id correlation token to every Trinity run for debugging (NOT user tracking — ephemeral, not stored). Error handling v2 introduces build_error_response() for structured, consistent error responses across all tools. Health endpoint v2 (health_version: 2) provides richer diagnostics including session tracking status and BYOK availability. Smithery removal completes the migration to fully self-hosted GCP Cloud Run (zero external dependencies). BYOK hardened with retry logic, graceful degradation for invalid keys, and provider health checks. 205 tests at 55.1% coverage. Z-Protocol security review: APPROVED (9.2/10). PR #60
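An 8-character ephemeral correlation ID like the `_session_id` described above could be generated as follows; using a truncated `uuid4` hex string is an assumption about the scheme, chosen because it is the simplest way to get a short random hex token.

```python
import uuid

def new_session_id() -> str:
    """Generate an 8-char correlation ID for one Trinity run (sketch).
    Ephemeral by design: generated per run, attached to responses,
    never persisted — debugging aid, not user tracking."""
    return uuid.uuid4().hex[:8]

sid = new_session_id()  # e.g. an 8-char hex string like "3f9a1c2e"
```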
v0.4.5 — BYOK Live: Per-Tool-Call Provider Override (February 28, 2026)
The v0.4.5 release introduces Bring Your Own Key (BYOK) Live — users can now pass their own api_key and llm_provider on any individual tool call to override the server's default provider. The server auto-detects key format from prefix patterns (gsk_ → Groq, sk-ant- → Anthropic, sk- → OpenAI, AIza → Gemini) and creates ephemeral provider instances per request. Keys are never stored — used once and discarded. When no BYOK key is provided, the server falls back to its default Gemini/Groq configuration seamlessly. Response metadata includes _byok: true/false for full transparency. Triple-validated by Manus AI (6/6 pass), Claude Code (6/6 pass), and CI pipeline (175 tests). PR #55
v0.4.4 — Multi-Model Trinity: Full Quality (February 27, 2026)
The v0.4.4 release achieves _overall_quality: "full" — all three Trinity agents now return real AI inference with zero fallback defaults. Agent X (Innovator) runs on Gemini 2.5 Flash for creative analysis, while Agent Z (Guardian) and Agent CS (Validator) are routed to Groq/Llama-3.3-70b for reliable structured JSON output. The GroqProvider was upgraded with the full C-S-P extraction pipeline: strip_markdown_code_fences(), _extract_best_json() with field-overlap scoring, _merge_json_objects(), and _fill_schema_defaults(). Quality markers (_inference_quality, _agent_chain_status, _overall_quality) are embedded in every response for full transparency. 16 PRs merged (#33–#48), all CI passed. 12 new unit tests added.
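Two of the pipeline steps named above — fence stripping and field-overlap JSON extraction — can be sketched in standard-library Python. The function names echo those in the release notes, but these bodies are illustrative reconstructions, not the actual `GroqProvider` code.

```python
import json
import re

def strip_markdown_code_fences(text: str) -> str:
    """Remove ```json ... ``` fences an LLM may wrap around its output."""
    return re.sub(r"```(?:json)?\s*|\s*```", "", text).strip()

def extract_best_json(text: str, expected_fields: set):
    """Find JSON objects with raw_decode and keep the candidate whose keys
    overlap most with the expected schema (field-overlap scoring, sketch)."""
    decoder = json.JSONDecoder()
    best, best_score = None, -1
    for idx in range(len(text)):
        if text[idx] != "{":
            continue  # raw_decode only needs candidate start positions
        try:
            obj, _ = decoder.raw_decode(text, idx)
        except ValueError:
            continue  # not valid JSON starting here; keep scanning
        if isinstance(obj, dict):
            score = len(expected_fields & obj.keys())
            if score > best_score:
                best, best_score = obj, score
    return best
```

This is why the approach is more robust than regex parsing: `raw_decode` tolerates surrounding prose, and overlap scoring picks the schema-shaped object even when the model emits several JSON fragments.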
v0.4.3 — C-S-P Pipeline & System Notice (February 27, 2026)
The v0.4.3 release implements the C-S-P (Compression–State–Propagation) methodology from the GodelAI framework, applied directly to the Trinity pipeline. Robust JSON extraction with raw_decode() and field-overlap scoring replaces brittle regex parsing. State validation checkpoints between Trinity stages prevent garbage propagation. System notice (_system_notice) field added to all tool responses for transparent user communication. Gemini JSON mode (response_mime_type: "application/json") tested and integrated.
v0.4.2 — Mock Mode Resolved & Transparent Disclosure (February 26, 2026)
The v0.4.2 release resolves the mock mode issue that affected all Trinity consultations from v0.4.0–v0.4.1. The root cause was a deprecated Gemini model endpoint (gemini-2.0-flash → gemini-2.5-flash). A transparent disclosure was published (Discussion #31) acknowledging the issue and explaining the "Structural Scaffolding Value" thesis — even in mock mode, the framework provided value by forcing structured multi-perspective reasoning. CodeQL security alerts reduced from 13 to 0.
Genesis v3.1 — CS Agent Multi-Stage Verification Protocol (February 2026)
Genesis v3.1 introduces a 4-Stage Security Verification Protocol for the CS Agent: Detection → Self-Examination (MANDATORY) → Severity Rating → Human Review. Self-examination is mandatory — every finding must be proven AND disproven before escalation. No auto-fixes. Human oversight is always the final stage. This is a workflow enhancement only — zero code changes to the server foundation. Inspired by Claude Code Security principles. Full protocol documentation: docs/security/.
v0.4.1 — Markdown-First Output & Smithery Sunset (February 14, 2026)
The v0.4.1 release introduces Markdown-first output with content negotiation — clients can now request Accept: text/markdown to receive validation reports in Markdown format (80% token reduction vs JSON). This aligns with the broader industry shift toward Markdown as the agent-native communication format (see Cloudflare: Markdown for Agents). All 13 Smithery proxy URL references were removed from server endpoints in preparation for the Smithery.ai legacy architecture sunset on March 1, 2026. The pdf_generator.py is deprecated — retained only for Zenodo DOI and enterprise compliance. Server version bumped with 155 total tests passing at 54.27% coverage.
v0.4.0 — Unified Prompt Templates (January 30, 2026)
The v0.4.0 release introduced the Unified Prompt Template system, adding 6 new MCP tools (10 total) and 19 pre-built YAML templates organized across 6 libraries. Templates are aligned to Genesis Methodology phases, support custom variables with type validation, and can be exported to Markdown or JSON. Users can import templates from GitHub Gists or raw URLs, and register custom templates at runtime. This release also includes the MACP v2.0 specification (DOI: 10.5281/zenodo.18504478) and the L (GODEL) Ethical Operating Framework v1.1.
v0.3.5 — Security Hardening (January 30, 2026)
Comprehensive input sanitization was added to all MCP tools, protecting against prompt injection (15+ patterns), XSS attacks, null byte injection, and input length abuse. All 29 sanitization unit tests pass. A CI/CD pipeline was established with GitHub Actions for automated testing and security scanning (Bandit, Safety) on every push.
Standardization Protocol v1.0 (December 2025)
The standardization phase generated 57 complete Trinity validation reports across seven domains including financial services, healthcare, education, and civic technology. By combining Gemini’s free tier for innovation analysis with Claude for ethics and security validation, we achieved sustainable costs (~$0.003 per validation) while maintaining research-grade quality. The 65% veto rate confirms our ethical safeguards work as designed.
Key Metrics
| Metric | Value | Significance |
|---|---|---|
| MCP Tools | 10 | 4 core validation + 6 template management |
| Templates | 19 | Pre-built across 6 libraries |
| Validation Reports | 57 | Proof of methodology at scale |
| Success Rate | 95% | Reliable, production-ready system |
| Cost per Validation | ~$0.003 | Sustainable for solo developers |
| Veto Rate | 65% | Strong ethical safeguards working |
| LLM Providers | 7 | Gemini, OpenAI, Anthropic, Groq, Mistral, Ollama, Perplexity |
| Multi-Model Routing | X=Gemini, Z=Groq, CS=Groq | Per-agent provider optimization |
| Trinity Quality | _overall_quality: "full" | All agents returning real inference (v0.4.4+) |
| Total Users | 1,480+ | Unique users (Phase 55, bot sessions deduplicated) |
| Tests | 290 | Comprehensive test suite with CI/CD pipeline |
| MACP Version | v2.2 "Identity" | Human-orchestrated multi-agent coordination protocol |
| Weekly Reports | 62 (automated) | COO AY GCP log-based analytics reports |
| FLYWHEEL Handoffs | 55+ | Structured session handoffs across all agents |
| EA Registration | Live | Consent-first Z-Protocol design |
| DFSC 2026 | Mystartr Campaign | Digital Freelancer Startup Competition |
Version History
| Version | Date | Highlights |
|---|---|---|
| v0.5.6 | Mar 23, 2026 | Gateway: EA Registration, Privacy Policy v1.0, Phase 55 metrics, DFSC 2026, 290 tests |
| v0.5.5 | Mar 13, 2026 | Trinity Baseline: TrinitySynthesis schema fix, 3 regression tests, 208 tests |
| v0.5.4 | Mar 12, 2026 | X Agent v4.3: creator-centric bias fix, founder_summary, research_prompts |
| v0.5.3 | Mar 10, 2026 | Token Ceiling Monitor, 404 retention fix, Smithery server-card |
| v0.5.2 | Mar 9, 2026 | Genesis v4.2 "Sentinel-Verified": forced citations, MACP v2.2 "Identity", L Blind Test 11/11 |
| v0.5.1 | Mar 7, 2026 | Z-Protocol v1.1 + CS v1.1 "Sentinel": 21 frameworks, 6-stage, OWASP Agentic AI |
| v0.5.0 | Mar 1, 2026 | Foundation: SessionContext tracing, error handling v2, health v2, Smithery removal, 205 tests |
| v0.4.5 | Feb 28, 2026 | BYOK Live: per-tool-call provider override, auto-detect key format, triple-validated |
| v0.4.4 | Feb 27, 2026 | Multi-Model Trinity (_overall_quality: "full"), X=Gemini, Z/CS=Groq |
| v0.4.3 | Feb 27, 2026 | C-S-P pipeline, system notice, robust JSON extraction |
| v0.4.2 | Feb 26, 2026 | Mock mode resolved, transparent disclosure, CodeQL 13→0 |
| Genesis v3.1 | Feb 2026 | CS Agent 4-Stage Verification Protocol, zero code changes |
| v0.4.1 | Feb 14, 2026 | Markdown-first output, Smithery URL removal, PDF deprecated |
| v0.4.0 | Jan 30, 2026 | Unified Prompt Templates, 6 new tools, MACP v2.0 |
| v0.3.5 | Jan 30, 2026 | Input sanitization, CI/CD pipeline |
| v0.3.2 | Jan 29, 2026 | Gemini 2.5-flash model update |
| v0.3.1 | Jan 29, 2026 | Smart Fallback, rate limiting, per-agent providers |
| v0.3.0 | Jan 28, 2026 | BYOK multi-provider support (7 providers) |
| v0.2.0 | Dec 25, 2025 | Multi-platform distribution |
| v0.1.0 | Dec 21, 2025 | Initial MCP server deployment |
View Full Changelog → | View 57 Trinity Validation Reports →
📚 Case Studies: Real-World Applications
VerifiMind-PEAS has been applied to validate real-world projects from concept to production. These case studies demonstrate the practical application of our methodology.
MarketPulse v5.0
AI-Powered Daily Market Intelligence for Value Investors
| Attribute | Value |
|---|---|
| Project | MarketPulse |
| Version | 5.0 (Production Ready) |
| Validation Date | January 2026 |
| Status | ✅ VALIDATED |
MarketPulse is an open-source n8n workflow that delivers comprehensive daily market briefings for value investors. It demonstrates the "Bootstrapper's Edge" philosophy—leveraging free-tier infrastructure and open-source AI to build persistent, high-value intelligence systems at minimal cost.
Trinity Validation Results:
- X-Agent (Innovation): ✅ Approved - Democratizes financial intelligence through clever synthesis of free tools.
- Z-Agent (Ethics): ✅ Approved - Includes clear disclaimers that this is not financial advice.
- CS-Agent (Security): ✅ Approved - Secure credential management through n8n's built-in system.
📖 Read the Full MarketPulse Case Study →
A/B Test: Human Intuition vs. Validation-First Design
The Power of Methodological Rigor — A Real-World Comparison
| Attribute | Value |
|---|---|
| Subject | MarketPulse GCP Deployment Architecture |
| Case A | Intuition-First Design (Manus AI) |
| Case B | Validation-First (X-Z-CS Trinity) |
| Validation Date | March 2026 |
| Status | ❌ REJECTED by Trinity (Architecture redesigned) |
A real-world A/B test that occurred during MarketPulse development. A sophisticated GCP deployment architecture was designed using best practices and domain expertise (Case A), then systematically validated by the X-Z-CS Trinity (Case B). The Trinity unanimously rejected the design, exposing hidden costs and critical resource constraints that the intuition-first approach missed entirely.
Trinity Validation Results:
- X-Agent (Gemini): ⚠️ RECONSIDER - Feasibility 65/100. Hidden costs in VPC connector invalidate zero-cost claim.
- Z-Agent (Anthropic): ❌ REJECTED - Risk 85/100. "Financially deceptive" and "technically impossible" on 1GB RAM.
- CS-Agent (Anthropic): ❌ IMPRACTICAL - Practicality 15/100. "Building a fortress to protect a sandwich."
📖 Read the Full A/B Test Case Study →
💡 Why VerifiMind-PEAS?
The Problem: Single-Model Bias
Most AI development relies on a single model (e.g., only Claude, only GPT).
This creates:
- 🔴 Single-model bias: One perspective, blind spots
- 🔴 Inconsistent quality: No systematic validation
- 🔴 Ethical gaps: No wisdom validation
- 🔴 Security vulnerabilities: No systematic security review
The Solution: Multi-Model Orchestration
VerifiMind-PEAS orchestrates multiple AI models for diverse perspectives:
- X Intelligent Agent (Innovation): Generates creative solutions
- Z Guardian Agent (Ethics): Validates ethical alignment
- CS Security Agent (Security): Identifies vulnerabilities
By synthesizing diverse AI perspectives under human direction, you achieve:
- ✅ Objective validation: Multiple models check each other
- ✅ Ethical alignment: Wisdom validation built-in
- ✅ Security assurance: Systematic vulnerability assessment
- ✅ Human-centered: You orchestrate, AI assists
Honest Positioning
Multi-model orchestration is not new. Developers have been using multiple AI models (Claude, GPT, Gemini) together for years. What makes VerifiMind-PEAS different is how we structure this orchestration through the X-Z-CS RefleXion Trinity and Genesis Master Prompts.
Our genuine novelty:
- ✅ X-Z-CS RefleXion Trinity: Specialized validation roles (Innovation, Ethics, Security) with no prior art found
- ✅ Genesis Master Prompts: Stateful memory system for project continuity across multi-model workflows
- ✅ Wisdom validation: Ethical alignment and cultural sensitivity as first-class concerns
- ✅ Human-at-center: You orchestrate (not just review), AI assists (not automates)
What we build on (established practices):
- Multi-model usage (common practice since 2023)
- Agent-based architectures (LangChain, AutoGen, CrewAI)
- Human-in-the-loop validation (industry standard)
Our contribution: Transforming ad-hoc multi-model usage into systematic validation methodology with wisdom validation and human-centered orchestration.
Competitive Positioning: Complementary, Not Competing
VerifiMind-PEAS operates as a validation layer ABOVE execution frameworks. We don't replace LangChain, AutoGen, or CrewAI — we complement them.
Think of it this way:
- Execution frameworks (LangChain, AutoGen, CrewAI): "How to build and run AI agents"
- VerifiMind-PEAS: "How to validate what those agents produce"
Comparison Table:
| Framework | Layer | Focus | Human Role | VerifiMind-PEAS Relationship |
|---|---|---|---|---|
| LangChain | Execution | Tool integration, chains | In-loop (reviewer) | Validates LangChain outputs for ethics + security |
| AutoGen | Execution | Multi-agent automation | In-loop (supervisor) | Validates AutoGen conversations for wisdom alignment |
| CrewAI | Execution | Role-based agents | In-loop (manager) | Validates CrewAI results for cultural sensitivity |
| OpenAI Swarm | Execution | Lightweight handoffs | In-loop (router) | Provides memory layer via Genesis Master Prompts |
| VerifiMind-PEAS | Validation | Wisdom validation | At-center (orchestrator) | Validation layer above all execution frameworks |
Industry focus: Code execution, task automation, agent coordination
VerifiMind-PEAS focus: Wisdom validation, ethical alignment, human-centered orchestration
Result: Use VerifiMind-PEAS with LangChain/AutoGen/CrewAI to add validation layer. We complement, not compete.
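The "validation layer above execution" idea can be sketched in a few lines. This is a hypothetical illustration, not the project's actual API: `run_agent` stands in for any LangChain/AutoGen/CrewAI call, and `trinity_validate` stands in for the X-Z-CS review.

```python
def run_agent(task: str) -> str:
    """Execution layer: placeholder for a LangChain/AutoGen/CrewAI run."""
    return f"draft output for: {task}"

def trinity_validate(output: str) -> dict:
    """Validation layer: placeholder scores; a real integration would
    consult the X (feasibility), Z (ethics/risk), and CS (security) agents."""
    return {"x_feasibility": 80, "z_risk": 10, "cs_practicality": 90}

def validated_run(task: str, risk_threshold: int = 50):
    output = run_agent(task)            # any execution framework
    scores = trinity_validate(output)   # PEAS validation on top
    if scores["z_risk"] > risk_threshold:
        raise ValueError(f"rejected by Z Guardian: {scores}")
    return output, scores

out, scores = validated_run("summarize report")
```

The key design point is that validation wraps the execution call rather than replacing it, which is why the methodology composes with any framework.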
🔄 How It Works: The Genesis Methodology
<p align="center"> <img src="docs/assets/diagrams/Genesis Methodology 5-Step Process.png" alt="Genesis Methodology 5-Step Process" width="800"/> </p>The Genesis Methodology is a systematic 5-step process for multi-model AI validation:
Step 1: Initial Conceptualization
- Human defines the problem or vision
- AI (X Intelligent Agent) generates initial concepts and solutions
- Output: Initial concept with creative possibilities
Step 2: Critical Scrutiny
- AI (Z Guardian Agent) validates ethical alignment
- AI (CS Security Agent) identifies security vulnerabilities
- Multiple models challenge and validate each other
- Output: Validated concept with ethical and security considerations
Step 3: External Validation
- Independent AI analysis confirms systematic approach
- Research validates against academic literature and industry best practices
- Output: Externally validated concept with evidence
Step 4: Synthesis
- Human orchestrates final synthesis
- Human makes decisions based on AI perspectives
- Human documents decisions in Genesis Master Prompt
- Output: Final decision with documented rationale
Step 5: Iteration
- Recursive refinement based on feedback
- Continuous improvement through multiple cycles
- Genesis Master Prompt updated with learnings
- Output: Refined concept ready for next phase
This process ensures every output is validated through diverse AI perspectives before final human approval.
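The 5-step loop above can be sketched as code. Everything here (function names, the `Concept` structure, the note strings) is illustrative scaffolding, not the reference implementation — the point is the control flow: generate, scrutinize, externally validate, human-synthesize, iterate.

```python
from dataclasses import dataclass

@dataclass
class Concept:
    idea: str
    notes: list

def x_generate(idea):            # Step 1: X Agent proposes a concept
    return Concept(idea, [f"X: initial concept for '{idea}'"])

def z_cs_scrutinize(concept):    # Step 2: Z (ethics) and CS (security) review
    concept.notes += ["Z: ethics check passed", "CS: security check passed"]
    return concept

def external_validate(concept):  # Step 3: independent confirmation
    concept.notes.append("External: approach confirmed")
    return concept

def human_synthesize(concept, master_prompt):  # Step 4: human decides, documents
    master_prompt.append(f"DECISION on '{concept.idea}': approved")
    return concept

def genesis_cycle(idea, master_prompt, iterations=2):  # Step 5: iterate
    concept = None
    for _ in range(iterations):
        concept = x_generate(idea)
        concept = z_cs_scrutinize(concept)
        concept = external_validate(concept)
        concept = human_synthesize(concept, master_prompt)
    return concept, master_prompt

concept, log = genesis_cycle("meditation timer app", master_prompt=[])
```

Note that the Genesis Master Prompt (`master_prompt` here) accumulates one documented decision per iteration — that accumulation is what provides memory across sessions.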
🏗️ Architecture: The X-Z-CS RefleXion Trinity
<p align="center"> <img src="docs/assets/diagrams/AI Council Multi-Model Orchestration and Validation.png" alt="AI Council Architecture" width="800"/> </p>VerifiMind-PEAS implements a multi-model orchestration architecture where:
Human Orchestrator (You)
- Role: Center of decision-making
- Responsibility: Synthesize AI perspectives, make final decisions
- Tools: Genesis Master Prompts, integration guides
X Intelligent Agent (Analyst/Researcher)
- Role: Market intelligence and feasibility analysis
- Focus: Research, technical feasibility, market analysis
- Models: Perplexity, GPT-4, Gemini (research-focused)
- Note: X agent focuses on analytical research and validation
Z Guardian Agent (Ethics)
- Role: Compliance and human-centered design protector
- Focus: Ethical alignment, cultural sensitivity, accessibility
- Models: Claude, GPT-4 (ethics-focused)
CS Security Agent (Security)
- Role: Cybersecurity protection layer
- Focus: Vulnerability assessment, threat modeling, security best practices
- Models: GPT-4, Claude (security-focused)
- Genesis v3.1: 4-Stage Verification Protocol — Detection → Self-Examination → Severity Rating → Human Review (docs)
This architecture combines diverse AI perspectives under human direction to produce objective, validated results.
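The role-to-model mapping described above can be expressed as a small routing table. This is a hedged sketch — the model identifiers and the fallback-selection helper are illustrative assumptions, not the project's configuration format.

```python
# Illustrative role-to-model routing for the X-Z-CS Trinity.
# Model lists mirror the roles described above; order expresses preference.
TRINITY_ROLES = {
    "X": {"role": "Analyst/Researcher",
          "focus": ["research", "feasibility", "market analysis"],
          "models": ["perplexity", "gpt-4", "gemini"]},
    "Z": {"role": "Ethics Guardian",
          "focus": ["ethical alignment", "cultural sensitivity", "accessibility"],
          "models": ["claude", "gpt-4"]},
    "CS": {"role": "Security Agent",
           "focus": ["vulnerability assessment", "threat modeling"],
           "models": ["gpt-4", "claude"]},
}

def pick_model(agent: str, available: set) -> str:
    """Return the first preferred model for an agent that is available."""
    for model in TRINITY_ROLES[agent]["models"]:
        if model in available:
            return model
    raise LookupError(f"no model available for agent {agent}")
```

For example, with only Gemini and Claude keys configured, the Z Agent would route to Claude while the X Agent falls back to Gemini — one way a multi-provider (BYOK) setup can degrade gracefully.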
About Y Agent (Innovator)
You may see Y Agent (Innovator) in some diagrams. This agent is part of the broader YSenseAI™ project, which focuses on innovation and strategic vision. The complete ecosystem includes:
- Y Agent (YSenseAI™): Innovation and creative ideation
- X Agent (VerifiMind-PEAS): Research and analytical validation
- Z Agent (VerifiMind-PEAS): Ethical compliance
- CS Agent (VerifiMind-PEAS): Security validation
VerifiMind-PEAS focuses on the X-Z-CS Trinity (Research, Ethics, Security), while YSenseAI™ provides the Y Agent (Innovation). Together, they form a complete validation framework.
💡 The Concept: Crystal Balls Inside the Black Box
<p align="center"> <img src="docs/assets/diagrams/Crystall Balls inside Black Box.png" alt="Crystal Balls Inside Black Box" width="600"/> </p>Instead of treating AI as an opaque "black box," VerifiMind-PEAS places multiple "crystal balls" (diverse AI models) inside the box to illuminate the path forward.
Each crystal ball represents a specialized AI agent with a unique perspective:
- Y (Innovator): Generates creative concepts and strategic vision (from YSenseAI™)
- X (Analyst): Researches feasibility and market intelligence (from VerifiMind-PEAS)
- Z (Guardian): Ensures ethical compliance and safety (from VerifiMind-PEAS)
- CS (Validator): Validates claims against external evidence and security best practices (from VerifiMind-PEAS)
Note: The diagram shows the complete 4-agent system (Y-X-Z-CS). VerifiMind-PEAS specifically implements the X-Z-CS Trinity, while Y Agent comes from YSenseAI™.
By orchestrating these diverse perspectives under human direction, we achieve objective, validated results that no single AI model can provide.
🚀 Getting Started
No Installation Required!
VerifiMind-PEAS is a methodology, not software. You don't need to install anything!
Step 1: Read the Genesis Master Prompt Guide
Start here: Genesis Master Prompt Guide
This comprehensive guide teaches you:
- What is a Genesis Master Prompt?
- Why Genesis Master Prompts matter
- Step-by-step tutorial (meditation app example)
- Real-world validation (87-day journey)
- Advanced techniques
- Common mistakes and solutions
Time: 30 minutes to read, lifetime of value
Step 2: Choose Your AI Tool
VerifiMind-PEAS works with any LLM:
- ✅ Claude (Anthropic)
- ✅ GPT-4 (OpenAI)
- ✅ Gemini (Google)
- ✅ Kimi (Moonshot AI)
- ✅ Grok (xAI)
- ✅ Qwen (Alibaba)
- ✅ Any other LLM
Recommended: Use at least 2-3 LLMs for multi-model validation.
Step 3: Follow Integration Guides
Choose your integration approach:
- Claude Code
  - Paste GitHub repo URL → Claude applies methodology
  - Best for: Code-focused projects
- Cursor
  - Paste GitHub repo URL → Cursor applies methodology
  - Best for: IDE-integrated development
- Generic LLM
  - Copy-paste Genesis Master Prompt → Any LLM applies methodology
  - Best for: Platform-agnostic approach
Step 4: Start Your First Project
Follow the tutorial in the Genesis Master Prompt Guide:
- Create your Genesis Master Prompt
- Start first session with X Intelligent Agent (innovation)
- Validate with Z Guardian Agent (ethics)
- Validate with CS Security Agent (security)
- Synthesize perspectives and make decision
- Update Genesis Master Prompt
- Repeat!
Example projects:
- Meditation timer app (tutorial example)
- AI-powered attribution system (YSenseAI™)
- Multi-model validation framework (VerifiMind-PEAS itself)
💻 Reference Implementation (Optional)
VerifiMind-PEAS is a methodology framework that can be applied with any LLM or tool. You do NOT need code to use VerifiMind-PEAS.
However, for developers who want to see a complete implementation or need a starter template, we provide a Python reference implementation.
What's Included
The reference implementation demonstrates how to automate the X-Z-CS Trinity:
- X Intelligent Agent: Innovation engine for business viability analysis
- Z Guardian Agent: Ethical compliance validation (GDPR, UNESCO AI Ethics)
- CS Security Agent: Security validation with Socratic questioning engine (Concept Scrutinizer)
- Orchestrator: Multi-agent coordination and conflict resolution
- PDF Report Generator: Audit trail documentation
Status: 85% production-ready (Phase 1-2 complete, Phase 3-6 in progress)
Three Ways to Use VerifiMind-PEAS
Option 1: Apply Methodology Manually (No code required)
- Use Genesis Master Prompts with your preferred LLM
- Follow integration guides (Claude Code, Cursor, Generic LLM)
- Orchestrate X-Z-CS validation yourself
- Best for: Non-technical users, custom workflows
Option 2: Use Reference Implementation (Python developers)
- Clone repository: git clone https://github.com/creator35lwb-web/VerifiMind-PEAS
- Install dependencies: pip install -r requirements.txt
- Run validation: python verifimind_complete.py --idea "Your app idea"
- Best for: Developers who want automation, learning how X-Z-CS works
Option 3: Extend Reference Implementation (Contributors)
- Fork repository and add new agents, validation engines, or integrations
- Submit pull request to share with community
- Best for: Researchers, advanced developers, open-source contributors
Documentation
- Code Foundation Completion Summary: Current implementation status (85% complete)
- Code Foundation Analysis: Technical architecture and design decisions
- Requirements: Python dependencies
Important Notes
The reference implementation is:
- ✅ A learning resource (see how methodology translates to code)
- ✅ A starter template (fork and customize for your needs)
- ✅ A validation proof (shows methodology is executable)
The reference implementation is NOT:
- ❌ A required component (you can apply methodology without code)
- ❌ A production-ready SaaS (this is a reference, not a hosted service)
- ❌ The only way to implement (you can use other languages, tools, approaches)
Remember: VerifiMind-PEAS is a methodology framework. The code is ONE way to implement it, not THE way.
📚 Documentation
Core Methodology
- Genesis Methodology White Paper v1.1: Comprehensive academic documentation
- Genesis Master Prompt Guide: Practical implementation guide
- X-Z-CS RefleXion Trinity Master Prompts: Specialized agent prompts (Chinese)
Integration Guides
Documentation Best Practices
VerifiMind-PEAS includes a comprehensive documentation framework for managing context across multi-model LLM workflows.
Three-Layer Architecture:
- Genesis Master Prompt (Project Memory) - Single source of truth, updated after every session
- Module Documentation (Deep Context) - Feature-specific details organized in /docs
- Session Notes (Iteration History) - Complete audit trail of decisions and insights
Why This Matters:
- ✅ Context persistence across LLM sessions (no manual re-entry)
- ✅ Platform-agnostic (works with Claude, GPT, Gemini, Kimi, Grok, Qwen, etc.)
- ✅ Multi-model workflows (consistent context for X-Z-CS validation)
- ✅ Complete audit trail (track every decision and iteration)
Learn more: Documentation Best Practices Guide
Templates:
Case Studies
- YSenseAI™ 87-Day Journey (Landing Pages): Real-world validation of Genesis Methodology
- VerifiMind-PEAS Development (Landing Pages): Meta-application of methodology to itself
- MarketPulse v5.0 Case Study: From concept to production with Trinity validation
- A/B Test: Intuition vs. Validation: Real-world proof of validation-first methodology
Operations & Troubleshooting
- MCP Server Troubleshooting Guide: Common HTTP status codes, configuration errors, and solutions
- GCP Monitoring Setup Guide: Dashboard, alerting, and log query reference
- GCP Deployment Guide: Cloud Run deployment instructions
- Server Status: Current operational status
Additional Resources
- Roadmap: Strategic development plan
- Changelog: Detailed version history
- Contributing Guidelines: How to contribute
- Zenodo Publication Guide: Defensive publication documentation
- MACP v2.0 Specification: Multi-Agent Communication Protocol
- L (GODEL) Ethical Operating Framework: Ethical constitution for AI agents
🔧 Troubleshooting
⚠️ Common Mistakes (Read This First!)
Based on real production logs, 83.7% of all errors come from three configuration mistakes. If you are having trouble connecting, check these first:
Mistake #1: Wrong URL Path (405 Method Not Allowed)
Symptom: You get a 405 Method Not Allowed error.
Cause: You are sending requests to https://verifimind.ysenseai.org/ instead of https://verifimind.ysenseai.org/mcp/.
Fix: Always include /mcp/ in the URL:
{
"mcpServers": {
"verifimind-peas": {
"url": "https://verifimind.ysenseai.org/mcp/"
}
}
}
Mistake #2: Using GET Instead of POST (400 Bad Request)
Symptom: You get a 400 Bad Request error.
Cause: Your client is sending a GET request. The MCP protocol requires POST for method calls.
Fix: Ensure your MCP client configuration uses streamable-http transport (not http-sse):
{
"mcpServers": {
"verifimind-peas": {
"url": "https://verifimind.ysenseai.org/mcp/",
"transport": "streamable-http"
}
}
}
Mistake #3: Opening the URL in a Browser (406 Not Acceptable)
Symptom: You get a 406 Not Acceptable or see an error page in your browser.
Cause: verifimind.ysenseai.org is an MCP server API, not a website. It is designed to be accessed by MCP clients (Claude Desktop, Cursor, VS Code, etc.), not web browsers.
Fix: Use an MCP client to connect. If you want to browse the project, visit:
- Landing Page: verifimind.io
- GitHub: github.com/creator35lwb-web/VerifiMind-PEAS
HTTP Status Code Reference
| Status Code | Meaning | Solution |
|---|---|---|
| 302/307 | Redirect (normal) | Use https://verifimind.ysenseai.org/mcp/ with trailing slash |
| 400 | Bad Request | Verify JSON syntax, use POST (not GET), include Content-Type: application/json |
| 404 | Not Found | Check URL for typos; use the correct /mcp/ endpoint |
| 405 | Method Not Allowed | You are hitting / instead of /mcp/ — add the /mcp/ path |
| 406 | Not Acceptable | You are visiting the API URL in a browser — use an MCP client instead |
Quick connectivity test (the MCP endpoint expects POST, so an error status such as 400 still confirms the server is reachable):
curl https://verifimind.ysenseai.org/mcp/
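For a proper request, a client POSTs a JSON-RPC message. The sketch below builds the shape of an MCP `initialize` request; the protocol version and client name are illustrative assumptions, not values this server requires.

```python
import json

def initialize_request(request_id: int = 1) -> str:
    """Build a JSON-RPC 2.0 `initialize` request body, as an MCP client
    would POST to the /mcp/ endpoint with Content-Type: application/json."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "initialize",
        "params": {
            "protocolVersion": "2025-03-26",   # example MCP spec revision
            "capabilities": {},
            "clientInfo": {"name": "example-client", "version": "0.1"},
        },
    })

body = initialize_request()
# POST `body` to https://verifimind.ysenseai.org/mcp/ — GET is rejected (400).
```

In practice your MCP client (Claude Desktop, Cursor, VS Code) constructs this for you; the example only shows why a browser visit or bare curl GET fails.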
Full troubleshooting guide: MCP_Server_Troubleshooting_Guide.md
Operational Insights
Traffic analysis from GCP Cloud Run logs (Phase 47 Ground Truth, March 2026) provides the following operational baseline:
| Metric | Value | Notes |
|---|---|---|
| All-Time Engagement Hours | 2,250+ | Forensically verified (Phase 55 standard) |
| All-Time Users | 1,480+ | Unique users, bot sessions deduplicated |
| Value Confirmation Rate | 96.0% | Sessions with follow-up prompts (proof of value) |
| Top Client | Node.js MCP (65.3%) | Primary integration method |
| MCP Integration Rate | 43.1% Verified | 36.1% Automated, 20.8% Unclassified |
| Server Errors (5xx) | 0 | Zero server errors in production |
| COO Weekly Reports | 62 | Automated GCP log-based analytics |
| Monthly Cost | $0 | Within GCP free tier |
The server runs on GCP Cloud Run with zero minimum instances (cold start architecture) to maintain a $0/month operating cost. GCP Global Uptime Checks monitor the /health endpoint every 5 minutes with email alerts to the project maintainer. All monitoring features operate within GCP’s free tier.
🌍 Real-World Validation
87-Day Journey: YSenseAI™ + VerifiMind-PEAS
Creator: Alton Lee Wei Bin (creator35lwb)
Duration: 87 days (September - November 2025)
Projects: YSenseAI™ (AI attribution infrastructure) + VerifiMind-PEAS (validation methodology)
Challenges:
- Solo builder with non-tech background
- Multiple LLMs (Kimi, Claude, GPT, Gemini, Qwen, Grok)
- Hundreds of conversations across 87 days
- Complex technical and philosophical concepts
Results:
- ✅ YSenseAI™: Fully documented AI attribution infrastructure
- ✅ VerifiMind-PEAS: Complete methodology framework with white paper
- ✅ Defensive Publication: DOI 10.5281/zenodo.17645665
- ✅ Zero context loss: Genesis Master Prompts maintained continuity
Key Insights:
- Genesis Master Prompts scale: Started with 1 page, grew to 50+ pages
- Multi-model validation works: Different LLMs provided complementary perspectives
- Human-at-center is critical: AI provides perspectives, human synthesizes and decides
- Iteration is key: Continuous refinement through 87 days led to success
Read the full case study: YSenseAI™ 87-Day Journey
🤝 Community
Join the Discussion
- GitHub Discussions: Ask questions, share insights, collaborate
- Twitter/X: Follow updates and announcements
- Email: Direct contact for inquiries
How to Contribute
We welcome contributions from the community!
Ways to contribute:
- 📝 Share case studies: Document your experience using VerifiMind-PEAS
- 🌍 Translate documentation: Help make VerifiMind-PEAS accessible globally
- 💬 Answer questions: Help others in GitHub Discussions
- 🐛 Report issues: Identify unclear documentation or gaps
- 🎓 Create tutorials: Share your learning journey
Read more: Contributing Guidelines
🗺️ Roadmap
Current Phase: Phase 7 — Protocol Announcement & v0.6.0 (Q1–Q2 2026)
Status: Phases 1–6 COMPLETE ✅ | v0.5.6 DEPLOYED 🎉 | v0.6.0 IN DEVELOPMENT
North Star: Position VerifiMind-PEAS as the trust and verification layer for the emerging Agentic Web.
Phase 1–4: Foundation ✅ COMPLETE
Phases 1 through 4 established the methodology framework, MCP server implementation, production deployment on GCP Cloud Run, and multi-platform distribution across Smithery.ai, Hugging Face, and the Official MCP Registry.
Phase 5: Hardening & Standardization ✅ COMPLETE
Completed (January 2026):
- ✅ v0.3.0–v0.3.5: BYOK multi-provider (7 providers), smart fallback, rate limiting, input sanitization
- ✅ v0.4.0: Unified Prompt Templates (19 templates, 6 libraries, 6 new tools)
- ✅ CI/CD pipeline: GitHub Actions with unit tests, security scanning (Bandit, Safety)
- ✅ MACP v2.0: Multi-Agent Communication Protocol published (DOI: 10.5281/zenodo.18504478)
- ✅ L (GODEL) Ethical Operating Framework v1.1: Fairness, bias mitigation, update mechanism
- ✅ GCP Monitoring: Uptime checks, alerting, log analysis pipeline
Phase 6: Protocol-First & Ecosystem Alignment ✅ COMPLETE
Completed (February–March 2026):
- ✅ v0.5.0–v0.5.6: Foundation → Token Monitor → Genesis v4.2 → Sentinel → X Agent v4.3 → Gateway (EA Registration)
- ✅ v0.4.1–v0.4.5: Markdown-first output, BYOK Live, Multi-Model Trinity (full quality)
- ✅ Branch protection: Main branch ruleset with required PR reviews and CI checks
- ✅ CodeQL remediation: All 102 security alerts resolved across 4 waves
- ✅ Strategic pivot: MACP v2.0 repositioned as free protocol for adoption (not paid product)
- ✅ MACP v2.2 "Identity": 7th principle added — Identity Clarity (Alton ≠ L)
- ✅ Genesis v4.2 "Sentinel-Verified": Forced citation patterns, L Blind Test 11/11
- ✅ Landing page: verifimind.io LIVE with protocol-first messaging
- ✅ MACP Research Assistant: macpresearch.ysenseai.org — showcase proving MACP works
- ✅ 8-Skill Composition Stack: Complete Manus AI skill ecosystem including ai-council
- ✅ AI Council Skill v1.0: Genesis Methodology multi-model validation in one command
- ✅ v0.5.6 Gateway: EA Registration, Privacy Policy v1.0, T&C v1.0, Opt-Out System
- ✅ DFSC 2026: Digital Freelancer Startup Competition campaign on Mystartr
- ✅ 99+ PRs merged: Healthiest repo state
Phase 7: Protocol Announcement & v0.6.0 🔄 CURRENT
In Progress (March 2026):
- ⏳ v0.6.0 "Protocol": Service Charter, early adopter program design
- ⏳ MACP Skill Kit: Generic, cross-platform skills for multi-agent teams
- ⏳ AI Council Skill: One-command Genesis Methodology validation (v1.0 tested)
- ⏳ JOSS submission preparation: Peer-reviewed publication for methodology
- ⏳ Community building: Protocol documentation and adoption channels
Future Phases 📋 PLANNED
| Version | Codename | Theme | Timeline |
|---|---|---|---|
| v0.6.0 | "Protocol" | Service Charter, early adopter program, MACP Skill Kit | Q1–Q2 2026 |
| v0.7.0 | "Commerce" | First paid tier (data-driven pricing) | May–Jun 2026 |
| v0.8.0 | "Scale" | Enterprise + LegacyEvolve integration | Q3 2026 |
| v0.9.0 | "Community" | Ecosystem + skill marketplace | Q4 2026 |
| v1.0.0 | "Genesis" | Self-sustaining platform | Q1 2027 |
Key Metrics:
| Metric | Value | Significance |
|---|---|---|
| MCP Tools | 10 | 4 core + 6 template management |
| Templates | 19 | Pre-built across 6 libraries |
| Validation Reports | 57+ | Proof of methodology at scale |
| Platforms Live | 4 | GCP, MCP Registry, HuggingFace, verifimind.io |
| LLM Providers | 7 | Gemini, OpenAI, Anthropic, Groq, Mistral, Ollama, Perplexity |
| All-Time Users | 1,480+ | Cumulative unique users (Phase 55) |
| Manus AI Skills | 8 | Complete skill composition stack including ai-council |
| Cost per Validation | ~$0.003 | Sustainable for all developers |
See Examples: /validation_archive/ | Examples
Read more: Roadmap | v0.5.0 Agent Skills Specification
🧩 8-Skill Composition Stack (Manus AI Ecosystem)
VerifiMind-PEAS is supported by an 8-skill composition stack — a layered ecosystem of Manus AI skills that work together to enable multi-agent collaboration, protocol-driven communication, and self-recursive validation.
| Layer | Skill | Role | Status |
|---|---|---|---|
| 6 | ai-council | Genesis Methodology multi-model validation (Y+X+Z+CS) | ✅ Active |
| 5 | ysenseai-flywheel-team | Ecosystem orchestration & AI Council validation | ✅ Active |
| 4 | macp-protocol-v2 | Primary differentiator — free protocol driving adoption | ✅ Active |
| 3 | multi-agent-handoff-bridge | Artifact delivery between sandbox and local agents | ✅ Active |
| 2 | internet-skill-finder + github-gem-seeker | Discovery layer for skills and solutions | ✅ Active |
| 1 | macp-research-assistant | Showcase — proves MACP v2.2 works in production | ✅ Active |
| 0 | skill-creator | Foundation — creates and updates all other skills | ✅ Active |
Protocol Landscape Positioning
The multi-agent protocol landscape now has 4 major protocols. MACP v2.2 occupies a unique gap none of them address:
| Protocol | Maintainer | Focus | MACP v2.2 Relationship |
|---|---|---|---|
| MCP | Anthropic / Linux Foundation | Vertical: AI ↔ Tools | MACP uses MCP as transport layer |
| A2A | Google | Horizontal: Agent ↔ Agent (autonomous) | Complementary — A2A lacks human orchestration |
| ACP | IBM (ARCHIVED) | Enterprise agent communication (merged into A2A, Aug 2025) | N/A — absorbed into A2A |
| ANP | Community | Agent identity & trust | Complementary — ANP handles identity, MACP handles workflow |
| MACP v2.2 | YSenseAI™ | Human-orchestrated multi-agent coordination | Unique: Git-based, human-at-center, platform-agnostic |
MACP v2.2 occupies Layer 6 of the protocol stack — persistent human-orchestrated governance. No other protocol (MCP, A2A, ACP, ANP, AG-UI, A2H) addresses persistent handoff records, human orchestrator identity, or git-native audit trails.
MACP v2.2 is free forever. The protocol drives adoption. Revenue comes from hosted orchestration services (v0.7.0+), following the HTTP/AWS, Git/GitHub model.
📖 Defensive Publication
Prior Art Established
VerifiMind-PEAS establishes prior art for the Genesis Prompt Engineering methodology and prevents others from patenting this approach to multi-model AI validation.
Published: November 19, 2025
DOI: 10.5281/zenodo.17645665
License: MIT License
Core Innovations:
- Genesis Methodology: Systematic 5-step multi-model validation process
- X-Z-CS RefleXion Trinity: Specialized AI agents (Innovation, Ethics, Security)
- Genesis Master Prompts: Stateful memory system for project continuity
- Human-at-Center Orchestration: Human as orchestrator (not reviewer)
Evidence of Prior Use:
- YSenseAI™: AI-powered attribution infrastructure (87-day development)
- VerifiMind-PEAS: Multi-model validation methodology framework
- Concept Scrutinizer (概念审思者): Socratic validation framework
Read more: Zenodo Publication Guide
📚 How to Cite
Citing VerifiMind-PEAS v0.5.0 (MCP Server)
If you use the VerifiMind-PEAS MCP server in your research or project, please cite:
APA Style:
Lee, A., Manus AI, & Claude Code. (2026). VerifiMind-PEAS: Prompt Engineering Attribution System (Version 0.5.0) [Computer software]. GitHub. https://github.com/creator35lwb-web/VerifiMind-PEAS
BibTeX:
@software{verifimind_peas_v050_2026,
author = {Lee, Alton and {Manus AI} and {Claude Code}},
title = {VerifiMind-PEAS: Prompt Engineering Attribution System},
year = {2026},
version = {0.5.0},
url = {https://github.com/creator35lwb-web/VerifiMind-PEAS},
doi = {10.5281/zenodo.17980791},
note = {MCP server for multi-model AI validation with Unified Prompt Templates}
}
IEEE Style:
A. Lee, Manus AI, and Claude Code, "VerifiMind-PEAS: Prompt Engineering Attribution System," Version 0.5.0, GitHub, 2026. [Online]. Available: https://github.com/creator35lwb-web/VerifiMind-PEAS
Citing Genesis Methodology v2.0 (Methodology)
If you use or reference the Genesis Prompt Engineering Methodology, please cite:
APA Style:
Lee, A., & Manus AI. (2025). Genesis Prompt Engineering Methodology v2.0: Multi-Agent AI Validation Framework (Version 2.0.0) [Methodology]. Zenodo. https://doi.org/10.5281/zenodo.17972751
BibTeX:
@misc{genesis_v2_2025,
author = {Lee, Alton and {Manus AI}},
title = {Genesis Prompt Engineering Methodology v2.0: Multi-Agent AI Validation Framework},
year = {2025},
version = {2.0.0},
url = {https://doi.org/10.5281/zenodo.17972751},
doi = {10.5281/zenodo.17972751},
note = {Validated through 87-day production development, 21,356 words}
}
IEEE Style:
A. Lee and Manus AI, "Genesis Prompt Engineering Methodology v2.0: Multi-Agent AI Validation Framework," Version 2.0.0, Zenodo, 2025. [Online]. Available: https://doi.org/10.5281/zenodo.17972751
GitHub Citation
GitHub provides automatic citation support. Click the "Cite this repository" button on the repository page to get formatted citations in APA and BibTeX formats.
DOI Badges
Note: DOI badges will be updated after Zenodo registration is complete.
Release Information
VerifiMind-PEAS MCP Server v0.5.6 (Current):
- Release Date: March 23, 2026
- Highlights: Gateway — EA Registration, Privacy Policy v1.0, T&C v1.0, Phase 55 metrics, 290 tests
- Changelog: CHANGELOG.md
- Status: Production deployed on GCP Cloud Run
VerifiMind-PEAS v1.1.0 (Methodology):
- Release Date: December 18, 2025
- Tag: verifimind-v1.1.0
- Release Notes: RELEASE_NOTES_V1.1.0.md
- Status: Production-ready, deployed on GCP Cloud Run
Genesis Methodology v2.0:
- Release Date: December 18, 2025
- Tag: genesis-v2.0
- Release Notes: RELEASE_NOTES_GENESIS_V2.0.md
- Status: Production-validated through 87-day development journey
📜 License
Open Source License (MIT)
VerifiMind-PEAS is released under the MIT License for personal, educational, and open-source use.
Copyright (c) 2025-2026 Alton Lee Wei Bin (creator35lwb)
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
Commercial License
For enterprises requiring additional features, support, and legal protections, we offer commercial licensing options:
- 🏢 Enterprise Deployment: Production environments with SLA requirements
- 🔒 Proprietary Extensions: Building proprietary features on top of the framework
- 📞 Priority Support: Dedicated support channels and guaranteed response times
- 🛡️ Indemnification: Legal protection and IP indemnification
- 📊 Compliance: Audit trails and compliance reports for regulated industries
Read more: Commercial License
™️ Trademark Notice
The following are trademarks of Alton Lee:
- VerifiMind™ - Primary brand
- Genesis Methodology™ - Validation methodology
- RefleXion Trinity™ - X-Z-CS agent architecture
Usage Guidelines:
- ✅ Use freely for personal and educational purposes
- ✅ Reference in documentation and discussions
- ❌ Do not use in product names without permission
- ❌ Do not imply official endorsement without permission
Forks and derivatives may use the open-source code under MIT license, but must use different branding.
📞 Contact
General Inquiries: creator35lwb@gmail.com
Twitter/X: @creator35lwb
GitHub Discussions: Join discussions
MCP Server: verifimind.ysenseai.org (LIVE — v0.5.6)
Early Adopter Registration: verifimind.ysenseai.org/register
DFSC 2026: rewards.mystartr.com/projects/verifimind
Landing Page: verifimind.io
🙏 Acknowledgments
FLYWHEEL TEAM
VerifiMind-PEAS is developed through the FLYWHEEL TEAM multi-agent collaboration protocol:
| Agent | Role | Contribution |
|---|---|---|
| Alton Lee | Human Orchestrator & Founder | Vision, strategy, final decisions — absolute authority |
| L (GODEL) | AI-Generated Strategic Entity | Strategic analysis under Alton's delegated authority |
| Manus AI (T/CTO) | Strategic Architecture | Documentation, roadmap, ecosystem alignment, skills, landing page, AI Council |
| Claude Code (RNA) | Implementation Lead | Code, testing, deployment, CI/CD, MACP Research Assistant |
| Gemini (AY/Antigravity) | COO / Operations | GCP log analysis, weekly reports, monitoring, metrics |
| Perplexity | Real-Time Research | Market intelligence, competitive analysis, AI Council (X + CS) |
LLM Providers
- Google Gemini: Default FREE provider for innovation analysis and GCP operations
- Anthropic Claude: Ethics and safety validation, code implementation
- OpenAI GPT-4: Technical analysis and structured output
- Moonshot AI Kimi: Innovation and creative insights
- xAI Grok: Alternative perspectives and validation
- Alibaba Qwen: Multilingual support
- Groq / Mistral / Perplexity / Ollama: BYOK multi-provider support
Special Thanks
- Open-source community: For inspiration and collaboration
- Early adopters: For feedback and validation (1,480+ users and counting)
- Academic researchers: For theoretical foundations
- PHAWM (Participatory Harm Auditing Workbenches and Methodologies): For published research referenced in our harm auditing methodology
- Google Cloud Platform: For generous free tier enabling $0/month operations
<div align="center">
Transform your vision into validated, ethical, secure applications.
Start with the Genesis Master Prompt Guide today! 🚀
Last Updated: March 23, 2026 | Version: v0.5.6 "Gateway" | MACP: v2.2 "Identity"
</div>