# Pincer MCP 🦀
<p align="center">
  <picture>
    <source media="(prefers-color-scheme: light)" srcset="https://raw.githubusercontent.com/VouchlyAI/Pincer-MCP/refs/heads/main/mascot.png">
    <img src="https://raw.githubusercontent.com/VouchlyAI/Pincer-MCP/refs/heads/main/mascot.png" alt="Pincer-MCP" width="500">
  </picture>
</p>

Pincer-MCP is a security-hardened Model Context Protocol (MCP) gateway that eliminates the "Lethal Trifecta" vulnerability in agentic AI systems. By acting as a stateless intermediary, Pincer ensures agents never see your real API keys.
## 🔒 The Problem
Current AI agents store long-lived API keys in plain-text `.env` files or local databases. If compromised via prompt injection or host intrusion, attackers gain direct access to your:
- Database passwords
- Third-party API keys
## ✨ The Solution: Proxy Token Architecture

Pincer implements a "blindfold" security model:

- Agent knows: only a unique proxy token (`pxr_abc123...`)
- Pincer knows: the mapping of proxy tokens → real API keys (encrypted in the OS keychain)
- Agent never sees: the actual credentials
```mermaid
sequenceDiagram
    participant Agent
    participant Pincer
    participant Vault as Vault (OS Keychain)
    participant API as External API

    Agent->>Pincer: tools/call + proxy_token: pxr_abc123
    Pincer->>Vault: Decrypt real API key
    Vault-->>Pincer: gemini_api_key: AIzaSy...
    Pincer->>API: API call with real key
    API-->>Pincer: Response
    Pincer->>Pincer: Scrub key from memory
    Pincer-->>Agent: Response (no credentials)
```
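As a rough sketch of how such an opaque token might be minted (the `pxr_` prefix comes from the examples in this README; the `mintProxyToken` helper and the exact token length are assumptions for illustration, not Pincer's actual implementation):

```typescript
import { randomBytes } from "node:crypto";

// Hypothetical sketch: mint an opaque proxy token for an agent.
// The token encodes nothing about the real key; it is only a lookup
// handle that the gateway resolves inside its vault.
function mintProxyToken(): string {
  // 16 random bytes -> 22-character base64url string, prefixed for auditing
  return "pxr_" + randomBytes(16).toString("base64url");
}

const token = mintProxyToken();
console.log(token.length); // 26: "pxr_" plus 22 base64url characters
```

Because the token is purely random, a leaked token can be revoked and reissued without rotating the underlying API key.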
## 📦 Available Tools

- `gemini_generate`: Secure Google Gemini API calls.
- `openai_chat`: Chat completions with OpenAI GPT models (gpt-4o, gpt-4-turbo, gpt-3.5-turbo, etc.).
- `openai_list_models`: List all available OpenAI models.
- `openai_compatible_chat`: Chat completions with any OpenAI-compatible API (Azure OpenAI, Ollama, vLLM, etc.).
- `openai_compatible_list_models`: List models from custom OpenAI-compatible endpoints.
- `claude_chat`: Chat completions with Anthropic Claude models (Claude 3.5 Sonnet, Opus, Haiku).
- `openrouter_chat`: Unified API access to 100+ models from multiple providers (OpenAI, Anthropic, Google, Meta, etc.).
- `openrouter_list_models`: List all available models across OpenRouter providers.
- `openwebui_chat`: OpenAI-compatible interface for self-hosted LLMs.
- `openwebui_list_models`: Discover available models on an OpenWebUI instance.
- `gpg_sign_data`: Sign data or files using a GPG/PGP private key stored in Pincer's vault (keyless execution: the agent never sees the key).
- `gpg_decrypt`: Decrypt PGP-encrypted data using a vault-stored private key.
## 🔑 GPG Key Management

```bash
# Generate a new GPG keypair (private key stored in vault)
pincer key generate --name "Release Signing" --email dev@example.com

# Import an existing PGP private key
pincer key import ./my-key.asc --passphrase "my-passphrase"

# List all stored GPG keys
pincer key list

# Export public key (safe to share)
pincer key export <key-id>

# Authorize an agent for signing
pincer agent authorize mybot gpg_sign_data --key <key-id>
```
(More callers coming soon!)
## 🚀 Quick Start

### Prerequisites
- Node.js 18+
- macOS, Windows, or Linux with native keychain support
### Installation

#### Option 1: Global Installation (Recommended)

```bash
npm install -g pincer-mcp
# The 'pincer' command is now available system-wide
```
#### Option 2: Local Development

```bash
git clone https://github.com/VouchlyAI/Pincer-MCP.git
cd Pincer-MCP
npm install
npm run build
npm link  # Makes the 'pincer' command available locally
```
### Setup Vault

```bash
# 1. Initialize the vault (creates a master key in the OS keychain)
pincer init

# 2. Store your real API keys (encrypted)
pincer set gemini_api_key "AIzaSyDpxPq..."
pincer set openai_api_key "sk-proj-..."

# 3. Register an agent and generate a proxy token
pincer agent add openclaw
# Output: 🎫 Proxy Token: pxr_V1StGXR8_Z5jdHi6B-myT

# 4. Authorize the agent for specific tools
pincer agent authorize openclaw gemini_generate
```
### Multi-Key Support
Store multiple keys for the same tool and assign them to different agents:
```bash
# Store two different Gemini API keys
pincer set gemini_api_key "AIzaSy_KEY_FOR_CLAWDBOT..." --label key1
pincer set gemini_api_key "AIzaSy_KEY_FOR_MYBOT..." --label key2

# View all stored keys
pincer list

# Assign specific keys to each agent
pincer agent add clawdbot
pincer agent authorize clawdbot gemini_generate --key key1
pincer agent add mybot
pincer agent authorize mybot gemini_generate --key key2

# View agent permissions
pincer agent list
```
Result: `clawdbot` uses `key1` and `mybot` uses `key2` - perfect for per-agent rate limiting or cost tracking!
### Run the Server

```bash
npm run dev
```
### Configure Your Agent
Give your agent the proxy token (not the real API key):
```bash
export PINCER_PROXY_TOKEN="pxr_V1StGXR8_Z5jdHi6B-myT"
```
### Tool-to-Secret Name Mappings
When storing secrets, you must use the correct secret name for each tool. See the Tool Mappings Guide for a complete reference.
When you run `pincer agent authorize myagent gemini_generate`, Pincer will inject the `gemini_api_key` secret whenever that tool is called.
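That mapping can be pictured as a simple lookup table. In this sketch, only the `gemini_generate` → `gemini_api_key` and `openai_chat` → `openai_api_key` pairs come from this README; the `resolveSecretName` helper and any other entries are illustrative assumptions:

```typescript
// Illustrative tool -> secret-name table. Pincer's real mapping lives
// in its Tool Mappings Guide; this sketch shows the lookup shape only.
const TOOL_SECRET_MAP: Record<string, string> = {
  gemini_generate: "gemini_api_key",
  openai_chat: "openai_api_key",
};

// Resolve which vault secret a tool needs, failing loudly for
// unmapped tools so a misconfigured agent cannot call anything.
function resolveSecretName(tool: string): string {
  const secret = TOOL_SECRET_MAP[tool];
  if (!secret) throw new Error(`No secret mapping for tool: ${tool}`);
  return secret;
}

console.log(resolveSecretName("gemini_generate")); // gemini_api_key
```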
### Make a Tool Call
Your agent sends requests with the proxy token in the body:
```json
{
  "jsonrpc": "2.0",
  "method": "tools/call",
  "params": {
    "name": "gemini_generate",
    "arguments": {
      "prompt": "Hello world",
      "model": "gemini-2.0-flash"
    },
    "_meta": {
      "pincer_token": "pxr_V1StGXR8_Z5jdHi6B-myT"
    }
  }
}
```
Pincer maps the proxy token to the real API key and executes the call securely.
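For illustration, an agent-side client could assemble that request like this. The `buildToolCall` helper is an assumption, the token is read from the `PINCER_PROXY_TOKEN` environment variable shown earlier, and note that a strict JSON-RPC 2.0 call would also carry an `id` field:

```typescript
// Build the tools/call request shown above. The proxy token comes
// from the environment, so no real API key ever appears in agent code.
function buildToolCall(tool: string, args: Record<string, unknown>) {
  return {
    jsonrpc: "2.0" as const,
    method: "tools/call",
    params: {
      name: tool,
      arguments: args,
      _meta: { pincer_token: process.env.PINCER_PROXY_TOKEN ?? "" },
    },
  };
}

const req = buildToolCall("gemini_generate", {
  prompt: "Hello world",
  model: "gemini-2.0-flash",
});
console.log(JSON.stringify(req, null, 2));
```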
## 🏗️ Architecture

### Two-Tiered Vault System

**Tier 1: Master Key (OS Keychain)**

- Stored in the macOS Keychain, Windows Credential Manager, or GNOME Keyring
- Never touches the filesystem
- Accessed only for encryption/decryption
**Tier 2: Encrypted Store (SQLite)**

- Database at `~/.pincer/vault.db`
- Three tables:
  - `secrets`: real API keys (AES-256-GCM encrypted)
  - `proxy_tokens`: proxy token → agent ID mappings
  - `agent_mappings`: agent ID → tool authorizations
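The Tier 2 encryption can be sketched with Node's built-in `crypto` module. This is an illustrative AES-256-GCM envelope, not Pincer's actual storage format; in Pincer the 32-byte master key would come from the OS keychain, whereas here it is generated inline so the example runs standalone:

```typescript
import { createCipheriv, createDecipheriv, randomBytes } from "node:crypto";

// Assumption for the sketch: in the real system this key would be
// fetched from the OS keychain (Tier 1), never generated on the fly.
const masterKey = randomBytes(32);

function encryptSecret(plaintext: string): Buffer {
  const iv = randomBytes(12); // 96-bit nonce, standard for GCM
  const cipher = createCipheriv("aes-256-gcm", masterKey, iv);
  const ct = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  // Store iv + auth tag + ciphertext as one blob for the database row
  return Buffer.concat([iv, cipher.getAuthTag(), ct]);
}

function decryptSecret(blob: Buffer): string {
  const iv = blob.subarray(0, 12);
  const tag = blob.subarray(12, 28);
  const ct = blob.subarray(28);
  const decipher = createDecipheriv("aes-256-gcm", masterKey, iv);
  decipher.setAuthTag(tag); // GCM rejects tampered ciphertext on final()
  return Buffer.concat([decipher.update(ct), decipher.final()]).toString("utf8");
}

const blob = encryptSecret("AIzaSy-example-key");
console.log(decryptSecret(blob)); // AIzaSy-example-key
```

GCM's authentication tag means a modified database row fails to decrypt rather than silently yielding garbage.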
### Authentication Flow

```
Request (_meta.pincer_token: pxr_xxx)
        ↓
Gatekeeper: Extract proxy token from body
        ↓
Vault: Resolve pxr_xxx → agent_id → tool_name → real_api_key
        ↓
Injector: JIT decrypt & inject real key
        ↓
Caller: Execute external API call
        ↓
Scrubber: Overwrite key in memory with zeros
        ↓
Audit: Log to tamper-evident chain
```
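The Scrubber step relies on Node `Buffer`s being mutable, unlike JavaScript strings, which cannot be zeroed in place. A minimal sketch; the `withScrubbedKey` helper is an assumption for illustration, not Pincer's API:

```typescript
// Run a callback with a decrypted key, then overwrite the key bytes
// so they cannot be recovered from a later heap dump.
function withScrubbedKey<T>(key: Buffer, use: (k: Buffer) => T): T {
  try {
    return use(key);
  } finally {
    key.fill(0); // zero every byte, even if the callback threw
  }
}

const key = Buffer.from("sk-proj-example");
withScrubbedKey(key, (k) => k.length); // the external call would happen here
console.log(key.every((b) => b === 0)); // true: key bytes are gone
```

This is also why a secrets-handling design keeps keys in `Buffer`s end to end; converting to a string creates an immutable copy the scrubber can never reach.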
## 🔐 Security & Compliance
Pincer is built for enterprise-grade security:
- Hardware-Backed Cryptography: Master encryption keys never leave the OS-native keychain.
- Proxy Token Isolation: Agents only handle ephemeral `pxr_` tokens; they never touch real credentials.
- JIT Decryption: Secrets are decrypted only for the duration of the API call.
- Zero-Footprint Memory: Sensitive data is scrubbed (zeroed out) from memory immediately after use.
- Fine-Grained Authorization: Strict per-agent, per-tool access control policies.
- Tamper-Evident Audit Log: Append-only tool call history with SHA-256 chain-hashing.
- Hardened Execution: Schema validation on all inputs and protected environment execution.
- Stdio Compatible: Fully compatible with the standard Model Context Protocol transport.
## 🔍 Audit Logs

Every tool call is logged to `~/.pincer/audit.jsonl` with both UTC and local timestamps, plus character counts and estimated token usage:
```json
{
  "agentId": "openclaw",
  "tool": "gemini_generate",
  "duration": 234,
  "status": "success",
  "input_chars": 156,
  "output_chars": 423,
  "estimated_input_tokens": 39,
  "estimated_output_tokens": 106,
  "timestamp_utc": "2026-02-05T08:32:00.000Z",
  "timestamp_local": "2/5/2026, 2:02:45 PM",
  "chainHash": "a1b2c3d4e5f6g7h8",
  "prevHash": "0000000000000000"
}
```
Token Estimation: Pincer automatically estimates token usage using a 4:1 character-to-token ratio (~4 characters per token average). This provides consistent cost tracking across all AI providers without relying on provider-specific APIs.
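That heuristic is a one-liner; the `estimateTokens` name here is illustrative:

```typescript
// 4:1 character-to-token heuristic: ~4 characters per token on average.
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

// 156 input characters -> 39 estimated tokens, matching the sample
// audit entry's input_chars / estimated_input_tokens fields.
console.log(estimateTokens("a".repeat(156))); // 39
```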
Chain hashes provide tamper detection - any modification breaks the SHA-256 chain.
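A minimal sketch of how such a chain could be verified. The assumption here is that each entry's `chainHash` is a SHA-256 over the previous hash plus the entry payload, truncated to 16 hex characters to match the sample entry; the actual log format and hash inputs are Pincer internals:

```typescript
import { createHash } from "node:crypto";

// Each entry's hash covers the previous hash, so editing any entry
// breaks every later link in the chain.
interface AuditEntry { prevHash: string; chainHash: string; payload: string }

function chainHash(prevHash: string, payload: string): string {
  return createHash("sha256").update(prevHash + payload).digest("hex").slice(0, 16);
}

function verifyChain(entries: AuditEntry[]): boolean {
  return entries.every((e) => e.chainHash === chainHash(e.prevHash, e.payload));
}

const e1 = { prevHash: "0".repeat(16), payload: "call-1", chainHash: "" };
e1.chainHash = chainHash(e1.prevHash, e1.payload);
const e2 = { prevHash: e1.chainHash, payload: "call-2", chainHash: "" };
e2.chainHash = chainHash(e2.prevHash, e2.payload);

console.log(verifyChain([e1, e2])); // true
e1.payload = "tampered";
console.log(verifyChain([e1, e2])); // false
```

A full verifier would also check that each entry's `prevHash` equals the previous entry's `chainHash`, so that reordering or deleting entries is caught as well as editing them.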
## 🧪 Development
```bash
# Install dependencies
npm install
# Run tests
npm test
# Run with watch mode
npm run dev
# Build for production
npm run build
```
## 📚 Documentation
- Setup Guide - Getting started with Pincer-MCP
- IDE Integration - Use Pincer with VSCode, Claude Desktop, Cursor, and more
- OpenClaw Integration - Integrate Pincer with OpenClaw agents
- Testing Guide - Comprehensive test suite documentation
- Capabilities Reference - Full API and feature documentation
- Security Policy - Vulnerability reporting and security best practices
- CHANGELOG - Version history and release notes
## 🤝 Contributing
Contributions are welcome! Please see CONTRIBUTING.md for guidelines.
## 📄 License
BSL 1.1 (Business Source License) — See LICENSE for details. Converts to Apache 2.0 on 2028-04-01.
## 🙏 Acknowledgments

- Model Context Protocol - the standard for AI tool integration.
- keytar - Secure cross-platform keychain access.
- better-sqlite3 - High-performance local persistence.
Built with ❤️ for a more secure AI future.