io.github.x51xxx/codex-mcp-tool
Coding & Debugging · by x51xxx
An MCP server that connects AI assistants to the OpenAI Codex CLI for code analysis and review.
What is io.github.x51xxx/codex-mcp-tool?
An MCP server that connects AI assistants to the OpenAI Codex CLI for code analysis and review.
README
Codex MCP Server
MCP server connecting Claude/Cursor to Codex CLI. Enables code analysis via `@` file references, multi-turn conversations, sandboxed edits, and structured change mode.
Features
- File Analysis — Reference files with `@src/`, `@package.json` syntax
- Multi-Turn Sessions — Conversation continuity with workspace isolation
- Native Resume — Uses `codex resume` for context preservation (CLI v0.36.0+)
- Local OSS Models — Run with Ollama or LM Studio via `localProvider`
- Web Search — Research capabilities with `search: true`
- Sandbox Mode — Safe code execution with `--full-auto`
- Change Mode — Structured OLD/NEW patch output for refactoring
- Brainstorming — SCAMPER, design-thinking, lateral thinking frameworks
- Health Diagnostics — CLI version, features, and session monitoring
- Cross-Platform — Windows, macOS, Linux fully supported
Quick Start
claude mcp add codex-cli -- npx -y @trishchuk/codex-mcp-tool
Prerequisites: Node.js 18+, Codex CLI installed and authenticated.
Configuration
{
  "mcpServers": {
    "codex-cli": {
      "command": "npx",
      "args": ["-y", "@trishchuk/codex-mcp-tool"]
    }
  }
}
Config locations: macOS: ~/Library/Application Support/Claude/claude_desktop_config.json | Windows: %APPDATA%\Claude\claude_desktop_config.json
Usage Examples
// File analysis
'explain the architecture of @src/';
'analyze @package.json and list dependencies';
// With specific model
'use codex with model gpt-5.4 to analyze @algorithm.py';
// Multi-turn conversations (v1.4.0+)
'ask codex sessionId:"my-project" prompt:"explain @src/"';
'ask codex sessionId:"my-project" prompt:"now add error handling"';
// Brainstorming
'brainstorm ways to optimize CI/CD using SCAMPER method';
// Sandbox mode
'use codex sandbox:true to create and run a Python script';
// Web search
'ask codex search:true prompt:"latest TypeScript 5.7 features"';
// Local OSS model (Ollama)
'ask codex localProvider:"ollama" model:"qwen3:8b" prompt:"explain @src/"';
Tools
| Tool | Description |
|---|---|
| `ask-codex` | Execute Codex CLI with file analysis, models, sessions |
| `brainstorm` | Generate ideas with SCAMPER, design-thinking, etc. |
| `list-sessions` | View/delete/clear conversation sessions |
| `health` | Diagnose CLI installation, version, features |
| `ping` / `help` | Test connection, show CLI help |
Models
Default: gpt-5.4 with fallback → gpt-5.3-codex → gpt-5.2-codex → gpt-5.1-codex-max → gpt-5.2
| Model | Use Case |
|---|---|
| `gpt-5.4` | Latest frontier agentic coding (default) |
| `gpt-5.3-codex` | Frontier agentic coding |
| `gpt-5.2-codex` | Frontier agentic coding |
| `gpt-5.1-codex-max` | Deep and fast reasoning |
| `gpt-5.1-codex-mini` | Cost-efficient quick tasks |
| `gpt-5.2` | Broad knowledge, reasoning and coding |
Key Features
Session Management (v1.4.0+)
Multi-turn conversations with workspace isolation:
{ "prompt": "analyze code", "sessionId": "my-session" }
{ "prompt": "continue from here", "sessionId": "my-session" }
{ "prompt": "start fresh", "sessionId": "my-session", "resetSession": true }
Environment:
- `CODEX_SESSION_TTL_MS` — Session TTL (default: 24h)
- `CODEX_MAX_SESSIONS` — Max sessions (default: 50)
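These variables can be supplied through the MCP server entry itself, since MCP client configs accept an `env` map per server. A sketch of a `claude_desktop_config.json` entry (the values shown are illustrative; 86400000 ms is the 24h default):

```json
{
  "mcpServers": {
    "codex-cli": {
      "command": "npx",
      "args": ["-y", "@trishchuk/codex-mcp-tool"],
      "env": {
        "CODEX_SESSION_TTL_MS": "86400000",
        "CODEX_MAX_SESSIONS": "50"
      }
    }
  }
}
```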
Local OSS Models (v1.6.0+)
Run with local Ollama or LM Studio instead of OpenAI:
// Ollama
{ "prompt": "analyze @src/", "localProvider": "ollama", "model": "qwen3:8b" }
// LM Studio
{ "prompt": "analyze @src/", "localProvider": "lmstudio", "model": "my-model" }
// Auto-select provider
{ "prompt": "analyze @src/", "oss": true }
Requirements: Ollama running locally with a model that supports tool calling (e.g. qwen3:8b).
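Before pointing the tool at a local provider, it can help to confirm that Ollama is running and the model is pulled. A minimal pre-flight sketch (not part of this tool; it assumes Ollama's default port 11434 and its `/api/tags` endpoint, which lists installed models):

```python
import json
import urllib.error
import urllib.request

def ollama_models(host="http://localhost:11434", timeout=2.0):
    """List locally installed Ollama model names, or [] if the server is unreachable."""
    try:
        with urllib.request.urlopen(f"{host}/api/tags", timeout=timeout) as resp:
            data = json.load(resp)
        return [m.get("name", "") for m in data.get("models", [])]
    except (OSError, ValueError):
        # Covers connection refused, timeouts, and malformed responses.
        return []

# Usage: verify the model named in model:"qwen3:8b" is actually present,
# e.g. warn the user to run `ollama pull qwen3:8b` first.
```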
Advanced Options
| Parameter | Description |
|---|---|
| `model` | Model selection |
| `sessionId` | Enable conversation continuity |
| `sandbox` | Enable `--full-auto` mode |
| `search` | Enable web search |
| `changeMode` | Structured OLD/NEW edits |
| `addDirs` | Additional writable directories |
| `toolOutputTokenLimit` | Cap response verbosity (100–10,000) |
| `reasoningEffort` | Reasoning depth: low, medium, high, xhigh |
| `oss` | Use local OSS model provider |
| `localProvider` | Local provider: `lmstudio` or `ollama` |
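Several of these parameters compose in a single `ask-codex` call. An illustrative (not exhaustive) argument object combining a cheaper model, sandboxed execution, structured edits, and a response cap:

```json
{
  "prompt": "refactor @src/utils.ts for readability",
  "model": "gpt-5.1-codex-mini",
  "sandbox": true,
  "changeMode": true,
  "toolOutputTokenLimit": 5000,
  "reasoningEffort": "medium"
}
```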
CLI Compatibility
| Version | Features |
|---|---|
| v0.60.0+ | GPT-5.2 model family |
| v0.59.0+ | --add-dir, token limits |
| v0.52.0+ | Native --search flag |
| v0.36.0+ | Native codex resume (sessions) |
Troubleshooting
codex --version # Check CLI version
codex login # Authenticate
Use the `health` tool for diagnostics: 'use health verbose:true'
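The version gates in the compatibility table above can also be checked programmatically. A sketch (the helper names and feature table here are illustrative, not part of the tool's API), assuming `codex --version` prints a semver-like string:

```python
import re

# Minimum CLI versions taken from the compatibility table above.
FEATURE_MINIMUMS = {
    "sessions (codex resume)": (0, 36, 0),
    "--search flag": (0, 52, 0),
    "--add-dir / token limits": (0, 59, 0),
    "GPT-5.2 model family": (0, 60, 0),
}

def parse_version(text):
    """Extract a (major, minor, patch) tuple from output like 'codex-cli 0.61.0'."""
    m = re.search(r"(\d+)\.(\d+)\.(\d+)", text)
    if not m:
        raise ValueError(f"no version found in {text!r}")
    return tuple(int(p) for p in m.groups())

def supported_features(version_output):
    """Return the features this CLI version should support, per the table above."""
    v = parse_version(version_output)
    return [name for name, floor in FEATURE_MINIMUMS.items() if v >= floor]
```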
Migration
v2.0.x → v2.1.0: gpt-5.4 as new default model, updated fallback chain.
v1.5.x → v1.6.0: Local OSS model support (localProvider, oss), gpt-5.3-codex default model, xhigh reasoning effort.
v1.3.x → v1.4.0: New sessionId parameter, list-sessions/health tools, structured error handling. No breaking changes.
License
MIT License. Not affiliated with OpenAI.
Documentation | Issues | Inspired by jamubc/gemini-mcp-tool