Cortex MCP
Multi-level reasoning MCP server with configurable depth levels, session-based state management, structured thought input, and real-time trace resources.
Overview
Cortex MCP is a stdio-only MCP server for stateful, depth-controlled reasoning. The runtime entrypoint in src/index.ts connects createServer() to StdioServerTransport, and the server surface in src/server.ts enables tools, prompts, completions, logging, and subscribable resources around a single session-based reasoning engine.
The live MCP surface, as confirmed with the MCP Inspector, is 1 tool, 6 concrete resources, 4 resource templates, and 7 prompts. Sessions are stored in memory, exposed as MCP resources, and cleared on process restart.
Key Features
- reasoning_think supports step-by-step sessions, run_to_completion batches, rollback, early conclusion, and structured observation/hypothesis/evaluation input.
- Four depth levels are built into the engine: basic, normal, high, and expert, each with bounded thought ranges and token budgets.
- Prompt helpers expose reasoning.basic, reasoning.normal, reasoning.high, reasoning.expert, reasoning.continue, reasoning.retry, and get-help.
- Resource endpoints expose internal docs plus live session lists, per-session JSON views, full markdown traces, and individual thought documents.
- Completions are wired for levels, session IDs, and thought names through completable() and resource-template completion hooks.
Requirements
- Node.js >=24 for local npx or npm usage.
- An MCP client that supports stdio transport.
- Optional: Docker if you want to build or run the container image defined by Dockerfile.
Quick Start
Use this standard MCP client configuration:
{
"mcpServers": {
"cortex-mcp": {
"command": "npx",
"args": ["-y", "@j0hanz/cortex-mcp@latest"]
}
}
}
Client Configuration
<details> <summary><b>Install in VS Code</b></summary>Add to .vscode/mcp.json:
{
"servers": {
"cortex-mcp": {
"command": "npx",
"args": ["-y", "@j0hanz/cortex-mcp@latest"]
}
}
}
Or install via CLI:
code --add-mcp '{"name":"cortex-mcp","command":"npx","args":["-y","@j0hanz/cortex-mcp@latest"]}'
For more info, see VS Code MCP docs.
</details> <details> <summary><b>Install in VS Code Insiders</b></summary>Add to .vscode/mcp.json:
{
"servers": {
"cortex-mcp": {
"command": "npx",
"args": ["-y", "@j0hanz/cortex-mcp@latest"]
}
}
}
Or install via CLI:
code-insiders --add-mcp '{"name":"cortex-mcp","command":"npx","args":["-y","@j0hanz/cortex-mcp@latest"]}'
For more info, see VS Code Insiders MCP docs.
</details> <details> <summary><b>Install in Cursor</b></summary>Add to ~/.cursor/mcp.json:
{
"mcpServers": {
"cortex-mcp": {
"command": "npx",
"args": ["-y", "@j0hanz/cortex-mcp@latest"]
}
}
}
For more info, see Cursor MCP docs.
</details> <details> <summary><b>Install in Visual Studio</b></summary>Add to mcp.json (VS integrated):
{
"mcpServers": {
"cortex-mcp": {
"command": "npx",
"args": ["-y", "@j0hanz/cortex-mcp@latest"]
}
}
}
For more info, see Visual Studio MCP docs.
</details> <details> <summary><b>Install in Goose</b></summary>Add to Goose extension registry:
{
"mcpServers": {
"cortex-mcp": {
"command": "npx",
"args": ["-y", "@j0hanz/cortex-mcp@latest"]
}
}
}
For more info, see Goose MCP docs.
</details> <details> <summary><b>Install in LM Studio</b></summary>Add to LM Studio MCP config:
{
"mcpServers": {
"cortex-mcp": {
"command": "npx",
"args": ["-y", "@j0hanz/cortex-mcp@latest"]
}
}
}
For more info, see LM Studio MCP docs.
</details> <details> <summary><b>Install in Claude Desktop</b></summary>Add to claude_desktop_config.json:
{
"mcpServers": {
"cortex-mcp": {
"command": "npx",
"args": ["-y", "@j0hanz/cortex-mcp@latest"]
}
}
}
For more info, see Claude Desktop MCP docs.
</details> <details> <summary><b>Install in Claude Code</b></summary>Add to Claude Code CLI:
{
"mcpServers": {
"cortex-mcp": {
"command": "npx",
"args": ["-y", "@j0hanz/cortex-mcp@latest"]
}
}
}
Or install via CLI:
claude mcp add cortex-mcp -- npx -y @j0hanz/cortex-mcp@latest
For more info, see Claude Code MCP docs.
</details> <details> <summary><b>Install in Windsurf</b></summary>Add to ~/.codeium/windsurf/mcp_config.json:
{
"mcpServers": {
"cortex-mcp": {
"command": "npx",
"args": ["-y", "@j0hanz/cortex-mcp@latest"]
}
}
}
For more info, see Windsurf MCP docs.
</details> <details> <summary><b>Install in Amp</b></summary>Add to Amp MCP config:
{
"mcpServers": {
"cortex-mcp": {
"command": "npx",
"args": ["-y", "@j0hanz/cortex-mcp@latest"]
}
}
}
Or install via CLI:
amp mcp add cortex-mcp -- npx -y @j0hanz/cortex-mcp@latest
For more info, see Amp MCP docs.
</details> <details> <summary><b>Install in Cline</b></summary>Add to cline_mcp_settings.json:
{
"mcpServers": {
"cortex-mcp": {
"command": "npx",
"args": ["-y", "@j0hanz/cortex-mcp@latest"]
}
}
}
For more info, see Cline MCP docs.
</details> <details> <summary><b>Install in Codex CLI</b></summary>Add to ~/.codex/config.yaml or codex CLI:
{
"mcpServers": {
"cortex-mcp": {
"command": "npx",
"args": ["-y", "@j0hanz/cortex-mcp@latest"]
}
}
}
For more info, see Codex CLI MCP docs.
</details> <details> <summary><b>Install in GitHub Copilot</b></summary>Add to .vscode/mcp.json:
{
"servers": {
"cortex-mcp": {
"command": "npx",
"args": ["-y", "@j0hanz/cortex-mcp@latest"]
}
}
}
For more info, see GitHub Copilot MCP docs.
</details> <details> <summary><b>Install in Warp</b></summary>Add to Warp MCP config:
{
"mcpServers": {
"cortex-mcp": {
"command": "npx",
"args": ["-y", "@j0hanz/cortex-mcp@latest"]
}
}
}
For more info, see Warp MCP docs.
</details> <details> <summary><b>Install in Kiro</b></summary>Add to .kiro/settings/mcp.json:
{
"mcpServers": {
"cortex-mcp": {
"command": "npx",
"args": ["-y", "@j0hanz/cortex-mcp@latest"]
}
}
}
For more info, see Kiro MCP docs.
</details> <details> <summary><b>Install in Gemini CLI</b></summary>Add to ~/.gemini/settings.json:
{
"mcpServers": {
"cortex-mcp": {
"command": "npx",
"args": ["-y", "@j0hanz/cortex-mcp@latest"]
}
}
}
For more info, see Gemini CLI MCP docs.
</details> <details> <summary><b>Install in Zed</b></summary>Add to ~/.config/zed/settings.json:
{
"context_servers": {
"cortex-mcp": {
"settings": {
"command": "npx",
"args": ["-y", "@j0hanz/cortex-mcp@latest"]
}
}
}
}
For more info, see Zed MCP docs.
</details> <details> <summary><b>Install in Augment</b></summary>Add to your VS Code settings.json under augment.advanced:
{
"augment.advanced": {
"mcpServers": [
{
"id": "cortex-mcp",
"command": "npx",
"args": ["-y", "@j0hanz/cortex-mcp@latest"]
}
]
}
}
For more info, see Augment MCP docs.
</details> <details> <summary><b>Install in Roo Code</b></summary>Add to Roo Code MCP settings:
{
"mcpServers": {
"cortex-mcp": {
"command": "npx",
"args": ["-y", "@j0hanz/cortex-mcp@latest"]
}
}
}
For more info, see Roo Code MCP docs.
</details> <details> <summary><b>Install in Kilo Code</b></summary>Add to Kilo Code MCP settings:
{
"mcpServers": {
"cortex-mcp": {
"command": "npx",
"args": ["-y", "@j0hanz/cortex-mcp@latest"]
}
}
}
For more info, see Kilo Code MCP docs.
</details>

Use Cases
Start bounded reasoning at the right depth
Use reasoning.basic, reasoning.normal, reasoning.high, or reasoning.expert when the client wants a prompt-first entrypoint, or call reasoning_think directly with query, level, and the first thought. Each response returns the current session state plus a summary string that tells the client how to continue.
Relevant tool: reasoning_think
Related prompts: reasoning.basic, reasoning.normal, reasoning.high, reasoning.expert
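As a sketch, a first step-mode call might send tools/call arguments shaped like the following (the query and thought text are illustrative, not part of the server):

```json
{
  "name": "reasoning_think",
  "arguments": {
    "query": "Why does the nightly build fail only on the ARM runner?",
    "level": "normal",
    "thought": "Step 1: compare the ARM and x86 runner images for toolchain differences."
  }
}
```

The response includes the session state with a sessionId, which subsequent calls reuse to extend the same trace.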
Continue, retry, or batch an active session
Reuse sessionId to continue a prior trace, switch to runMode="run_to_completion" when you already have the remaining thought inputs, or use the continuation and retry prompts to generate the next call payload. The handler also supports rollbackToStep and isConclusion for revising or ending a trace early.
Relevant tool: reasoning_think
Related prompts: reasoning.continue, reasoning.retry
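A batch continuation could look like this sketch (the session ID and thought texts are placeholders for illustration; the thought array must cover the remaining steps):

```json
{
  "name": "reasoning_think",
  "arguments": {
    "sessionId": "session-id-from-previous-response",
    "runMode": "run_to_completion",
    "thought": [
      "Step 2: rule out dependency version drift between runner images.",
      "Step 3: reproduce the failure locally under emulation.",
      "Step 4: conclude with the root cause and a proposed fix."
    ]
  }
}
```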
Inspect live traces without re-running the tool
Read reasoning://sessions for the active session list, reasoning://sessions/{sessionId} for the JSON detail view, reasoning://sessions/{sessionId}/trace for the markdown transcript, or reasoning://sessions/{sessionId}/thoughts/{thoughtName} for a single thought. This lets a client present progress or audit a session independently from the next tool call.
Relevant resources: reasoning://sessions, reasoning://sessions/{sessionId}, reasoning://sessions/{sessionId}/trace, reasoning://sessions/{sessionId}/thoughts/{thoughtName}
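For example, a client could poll the live session list with a standard resources/read request (request shape per the MCP specification; the result is the application/json session summary):

```json
{
  "method": "resources/read",
  "params": {
    "uri": "reasoning://sessions"
  }
}
```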
Architecture
[MCP Client]
|
| stdio
v
[src/index.ts]
createServer()
-> new StdioServerTransport()
-> server.connect(transport)
|
v
[src/server.ts]
McpServer("cortex-mcp")
capabilities:
- tools
- prompts
- completions
- logging
- resources { subscribe: true, listChanged: true }
|
+--> tools/call
| -> reasoning_think
| -> src/tools/reasoning-think.ts
| -> ReasoningThinkInputSchema / ReasoningThinkToolOutputSchema
| -> src/engine/reasoner.ts
| -> SessionStore
|
+--> prompts/get
| -> src/prompts/index.ts
|
+--> resources/read
| -> src/resources/index.ts
| -> internal://* and reasoning://sessions/*
|
+--> notifications
-> logging messages
-> resources/list_changed
-> resources/updated
-> notifications/progress
Request Lifecycle
[Client] -- initialize --> [Server]
[Server] -- serverInfo + capabilities --> [Client]
[Client] -- notifications/initialized --> [Server]
[Client] -- tools/call {name: "reasoning_think", arguments} --> [Handler]
[Handler] -- validate args --> [Reasoner + SessionStore]
[Reasoner] -- progress/resource events --> [Server notifications]
[Handler] -- structuredContent + optional trace resource --> [Client]
MCP Surface
Tools
reasoning_think
Stateful reasoning tool for creating and continuing multi-step sessions. It supports one-step interactive calls, run_to_completion batches, structured observation/hypothesis/evaluation input, rollback, and early conclusion while returning structured session state.
| Parameter | Type | Required | Description |
|---|---|---|---|
| query | string | no | Question or problem to analyze. |
| level | string | no | Depth level. Required for new sessions. basic (1–3 steps, 2K budget), normal (4–8 steps, 8K budget), high (10–15 steps, 32K budget), expert (20–25 steps, 128K budget). |
| targetThoughts | integer | no | Exact step count. Must fit the level range. |
| sessionId | string | no | Session ID to continue. |
| runMode | string | no | "step" (default) or "run_to_completion". |
| thought | any | no | Reasoning text. Stored verbatim. String for step mode, string[] for batch. |
| isConclusion | boolean | no | End the session early at the final answer. |
| rollbackToStep | integer | no | 0-based index to roll back to. Discards later thoughts. |
| stepSummary | string | no | One-sentence step summary. |
| observation | string | no | Known facts at this step. |
| hypothesis | string | no | Proposed next idea. |
| evaluation | string | no | Critique of the hypothesis. |
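The structured fields can accompany any step. A hypothetical step-mode call combining them might look like this (all values are illustrative):

```json
{
  "name": "reasoning_think",
  "arguments": {
    "sessionId": "session-id-from-previous-response",
    "thought": "Cross-check the hypothesis against the build logs.",
    "stepSummary": "Narrowed the failure to the linker stage.",
    "observation": "The failure only appears when lld is used.",
    "hypothesis": "The ARM image ships an older lld release.",
    "evaluation": "Consistent with the log timestamps; needs version confirmation."
  }
}
```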
1. [Client] -- tools/call {name: "reasoning_think", arguments} --> [Server]
Transport: stdio
2. [Server] -- dispatch("reasoning_think") --> [Handler: src/tools/reasoning-think.ts]
3. [Handler] -- validate(ReasoningThinkInputSchema) --> [src/engine/reasoner.ts]
4. [Reasoner] -- create/update session --> [src/engine/session-store.ts]
5. [Handler] -- structuredContent + optional embedded trace resource --> [Client]
Resources
| Resource | URI or Template | MIME Type | Description |
|---|---|---|---|
| server-instructions | internal://instructions | text/markdown | Usage instructions for the MCP server. |
| server-config | internal://server-config | application/json | Runtime limits and level configurations for the reasoning server. |
| tool-catalog | internal://tool-catalog | text/markdown | Tool reference: models, params, outputs, data flow. |
| tool-info | internal://tool-info/{toolName} | text/markdown | Per-tool contract details. |
| tool-info-reasoning_think | internal://tool-info/reasoning_think | text/markdown | Contract details for reasoning_think. |
| workflows | internal://workflows | text/markdown | Recommended workflows and tool sequences. |
| reasoning.sessions | reasoning://sessions | application/json | List of active reasoning sessions with summaries. Updated in real time as sessions progress. |
| reasoning.session | reasoning://sessions/{sessionId} | application/json | Detailed view of a single reasoning session, including all thoughts and metadata. |
| reasoning.trace | reasoning://sessions/{sessionId}/trace | text/markdown | Markdown trace of a reasoning session (full content). |
| reasoning.thought | reasoning://sessions/{sessionId}/thoughts/{thoughtName} | text/markdown | Markdown content of a single thought (for example, Thought-1). |
Prompts
| Prompt | Arguments | Description |
|---|---|---|
| get-help | none | Return server usage instructions. |
| reasoning.basic | query required, targetThoughts optional | Basic-depth reasoning (1-3 thoughts). |
| reasoning.normal | query required, targetThoughts optional | Normal-depth reasoning (4-8 thoughts). |
| reasoning.high | query required, targetThoughts optional | High-depth reasoning (10-15 thoughts). |
| reasoning.expert | query required, targetThoughts optional | Expert-depth reasoning (20-25 thoughts). |
| reasoning.continue | sessionId required, query optional, level optional | Continue an existing session. Optional follow-up query. |
| reasoning.retry | query required, level required, targetThoughts optional | Retry a failed reasoning task with modified parameters. |
MCP Capabilities
| Capability | Status | Evidence |
|---|---|---|
| tools | confirmed | src/server.ts:203-205, src/tools/reasoning-think.ts:479 |
| prompts | confirmed | src/server.ts:205, src/prompts/index.ts:201 |
| completions | confirmed | src/server.ts:207, src/prompts/index.ts:249, src/resources/index.ts:375 |
| logging | confirmed | src/server.ts:204, src/server.ts:98 |
| resources.subscribe | confirmed | src/server.ts:208, src/server.ts:121 |
| resources.listChanged | confirmed | src/server.ts:208, src/server.ts:114 |
| progress notifications | confirmed | src/lib/mcp.ts:71, src/tools/reasoning-think.ts:424 |
Tool Annotations
| Annotation | Value | Evidence |
|---|---|---|
| readOnlyHint | false | src/tools/reasoning-think.ts:500 |
| destructiveHint | false | src/tools/reasoning-think.ts:502 |
| openWorldHint | false | src/tools/reasoning-think.ts:503 |
| idempotentHint | false | src/tools/reasoning-think.ts:501 |
Structured Output
reasoning_think declares outputSchema and returns structuredContent, with an embedded trace resource when the trace is small enough. Evidence: src/tools/reasoning-think.ts:498, src/lib/mcp.ts:97-114.
Configuration
| Variable | Default | Required | Evidence |
|---|---|---|---|
| CORTEX_SESSION_TTL_MS | 1800000 (30 minutes) | no | src/engine/reasoner.ts:22, src/engine/session-store.ts:19 |
| CORTEX_MAX_SESSIONS | 100 | no | src/engine/reasoner.ts:23, src/engine/session-store.ts:20 |
| CORTEX_MAX_TOTAL_TOKENS | 2000000 | no | src/engine/reasoner.ts:24, src/engine/session-store.ts:21 |
| CORTEX_MAX_ACTIVE_REASONING_TASKS | 32 | no | src/engine/config.ts:41-44 |
| CORTEX_REDACT_TRACE_CONTENT | false | no | src/engine/config.ts:21 |
[!NOTE] The source does not define any HTTP host/port configuration. The only other environment-related signals are NODE_ENV=production in the Docker image and --env-file=.env in the local dev:run script.
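Since the server reads these variables from its own process environment, they can be set in the client configuration that launches it. A sketch, assuming your MCP client supports an env block (most stdio clients do):

```json
{
  "mcpServers": {
    "cortex-mcp": {
      "command": "npx",
      "args": ["-y", "@j0hanz/cortex-mcp@latest"],
      "env": {
        "CORTEX_SESSION_TTL_MS": "3600000",
        "CORTEX_MAX_SESSIONS": "200"
      }
    }
  }
}
```

This example raises the session TTL to 60 minutes and the session cap to 200.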
Security
| Control | Status | Evidence |
|---|---|---|
| input validation | confirmed | src/schemas/inputs.ts:13, src/schemas/outputs.ts:46, src/prompts/index.ts:207 |
| stdout-safe logging fallback | confirmed | src/server.ts:98, src/server.ts:145 |
| main-thread-only runtime | confirmed | src/index.ts:15-25 |
| non-root container user | confirmed | Dockerfile:37 |
[!NOTE] No auth, OAuth, HTTP origin checks, or rate-limiting controls are implemented in the current source because the server only exposes stdio transport.
Development
| Script | Command | Purpose |
|---|---|---|
| dev | tsc --watch --preserveWatchOutput | Watch and compile source during development. |
| dev:run | node --env-file=.env --watch dist/index.js | Run the built server in watch mode with an optional local .env file. |
| build | node scripts/tasks.mjs build | Clean dist, compile TypeScript, copy assets, and make the entrypoint executable. |
| lint | eslint . | Run ESLint across the repository. |
| type-check | node scripts/tasks.mjs type-check | Run source and test TypeScript checks concurrently. |
| test | node scripts/tasks.mjs test | Run the TypeScript test suites with the configured loader. |
| test:dist | node scripts/tasks.mjs test:dist | Rebuild first, then run tests against the built output. |
| test:fast | node --test --import tsx/esm src/__tests__/**/*.test.ts node-tests/**/*.test.ts | Run the fast direct test command without the task wrapper. |
| format | prettier --write . | Format the repository. |
| inspector | npm run build && npx -y @modelcontextprotocol/inspector node dist/index.js | Build the server and open it in the MCP Inspector. |
| prepublishOnly | npm run lint && npm run type-check && npm run build | Enforce release checks before publishing. |
Additional helper scripts for diagnostics, coverage, asset copying, and knip are defined in package.json.
Build and Release
- .github/workflows/release.yml bumps package.json and server.json, then runs npm run lint, npm run type-check, npm run test, and npm run build before tagging and creating a GitHub release.
- The same workflow publishes the package to npm with Trusted Publishing, publishes to the MCP Registry with mcp-publisher, and pushes a multi-arch Docker image to ghcr.io.
- Dockerfile uses a multi-stage Node 24 Alpine build, prunes dev dependencies, and runs the released container as the mcp user.
Troubleshooting
- Sessions are in memory and expire after 30 minutes by default. If you receive E_SESSION_NOT_FOUND, start a new session or increase CORTEX_SESSION_TTL_MS.
- runMode="run_to_completion" requires enough thought entries to cover the remaining steps. If you want the server to return after each step, keep the default step mode.
- For stdio transport, do not add custom stdout logging around the server process. This server routes logs through MCP logging and falls back to stderr on failures.
Credits
| Dependency | Registry |
|---|---|
| @modelcontextprotocol/sdk | npm |
| zod | npm |
Contributing and License
- License: MIT
- Contributions are welcome via pull requests.