io.github.HeshamFS/mcp-tool-factory

Platforms & Services

by heshamfs

Automatically generates MCP servers from natural language, OpenAPI specs, or database schemas, accelerating tool and service development.


README

MCP Tool Factory (TypeScript)

Generate production-ready MCP (Model Context Protocol) servers from natural language descriptions, OpenAPI specs, database schemas, GraphQL schemas, or ontologies.


Why MCP?

The Model Context Protocol (MCP) is an open standard that enables AI assistants to securely connect with external data sources and tools. MCP servers expose tools that can be used by:

  • Claude Code and Claude Desktop
  • OpenAI Agents SDK
  • Google ADK (Agent Development Kit)
  • LangChain and CrewAI
  • Any MCP-compatible client

MCP Tool Factory lets you generate complete, production-ready MCP servers in seconds.

Features

  • Natural Language - Describe your tools in plain English
  • OpenAPI Import - Convert any REST API spec to MCP tools
  • Database CRUD - Generate tools from SQLite or PostgreSQL schemas
  • GraphQL Import - Convert GraphQL schemas to MCP tools (queries to reads, mutations to writes)
  • Ontology Import - Generate from RDF/OWL, JSON-LD, or YAML ontologies
  • Resources & Prompts - Full support for all three MCP primitives: Tools, Resources, and Prompts
  • 10 LLM Providers - Anthropic, OpenAI, Google, Mistral, DeepSeek, Groq, xAI, Azure, Cohere, plus Claude Code, via the Vercel AI SDK
  • Cost Tracking - Per-call cost calculation, budget limits, provider cost comparison
  • Parallel Generation - Tool implementations generated concurrently for faster output
  • LLM Response Caching - Deduplicates identical LLM calls with configurable TTL
  • Streamable HTTP - Generated servers use the modern Streamable HTTP transport
  • Web Search - Auto-fetch API documentation for better generation
  • Production Ready - Logging, metrics, rate limiting, and retries built in
  • Type Safe - Full TypeScript with strict mode
  • MCP Registry - Generates server.json for registry publishing
  • Is an MCP Server - Use it directly with Claude to generate servers on the fly

Use as MCP Server

MCP Tool Factory is itself an MCP server! Add it to Claude Desktop, Claude Code, Cursor, or VS Code to generate MCP servers through conversation.

Tier 1 — Zero Config (Claude Code)

Claude Code auto-injects CLAUDE_CODE_OAUTH_TOKEN — no env vars needed:

bash
claude mcp add mcp-tool-factory -- node /path/to/mcp-tool-factory-ts/bin/mcp-server.js

Tier 2 — Standard (Pick a Provider)

Set one API key and go. The factory auto-detects the provider:

Claude Desktop / Cursor / VS Code — add to your MCP config (claude_desktop_config.json, .cursor/mcp.json, or .vscode/mcp.json):

json
{
  "mcpServers": {
    "mcp-tool-factory": {
      "command": "node",
      "args": ["/path/to/mcp-tool-factory-ts/bin/mcp-server.js"],
      "env": {
        "ANTHROPIC_API_KEY": "your-key-here"
      }
    }
  }
}

Any of these API keys will work: ANTHROPIC_API_KEY, OPENAI_API_KEY, GOOGLE_API_KEY, MISTRAL_API_KEY, DEEPSEEK_API_KEY, GROQ_API_KEY, XAI_API_KEY, AZURE_OPENAI_API_KEY, COHERE_API_KEY.

Tier 3 — Full Control (Provider + Model + Budget)

Use MCP_FACTORY_PROVIDER, MCP_FACTORY_MODEL, and MCP_FACTORY_BUDGET to override auto-detection:

json
{
  "mcpServers": {
    "mcp-tool-factory": {
      "command": "node",
      "args": ["/path/to/mcp-tool-factory-ts/bin/mcp-server.js"],
      "env": {
        "OPENAI_API_KEY": "your-key-here",
        "MCP_FACTORY_PROVIDER": "openai",
        "MCP_FACTORY_MODEL": "gpt-5.2",
        "MCP_FACTORY_BUDGET": "0.50"
      }
    }
  }
}

  • MCP_FACTORY_PROVIDER - Override the auto-detected provider (e.g. openai, groq, deepseek)
  • MCP_FACTORY_MODEL - Override the default model (e.g. gpt-5.2, deepseek-chat)
  • MCP_FACTORY_BUDGET - Per-generation budget limit in USD (e.g. 0.50)

Claude Code CLI with full control:

bash
claude mcp add mcp-tool-factory \
  -e DEEPSEEK_API_KEY=your-key \
  -e MCP_FACTORY_PROVIDER=deepseek \
  -e MCP_FACTORY_MODEL=deepseek-chat \
  -e MCP_FACTORY_BUDGET=0.25 \
  -- node /path/to/mcp-tool-factory-ts/bin/mcp-server.js

Available Tools

  • generate_mcp_server - Generate from a natural language description
  • generate_from_openapi - Generate from an OpenAPI specification
  • generate_from_database - Generate from a database schema
  • generate_from_graphql - Generate from a GraphQL schema
  • generate_from_ontology - Generate from an RDF/OWL, JSON-LD, or YAML ontology
  • validate_typescript - Validate TypeScript code
  • list_providers - List available LLM providers
  • get_factory_info - Get factory capabilities

Example Conversation

You: Create an MCP server for the GitHub API with tools to list repos, create issues, and manage pull requests

Claude: Uses generate_mcp_server tool

I've generated a complete MCP server with the following tools:

  • list_repositories - List user repositories
  • create_issue - Create a new issue
  • list_pull_requests - List PRs for a repo
  • merge_pull_request - Merge a PR

Let me write these files to your project...

Quick Start

Installation

bash
# Global installation
npm install -g @heshamfsalama/mcp-tool-factory

# Or use npx
npx @heshamfsalama/mcp-tool-factory generate "Create tools for managing a todo list"

Set Your API Key

At least one provider API key is required:

bash
# Anthropic Claude (recommended)
export ANTHROPIC_API_KEY=your-key-here

# Or Claude Code OAuth
export CLAUDE_CODE_OAUTH_TOKEN=your-token-here

# Or any other supported provider
export OPENAI_API_KEY=your-key-here
export GOOGLE_API_KEY=your-key-here
export MISTRAL_API_KEY=your-key-here
export DEEPSEEK_API_KEY=your-key-here
export GROQ_API_KEY=your-key-here
export XAI_API_KEY=your-key-here
export AZURE_OPENAI_API_KEY=your-key-here
export COHERE_API_KEY=your-key-here

Generate Your First Server

bash
# From natural language
mcp-factory generate "Create tools for fetching weather data by city and converting temperatures"

# From OpenAPI spec
mcp-factory from-openapi ./api-spec.yaml

# From database
mcp-factory from-database ./data.db

# From GraphQL schema
mcp-factory from-graphql ./schema.graphql

# From ontology
mcp-factory from-ontology ./ontology.owl --format rdf

Usage

Natural Language Generation

bash
mcp-factory generate "Create tools for managing a todo list with priorities" \
  --name todo-server \
  --output ./servers/todo \
  --web-search \
  --logging \
  --metrics

OpenAPI Specification

bash
# From local file
mcp-factory from-openapi ./openapi.yaml --name my-api-server

# With custom base URL
mcp-factory from-openapi ./spec.json --base-url https://api.example.com

Database Schema

bash
# SQLite
mcp-factory from-database ./myapp.db --tables users,posts,comments

# PostgreSQL
mcp-factory from-database "postgresql://user:pass@localhost/mydb" --type postgresql

GraphQL Schema

bash
# From a GraphQL SDL file
mcp-factory from-graphql ./schema.graphql --name my-graphql-server

# From a URL endpoint
mcp-factory from-graphql https://api.example.com/graphql --name my-api-server

GraphQL queries are mapped to read-only MCP tools, and mutations are mapped to write tools. GraphQL types are automatically converted to Zod validation schemas.
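The query/mutation split can be sketched as follows; mapOperationToTool and the snake_case naming rule are hypothetical illustrations of the idea, not the factory's actual implementation:

```typescript
// Hypothetical sketch: classify a GraphQL operation as a read or write tool.
type GeneratedTool = { name: string; readOnly: boolean; description: string };

function mapOperationToTool(
  opType: 'query' | 'mutation',
  fieldName: string,
): GeneratedTool {
  return {
    // e.g. "createIssue" -> "create_issue" (illustrative naming convention)
    name: fieldName.replace(/([A-Z])/g, '_$1').toLowerCase(),
    // Queries become read-only tools; mutations become write tools.
    readOnly: opType === 'query',
    description: `${opType === 'query' ? 'Read' : 'Write'} tool from GraphQL ${opType} "${fieldName}"`,
  };
}
```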

Ontology

bash
# From RDF/OWL (.owl, .rdf, .ttl)
mcp-factory from-ontology ./ontology.owl --format rdf --name knowledge-server

# From JSON-LD (.jsonld)
mcp-factory from-ontology ./schema.jsonld --format jsonld --name linked-data-server

# From custom YAML ontology
mcp-factory from-ontology ./domain.yaml --format yaml --name domain-server

OWL Classes are mapped to MCP Resources, ObjectProperties become Tools, and DataProperties become tool parameters.
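That mapping can be sketched as a simple classifier; the OwlElement shape and prefixed names below are illustrative, not the factory's real data model:

```typescript
// Hypothetical sketch of the OWL-to-MCP mapping described above.
type OwlElement =
  | { kind: 'Class'; name: string }
  | { kind: 'ObjectProperty'; name: string }
  | { kind: 'DataProperty'; name: string };

function mapOwlElement(el: OwlElement): string {
  switch (el.kind) {
    case 'Class':
      return `resource:${el.name}`; // OWL Class -> MCP Resource
    case 'ObjectProperty':
      return `tool:${el.name}`; // ObjectProperty -> MCP Tool
    case 'DataProperty':
      return `param:${el.name}`; // DataProperty -> tool parameter
  }
}
```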

Test & Serve

bash
# Run tests
mcp-factory test ./servers/my-server

# Start server for testing
mcp-factory serve ./servers/my-server

Generated Server Structure

code
servers/my-server/
├── src/
│   └── index.ts          # MCP server with tools, resources, and prompts
├── tests/
│   └── tools.test.ts     # Vitest tests (InMemoryTransport)
├── package.json          # Dependencies
├── tsconfig.json         # TypeScript config
├── Dockerfile            # Container deployment
├── README.md             # Usage documentation
├── skill.md              # Claude Code skill file
├── server.json           # MCP Registry manifest
├── EXECUTION_LOG.md      # Generation trace (optional)
└── .github/
    └── workflows/
        └── ci.yml        # GitHub Actions CI/CD

Generated servers export a createServer() factory function for easy testing. The server uses Streamable HTTP transport with a single /mcp POST endpoint and a /health GET endpoint. Tests use InMemoryTransport.createLinkedPair() for fast, reliable in-process testing with vitest.
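The linked-pair idea behind that testing setup can be sketched minimally; LinkedEndpoint below is a toy stand-in for the SDK's InMemoryTransport, showing why in-process transports need no subprocess or network:

```typescript
// Toy sketch of a linked transport pair: each endpoint's send() delivers
// directly to the peer's onmessage handler, entirely in-process.
type Message = { id: number; method: string; result?: unknown };

class LinkedEndpoint {
  peer?: LinkedEndpoint;
  onmessage?: (msg: Message) => void;

  send(msg: Message): void {
    // Deliver asynchronously to mimic real transport scheduling.
    queueMicrotask(() => this.peer?.onmessage?.(msg));
  }

  static createLinkedPair(): [LinkedEndpoint, LinkedEndpoint] {
    const a = new LinkedEndpoint();
    const b = new LinkedEndpoint();
    a.peer = b;
    b.peer = a;
    return [a, b];
  }
}
```

In a generated test, the client side of such a pair is connected to the server returned by createServer(), so tool calls round-trip in memory.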

CLI Reference

  • generate <description> - Generate an MCP server from natural language
  • from-openapi <spec> - Generate from an OpenAPI specification
  • from-database <path> - Generate from a database schema
  • from-graphql <schema> - Generate from a GraphQL schema
  • from-ontology <file> - Generate from an RDF/OWL, JSON-LD, or YAML ontology
  • test <server-path> - Run tests for a generated server
  • serve <server-path> - Start a server for testing
  • info - Display factory information

Generate Options

bash
mcp-factory generate "..." \
  --output, -o <path>           # Output directory (default: ./servers)
  --name, -n <name>             # Server name
  --description, -d <desc>      # Package description
  --github-username, -g <user>  # GitHub username for MCP Registry
  --version, -v <ver>           # Server version (default: 1.0.0)
  --provider, -p <provider>     # LLM provider (anthropic, openai, google, mistral, deepseek, groq, xai, azure, cohere, claude_code)
  --model, -m <model>           # Specific model to use
  --web-search, -w              # Search web for API documentation
  --auth <vars...>              # Environment variables for auth
  --health-check                # Include health check endpoint (default: true)
  --logging                     # Enable structured logging (default: true)
  --metrics                     # Enable Prometheus metrics
  --rate-limit <n>              # Rate limiting (requests per minute)
  --retries                     # Enable retry logic (default: true)
  --budget <amount>             # Maximum spend in USD (aborts if exceeded)
  --compare-costs               # Show cost comparison across providers before generating

Configuration

Environment Variables

  • ANTHROPIC_API_KEY - Anthropic Claude API key
  • CLAUDE_CODE_OAUTH_TOKEN - Claude Code OAuth token
  • OPENAI_API_KEY - OpenAI API key
  • GOOGLE_API_KEY - Google Gemini API key
  • MISTRAL_API_KEY - Mistral AI API key
  • DEEPSEEK_API_KEY - DeepSeek API key
  • GROQ_API_KEY - Groq API key
  • XAI_API_KEY - xAI Grok API key
  • AZURE_OPENAI_API_KEY - Azure OpenAI API key
  • COHERE_API_KEY - Cohere API key

At least one provider key is required for generation.

LLM Providers

All providers use the Vercel AI SDK via a unified UnifiedLLMProvider class with lazy dynamic imports — only the @ai-sdk/* package for your chosen provider is loaded at runtime.
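The lazy-import pattern can be sketched like this; PROVIDER_PACKAGES and loadProviderModule are hypothetical names illustrating the technique, not the UnifiedLLMProvider internals:

```typescript
// Hypothetical sketch: compute the module path from the chosen provider,
// so only that @ai-sdk/* package is ever loaded at runtime.
const PROVIDER_PACKAGES: Record<string, string> = {
  anthropic: '@ai-sdk/anthropic',
  openai: '@ai-sdk/openai',
  google: '@ai-sdk/google',
};

async function loadProviderModule(provider: string): Promise<unknown> {
  const pkg = PROVIDER_PACKAGES[provider];
  if (!pkg) throw new Error(`Unknown provider: ${provider}`);
  // Dynamic import: other provider packages never enter the module graph.
  return import(pkg);
}
```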

  • Anthropic - claude-opus-4-6, claude-sonnet-4-5, claude-haiku-4-5 (highest quality)
  • OpenAI - gpt-5.2, gpt-5.2-codex, o3, o4-mini (fast generation)
  • Google - gemini-3-pro, gemini-3-flash, gemini-2.5-pro (cost effective)
  • Mistral - mistral-large, codestral, magistral (European AI, code)
  • DeepSeek - deepseek-chat, deepseek-reasoner (ultra low cost)
  • Groq - llama-3.3-70b, llama-4-maverick (ultra-fast inference)
  • xAI - grok-4, grok-3, grok-code-fast (reasoning)
  • Azure - gpt-4o, Azure-hosted (enterprise compliance)
  • Cohere - command-a, command-r+ (RAG, enterprise search)
  • Claude Code - claude-sonnet-4-5 via OAuth (for Claude Code users)

Programmatic Usage

Basic Usage

typescript
import { ToolFactoryAgent, writeServerToDirectory, formatCost } from '@heshamfsalama/mcp-tool-factory';

// Create agent (auto-detects provider from env vars)
const agent = new ToolFactoryAgent();

// Generate from description
const server = await agent.generateFromDescription(
  'Create tools for managing a todo list with priorities',
  {
    serverName: 'todo-server',
    webSearch: true,
    parallel: true,           // Enable parallel generation (default)
    maxConcurrency: 5,        // Max concurrent LLM calls (default)
    budget: 1.00,             // Optional: abort if cost exceeds $1.00
    productionConfig: {
      enableLogging: true,
      enableMetrics: true,
    },
  }
);

// Cost tracking — see how much the generation cost
if (server.executionLog) {
  console.log(`Cost: ${formatCost(server.executionLog.totalCost)}`);
}

// Write to directory
await writeServerToDirectory(server, './servers/todo');

From OpenAPI

typescript
import { ToolFactoryAgent, writeServerToDirectory } from '@heshamfsalama/mcp-tool-factory';
import { readFileSync } from 'fs';
import yaml from 'js-yaml';

const spec = yaml.load(readFileSync('./openapi.yaml', 'utf-8'));
const agent = new ToolFactoryAgent({ requireLlm: false });

const server = await agent.generateFromOpenAPI(spec, {
  serverName: 'my-api-server',
  baseUrl: 'https://api.example.com',
});

await writeServerToDirectory(server, './servers/api');

From Database

typescript
import { ToolFactoryAgent, writeServerToDirectory } from '@heshamfsalama/mcp-tool-factory';

const agent = new ToolFactoryAgent({ requireLlm: false });

// SQLite (auto-detected from file path)
const server = await agent.generateFromDatabase('./data/app.db', {
  serverName: 'app-database-server',
  tables: ['users', 'posts', 'comments'],
});

// PostgreSQL (auto-detected from connection string)
const pgServer = await agent.generateFromDatabase(
  'postgresql://user:pass@localhost/mydb',
  { serverName: 'postgres-server' }
);

await writeServerToDirectory(server, './servers/app-db');

From GraphQL

typescript
import { ToolFactoryAgent, writeServerToDirectory } from '@heshamfsalama/mcp-tool-factory';
import { readFileSync } from 'fs';

const schema = readFileSync('./schema.graphql', 'utf-8');
const agent = new ToolFactoryAgent({ requireLlm: false });

const server = await agent.generateFromGraphQL(schema, {
  serverName: 'my-graphql-server',
});

await writeServerToDirectory(server, './servers/graphql');

From Ontology

typescript
import { ToolFactoryAgent, writeServerToDirectory } from '@heshamfsalama/mcp-tool-factory';
import { readFileSync } from 'fs';

const ontologyData = readFileSync('./ontology.owl', 'utf-8');
const agent = new ToolFactoryAgent({ requireLlm: false });

const server = await agent.generateFromOntology(ontologyData, {
  serverName: 'knowledge-server',
  format: 'rdf',
});

await writeServerToDirectory(server, './servers/knowledge');

Code Validation

typescript
import { validateTypeScriptCode, validateGeneratedServer } from '@heshamfsalama/mcp-tool-factory';

// Validate TypeScript syntax
const result = await validateTypeScriptCode(code);
// { valid: false, errors: [{ line: 4, column: 1, message: "'}' expected." }] }

// Validate complete server
const serverResult = await validateGeneratedServer(serverCode);
// { valid: true, errors: [], summary: 'Generated server code is syntactically valid' }

Use with AI Frameworks

Claude Code / Claude Desktop

Add to your MCP settings (claude_desktop_config.json):

json
{
  "mcpServers": {
    "my-server": {
      "command": "npx",
      "args": ["tsx", "./servers/my-server/src/index.ts"]
    }
  }
}

OpenAI Agents SDK

python
from agents import Agent
from agents.mcp import MCPServerStdio

async with MCPServerStdio(
    command="npx",
    args=["tsx", "./servers/my-server/src/index.ts"]
) as mcp:
    agent = Agent(
        name="My Agent",
        tools=mcp.list_tools()
    )

Google ADK

python
from google.adk.tools.mcp_tool import MCPToolset

tools = MCPToolset(
    connection_params=StdioServerParameters(
        command="npx",
        args=["tsx", "./servers/my-server/src/index.ts"]
    )
)

LangChain

python
from langchain_mcp_adapters.client import MCPClient

client = MCPClient(
    command="npx",
    args=["tsx", "./servers/my-server/src/index.ts"]
)
tools = client.get_tools()

Production Features

Structured Logging

bash
mcp-factory generate "..." --logging

Generates servers with pino structured JSON logging:

typescript
const logger = pino({ level: 'info' });
logger.info({ tool: 'get_weather', params }, 'Tool called');

Prometheus Metrics

bash
mcp-factory generate "..." --metrics

Generates servers with prom-client metrics:

  • mcp_tool_calls_total - Counter of tool invocations
  • mcp_tool_duration_seconds - Histogram of execution times

Rate Limiting

bash
mcp-factory generate "..." --rate-limit 100

Configurable rate limiting per client with sliding window.
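A sliding-window limiter of this kind can be sketched as follows; SlidingWindowLimiter is an illustrative implementation of the technique, not the generated servers' actual code:

```typescript
// Sliding-window rate limiter sketch: keep per-client call timestamps and
// allow a call only if fewer than `limit` fall inside the trailing window.
class SlidingWindowLimiter {
  private calls = new Map<string, number[]>();

  constructor(private limit: number, private windowMs = 60_000) {}

  allow(clientId: string, now = Date.now()): boolean {
    // Drop timestamps that have slid out of the window.
    const recent = (this.calls.get(clientId) ?? []).filter(
      t => now - t < this.windowMs,
    );
    if (recent.length >= this.limit) {
      this.calls.set(clientId, recent);
      return false; // over the limit for this window
    }
    recent.push(now);
    this.calls.set(clientId, recent);
    return true;
  }
}
```

Unlike a fixed-window counter, the trailing window prevents a burst at a window boundary from doubling the effective rate.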

Retry Logic

bash
mcp-factory generate "..." --retries

Exponential backoff retry for transient failures.
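A minimal sketch of that pattern, assuming a doubling delay per attempt (the factory's actual parameters may differ):

```typescript
// Retry an async operation with exponential backoff: base, 2x, 4x, ...
async function withRetries<T>(
  fn: () => Promise<T>,
  maxAttempts = 3,
  baseDelayMs = 250,
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err; // remember the failure; rethrown if all attempts fail
      if (attempt < maxAttempts - 1) {
        await new Promise(resolve =>
          setTimeout(resolve, baseDelayMs * 2 ** attempt),
        );
      }
    }
  }
  throw lastError;
}
```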

Structured Error Codes

Generated servers use structured error codes for consistent error handling:

  • INVALID_INPUT - Malformed or invalid tool parameters
  • NOT_FOUND - Requested resource does not exist
  • AUTH_ERROR - Authentication or authorization failure
  • INTERNAL_ERROR - Unexpected server error
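One way to carry these codes through a tool handler can be sketched as follows; ToolError and toResult() are hypothetical names illustrating the idea, not the generated servers' API:

```typescript
// Hypothetical structured error type built on the codes listed above.
const ERROR_CODES = ['INVALID_INPUT', 'NOT_FOUND', 'AUTH_ERROR', 'INTERNAL_ERROR'] as const;
type ErrorCode = (typeof ERROR_CODES)[number];

class ToolError extends Error {
  constructor(public code: ErrorCode, message: string) {
    super(message);
  }

  // Shape the error as an MCP-style tool result instead of an unstructured throw,
  // so clients can branch on the code.
  toResult() {
    return {
      isError: true,
      content: [{ type: 'text', text: `${this.code}: ${this.message}` }],
    };
  }
}
```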

Enhanced Health Check

The /health endpoint returns detailed server status:

json
{
  "status": "ok",
  "version": "1.0.0",
  "uptime": 3600,
  "memory": { "rss": 52428800, "heapUsed": 20971520 },
  "transport": "streamable-http"
}
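A handler producing a response of that shape can be sketched with Node's built-in http module; the field values here are illustrative, not the generated servers' exact code:

```typescript
// Minimal sketch of a GET /health handler returning the status shape above.
import { createServer } from 'node:http';

const startedAt = Date.now();

const server = createServer((req, res) => {
  if (req.method === 'GET' && req.url === '/health') {
    const { rss, heapUsed } = process.memoryUsage();
    res.writeHead(200, { 'content-type': 'application/json' });
    res.end(
      JSON.stringify({
        status: 'ok',
        version: '1.0.0',
        uptime: Math.floor((Date.now() - startedAt) / 1000), // seconds
        memory: { rss, heapUsed },
        transport: 'streamable-http',
      }),
    );
    return;
  }
  res.writeHead(404);
  res.end();
});
```

Calling server.listen(port) alongside the /mcp handler exposes both endpoints on the same port.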

MCP Registry Publishing

Publish your generated servers to the MCP Registry for discoverability.

Generate with Registry Support

bash
mcp-factory generate "Create weather tools" \
  --name weather-server \
  --github-username your-github-username \
  --description "Weather tools for Claude" \
  --version 1.0.0

This generates registry-compliant files:

package.json:

json
{
  "name": "@your-github-username/weather-server",
  "mcpName": "io.github.your-github-username/weather-server"
}

server.json:

json
{
  "$schema": "https://static.modelcontextprotocol.io/schemas/2025-12-11/server.schema.json",
  "name": "io.github.your-github-username/weather-server",
  "packages": [{
    "registryType": "npm",
    "identifier": "@your-github-username/weather-server",
    "transport": { "type": "stdio" }
  }],
  "tools": [...]
}

Publish Workflow

bash
# 1. Build and publish to npm
cd ./servers/weather-server
npm install && npm run build
npm publish --access public

# 2. Install mcp-publisher
brew install modelcontextprotocol/tap/mcp-publisher

# 3. Authenticate
mcp-publisher login github

# 4. Publish to registry
mcp-publisher publish

See Publishing Guide for detailed instructions.

Architecture

code
┌───────────────────────────────────────────────────────────────────────┐
│                         MCP Tool Factory                               │
├───────────────────────────────────────────────────────────────────────┤
│  Input Sources                                                         │
│  ┌───────────┐ ┌───────────┐ ┌───────────┐ ┌───────────┐ ┌─────────┐│
│  │ Natural   │ │  OpenAPI  │ │ Database  │ │ GraphQL   │ │Ontology ││
│  │ Language  │ │   Spec    │ │  Schema   │ │  Schema   │ │RDF/YAML ││
│  └─────┬─────┘ └─────┬─────┘ └─────┬─────┘ └─────┬─────┘ └────┬────┘│
│        └──────────┬───┴─────────────┴─────────────┴────────────┘     │
│                   ▼                                                    │
│  ┌────────────────────────────────────────────────────────────────┐   │
│  │                     ToolFactoryAgent                            │   │
│  │  ┌─────────────────────────────────────────────────────────┐   │   │
│  │  │  UnifiedLLMProvider (Vercel AI SDK)                      │   │   │
│  │  │  Anthropic │ OpenAI │ Google │ Mistral │ DeepSeek       │   │   │
│  │  │  Groq │ xAI │ Azure │ Cohere + Claude Code OAuth       │   │   │
│  │  └─────────────────────────────────────────────────────────┘   │   │
│  │  ┌──────────────┐ ┌──────────────┐ ┌───────────────────────┐  │   │
│  │  │  LLM Cache   │ │  Cost        │ │ Parallel Generation   │  │   │
│  │  │  (TTL-based) │ │  Tracking    │ │ (max concurrency: 5)  │  │   │
│  │  └──────────────┘ └──────────────┘ └───────────────────────┘  │   │
│  └────────────────────────────────────────────────────────────────┘   │
│                   │                                                    │
│                   ▼                                                    │
│  ┌────────────────────────────────────────────────────────────────┐   │
│  │                       Generators                                │   │
│  │  ServerGenerator  │  DocsGenerator  │  TestsGenerator          │   │
│  └────────────────────────────────────────────────────────────────┘   │
│                   │                                                    │
│                   ▼                                                    │
│  ┌────────────────────────────────────────────────────────────────┐   │
│  │                     GeneratedServer                             │   │
│  │  Tools │ Resources │ Prompts │ Tests │ Docs │ Dockerfile       │   │
│  └────────────────────────────────────────────────────────────────┘   │
│                   │                                                    │
│                   ▼                                                    │
│  ┌────────────────────────────────────────────────────────────────┐   │
│  │                Streamable HTTP Transport                        │   │
│  │          POST /mcp  │  GET /health                             │   │
│  └────────────────────────────────────────────────────────────────┘   │
└───────────────────────────────────────────────────────────────────────┘

Development

bash
# Clone the repository
git clone https://github.com/HeshamFS/mcp-tool-factory-ts.git
cd mcp-tool-factory-ts

# Install dependencies
pnpm install

# Build
pnpm run build

# Run tests
pnpm test

# Type check
pnpm run typecheck

# Lint
pnpm run lint

Project Structure

code
mcp-tool-factory-ts/
├── src/
│   ├── agent/              # Main ToolFactoryAgent
│   ├── auth/               # OAuth2 providers
│   ├── cache/              # LLM response caching with configurable TTL
│   ├── cli/                # Command-line interface
│   ├── config/             # Configuration management
│   ├── database/           # Database introspection (SQLite, PostgreSQL)
│   ├── execution-logger/   # Execution logging
│   ├── generators/         # Code generators (server, docs, tests)
│   ├── graphql/            # GraphQL SDL parsing and server generation
│   ├── middleware/         # Validation middleware
│   ├── models/             # Data models
│   ├── observability/      # Telemetry and tracing
│   ├── ontology/           # Ontology parsing (RDF/OWL, JSON-LD, YAML)
│   ├── openapi/            # OpenAPI spec parsing
│   ├── production/         # Production code generation
│   ├── prompts/            # LLM prompt templates
│   ├── providers/          # LLM providers (10 providers via Vercel AI SDK + Claude Code)
│   ├── security/           # Security scanning
│   ├── server/             # MCP server mode (factory-as-a-server)
│   ├── templates/          # Handlebars templates for generated files
│   ├── validation/         # Code validation and Zod schemas
│   └── web-search/         # Web search integration
├── docs/                   # Documentation
├── tests/                  # Test files
└── dist/                   # Built output

Documentation

Troubleshooting

Common Issues

API Key Not Found

bash
# Check your environment
echo $ANTHROPIC_API_KEY

# Set it
export ANTHROPIC_API_KEY=your-key-here

Generated Server Won't Start

bash
# Install dependencies first
cd ./servers/my-server
npm install
npx tsx src/index.ts

TypeScript Errors

typescript
// Validate generated code
import { validateGeneratedServer } from '@heshamfsalama/mcp-tool-factory';
const result = await validateGeneratedServer(code);
console.log(result.errors);

See Troubleshooting Guide for more solutions.

Changelog

v0.3.0

  • Vercel AI SDK Migration - All LLM providers now use the Vercel AI SDK via a single UnifiedLLMProvider class with lazy dynamic imports. Removed ~473 LOC of provider-specific implementations. Only the @ai-sdk/* package for your chosen provider is loaded at runtime.
  • 10 LLM Providers - Added Mistral, DeepSeek, Groq, xAI, Azure, and Cohere alongside existing Anthropic, OpenAI, Google, and Claude Code providers. All use the same unified interface.
  • Cost Tracking - Every LLM call now calculates estimated cost using a built-in pricing table for 50+ models. Shows per-call cost, total generation cost, and per-phase breakdown (tool extraction, implementation, tests, docs). Detailed token breakdowns include cache read/write tokens and reasoning tokens from the AI SDK.
  • Budget Limits (--budget <amount>) - Set a maximum spend in USD. Generation aborts gracefully with BudgetExceededError if cumulative cost exceeds the budget.
  • Provider Cost Comparison (--compare-costs) - Before generation, estimates cost across all available providers and shows a sorted comparison table. No extra API calls needed — uses the static pricing table.
  • Per-Phase Cost Breakdown - CLI output and execution logs show which generation steps cost the most (tool extraction, implementation, resource extraction, prompt extraction, test generation, docs generation).
  • OpenAI Reasoning Model Support - Temperature parameter is automatically omitted for OpenAI o-series and gpt-5.x models that don't support it.

v0.2.0

  • Streamable HTTP Transport - Generated servers use StreamableHTTPServerTransport with native http module instead of Express/SSE (deprecated June 2025). Single /mcp POST endpoint with /health GET endpoint.
  • MCP SDK v1.26.0 - Updated from ^1.0.0 to ^1.26.0
  • Resources & Prompts - Full support for all three MCP primitives. Resources expose structured data (documents, DB records, file trees). Prompts provide reusable templates for guided LLM workflows. Agent automatically extracts resources and prompts from descriptions via LLM.
  • GraphQL Input Source - New from-graphql CLI command and generate_from_graphql MCP tool. Queries map to read tools, mutations map to write tools, and GraphQL types are converted to Zod schemas.
  • Ontology Input Source - New from-ontology CLI command and generate_from_ontology MCP tool. Supports RDF/OWL, JSON-LD, and custom YAML formats. OWL Classes map to Resources, ObjectProperties to Tools, DataProperties to tool parameters.
  • LLM Response Caching - Deduplicates identical LLM calls with configurable TTL. Bypass with skipCache option.
  • Parallel Generation - Tool implementations generated concurrently by default (parallel: true, maxConcurrency: 5). Significant speed improvement for multi-tool servers.
  • InMemoryTransport Testing - Generated tests use InMemoryTransport.createLinkedPair() instead of subprocess spawning. Servers export createServer() factory function for testability.
  • Production Enhancements - Rate limiting, structured logging, metrics, and duration tracking wired into tool handlers. Enhanced health check with version, uptime, memory, and transport info. Structured error codes: INVALID_INPUT, NOT_FOUND, AUTH_ERROR, INTERNAL_ERROR.

v0.1.0

  • Initial TypeScript release
  • Natural language generation with Claude, Claude Code, OpenAI, Google Gemini
  • OpenAPI 3.0+ specification import
  • Database CRUD generation (SQLite, PostgreSQL)
  • Production features (logging, metrics, rate limiting)
  • MCP Registry server.json generation
  • TypeScript syntax validation
  • Web search for API documentation
  • GitHub Actions CI/CD generation
  • MCP Server mode for on-the-fly generation with Claude

License

MIT
