Models Plus

AI & Agents

by Vivek-k3

Discover and compare models and providers, view up-to-date pricing, limits, modalities, and capabilities, and filter by features such as reasoning, tool calling, and context length.

What is Models Plus

Discover and compare models and providers, view up-to-date pricing, limits, modalities, and capabilities, and filter by features such as reasoning, tool calling, and context length.

Core Features (4 tools)

search_models

Search for AI models by name, provider, or capabilities

get_model

Get detailed information about a specific AI model

search_providers

Search for AI model providers by name or environment variables

get_provider

Get detailed information about a specific AI model provider

README

<p align="center"> <picture> <source srcset="public/logo.svg" media="(prefers-color-scheme: dark)"> <img src="public/logo.svg" alt="Models PLUS" width="300"> </picture> </p> <p align="center"> <strong>Comprehensive AI Model Directory & MCP Server</strong> </p> <p align="center"> Unified REST API and Model Context Protocol (MCP) server for AI model metadata, built on <a href="https://models.dev">models.dev</a> data. </p> <p align="center"> <a href="#features">Features</a> • <a href="#quick-start">Quick Start</a> • <a href="#api-guide">API Docs</a> • <a href="#mcp-integration">MCP</a> • <a href="#contributing">Contributing</a> </p>
<p align="center"> <a href="https://modelsplus.quivr.tech"> <img src="https://img.shields.io/badge/Public%20API-Online-brightgreen" alt="Public API"> </a> <a href="https://github.com/vivek-k3/modelsplus/blob/main/LICENSE"> <img src="https://img.shields.io/badge/license-MIT-blue.svg" alt="MIT License"> </a> <a href="https://smithery.ai/server/@Vivek-k3/modelsplus"> <img src="https://smithery.ai/badge/@Vivek-k3/modelsplus" alt="Smithery"> </a> </p>

Features

Models PLUS provides a comprehensive AI model catalog with modern tooling:

Core Features

  • Unified REST API - Advanced search and filtering for 100+ AI models
  • Model Context Protocol (MCP) - Native MCP support with 4 powerful tools
  • Real-time Data - Fresh data from models.dev database
  • Lightning Fast - Built with Bun runtime and SST v3

Developer Experience

  • Zero Config - Biome + Ultracite for ultra-fast formatting and linting
  • TypeScript - Full type safety with strict TypeScript configuration
  • Cloudflare Workers - Global edge deployment with SST

Rich Metadata

  • Comprehensive Model Info - Pricing, limits, capabilities, modalities
  • Provider Details - Environment variables, documentation, integrations
  • Advanced Filtering - Search by cost, context length, features, and more

Public API: https://modelsplus.quivr.tech

Quick Start

Try the Public API

```bash
# List latest models
curl "https://modelsplus.quivr.tech/v1/models?limit=5"

# Find reasoning-capable models
curl "https://modelsplus.quivr.tech/v1/models?reasoning=true"

# Get specific model details
curl "https://modelsplus.quivr.tech/v1/models/openai:gpt-4o"
```

Local Development

```bash
# Install dependencies
bun install

# Start development server
bun run dev

# Build for production
bun run build
```

Installation

📋 Requirements

  • Bun 1.2.21 - Runtime and package manager
  • Node.js types - For tooling compatibility (bundled via SST)

Quick Install

```bash
# Install dependencies
bun install

# Generate JSON assets from vendor data
cd packages/api && bun run generate && bun run build
```

Development

Useful Scripts

  • bun run build — Build all workspaces
  • bun run dev — SST Dev with Cloudflare Worker locally
  • bun run dev:api — Direct Worker dev for API only
  • bun run deploy — Deploy via SST to Cloudflare Workers
  • bun run sync:upstream — Sync vendor subtree

Development Setup

  1. Generate JSON assets from vendor TOML files:

     ```bash
     cd packages/api
     bun run generate
     bun run build
     ```
  2. Run development servers:

     ```bash
     # SST Dev (recommended)
     bun run dev

     # Direct Worker dev
     cd packages/api && bun run dev
     ```

Note: SST config (sst.config.ts) auto-builds @modelsplus/api and exposes the Worker URL.

API Guide

Authentication

No authentication required. The API is publicly accessible.

Base URL

```
https://modelsplus.quivr.tech
```

Response Format

All API responses return JSON. Error responses include:

```json
{
  "error": "Error message",
  "status": 400
}
```
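Since errors share the same JSON shape as successful responses, a client can detect them by checking for the `error` key. A minimal sketch (the `ApiError` exception class is hypothetical; the README only specifies the error payload shape):

```python
import json


class ApiError(Exception):
    """Client-side exception for the {"error": ..., "status": ...} payload."""

    def __init__(self, status: int, message: str):
        super().__init__(f"{status}: {message}")
        self.status = status
        self.message = message


def parse_response(body: str) -> dict:
    """Decode a response body; raise ApiError if it carries an error payload."""
    payload = json.loads(body)
    if isinstance(payload, dict) and "error" in payload:
        raise ApiError(payload.get("status", 500), payload["error"])
    return payload
```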

Rate Limits

Currently no rate limiting is enforced, but please be respectful.

Query Parameters

Models API (/v1/models)

| Parameter | Type | Description | Example |
| --- | --- | --- | --- |
| `q` | string | Search query (model name, provider, etc.) | `q=gpt` |
| `provider` | string | Filter by provider | `provider=openai` |
| `tool_call` | boolean | Filter by tool calling support | `tool_call=true` |
| `attachment` | boolean | Filter by attachment support | `attachment=true` |
| `reasoning` | boolean | Filter by reasoning capabilities | `reasoning=true` |
| `temperature` | boolean | Filter by temperature support | `temperature=true` |
| `open_weights` | boolean | Filter by open weights availability | `open_weights=true` |
| `min_input_cost` | number | Minimum input cost filter | `min_input_cost=0.001` |
| `max_input_cost` | number | Maximum input cost filter | `max_input_cost=0.01` |
| `min_output_cost` | number | Minimum output cost filter | `min_output_cost=0.002` |
| `max_output_cost` | number | Maximum output cost filter | `max_output_cost=0.05` |
| `min_context` | number | Minimum context length | `min_context=32000` |
| `max_context` | number | Maximum context length | `max_context=128000` |
| `min_output_limit` | number | Minimum output limit | `min_output_limit=4000` |
| `max_output_limit` | number | Maximum output limit | `max_output_limit=8000` |
| `modalities` | string | Comma-separated modalities | `modalities=image,text` |
| `release_after` | string | Released after date (ISO) | `release_after=2024-01-01` |
| `release_before` | string | Released before date (ISO) | `release_before=2024-12-31` |
| `updated_after` | string | Updated after date (ISO) | `updated_after=2024-06-01` |
| `updated_before` | string | Updated before date (ISO) | `updated_before=2024-12-31` |
| `sort` | string | Sort field | `sort=name` or `sort=cost_input` |
| `order` | string | Sort order | `order=asc` or `order=desc` |
| `limit` | number | Maximum results (default: unlimited) | `limit=10` |
| `offset` | number | Skip number of results | `offset=20` |
| `fields` | string | Comma-separated fields to return | `fields=id,name,provider` |
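Filters from the table can be combined freely in one request. A minimal sketch of building such a URL client-side; the lowercase `true`/`false` serialization for booleans matches the examples above but is otherwise an assumption:

```python
from urllib.parse import urlencode

BASE_URL = "https://modelsplus.quivr.tech"


def models_url(**filters) -> str:
    """Build a /v1/models URL from the query parameters documented above."""
    params = {
        key: (str(value).lower() if isinstance(value, bool) else value)
        for key, value in filters.items()
        if value is not None
    }
    return f"{BASE_URL}/v1/models?{urlencode(params)}"


url = models_url(reasoning=True, min_context=32000, sort="cost_input",
                 order="asc", limit=5)
# e.g. .../v1/models?reasoning=true&min_context=32000&sort=cost_input&order=asc&limit=5
```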

Providers API (/v1/providers)

| Parameter | Type | Description | Example |
| --- | --- | --- | --- |
| `q` | string | Search query (provider name) | `q=openai` |
| `env` | string | Filter by environment variable | `env=API_KEY` |
| `npm` | string | Filter by npm package | `npm=openai` |
| `limit` | number | Maximum results | `limit=10` |
| `offset` | number | Skip number of results | `offset=5` |

Model Object Schema

```json
{
  "id": "openai:gpt-4o",
  "provider": "openai",
  "name": "GPT-4o",
  "release_date": "2024-05-13",
  "last_updated": "2024-08-06",
  "attachment": true,
  "reasoning": false,
  "temperature": true,
  "tool_call": true,
  "open_weights": false,
  "knowledge": "2023-10",
  "cost": {
    "input": 0.0025,
    "output": 0.01,
    "cache_read": 0.00125,
    "cache_write": 0.00625
  },
  "limit": {
    "context": 128000,
    "output": 16384
  },
  "modalities": {
    "input": ["text", "image"],
    "output": ["text"]
  }
}
```
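Once fetched, model objects with this schema are easy to filter client-side as well. A minimal sketch of a capability check against the fields above (the `supports` helper and its parameters are illustrative, not part of the API):

```python
def supports(model: dict, *, tools: bool = False, min_context: int = 0,
             input_modalities: frozenset = frozenset()) -> bool:
    """Check a model object (shaped like the schema above) against requirements."""
    if tools and not model.get("tool_call", False):
        return False
    if model.get("limit", {}).get("context", 0) < min_context:
        return False
    # Every required input modality must be listed by the model.
    if not set(input_modalities) <= set(model.get("modalities", {}).get("input", [])):
        return False
    return True


gpt4o = {
    "id": "openai:gpt-4o",
    "tool_call": True,
    "limit": {"context": 128000, "output": 16384},
    "modalities": {"input": ["text", "image"], "output": ["text"]},
}
supports(gpt4o, tools=True, min_context=32000, input_modalities={"image"})  # True
```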

Provider Object Schema

```json
{
  "id": "openai",
  "name": "OpenAI",
  "env": ["OPENAI_API_KEY"],
  "npm": "openai",
  "api": "https://api.openai.com/v1",
  "doc": "https://platform.openai.com/docs"
}
```
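The `env` array lists the environment variables a provider's SDK expects, which makes it straightforward to check local readiness before wiring a provider up. A small sketch (the `missing_env` helper is illustrative):

```python
import os


def missing_env(provider: dict) -> list:
    """Return env var names from the provider's `env` list that are unset locally."""
    return [name for name in provider.get("env", []) if name not in os.environ]


openai_provider = {"id": "openai", "env": ["OPENAI_API_KEY"]}
# missing_env(openai_provider) -> ["OPENAI_API_KEY"] unless the key is exported
```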

🔗 API Endpoints

Base URL: https://modelsplus.quivr.tech

| Method | Endpoint | Description |
| --- | --- | --- |
| GET | `/health` | Health/status check |
| GET | `/.well-known/mcp` | MCP discovery |
| GET | `/v1/models` | List/search models |
| GET | `/v1/models/count` | Count models after filters |
| GET | `/v1/models/:id` | Get specific model details |
| GET | `/v1/providers` | List/search providers |
| GET | `/v1/providers/count` | Count providers after filters |
| GET/POST | `/mcp` | MCP over HTTP (JSON-RPC) |
| GET/POST | `/mcp/http` | Alternate MCP endpoint |

Code Examples

JavaScript/TypeScript:

```typescript
// Search models
const models = await fetch('https://modelsplus.quivr.tech/v1/models?reasoning=true&limit=5')
  .then(res => res.json());

// Get specific model
const model = await fetch('https://modelsplus.quivr.tech/v1/models/openai:gpt-4o')
  .then(res => res.json());
```

Python:

```python
import requests

# Find vision-capable models
response = requests.get('https://modelsplus.quivr.tech/v1/models',
                        params={'modalities': 'image', 'limit': 5})
models = response.json()
```

MCP Integration

Models PLUS provides native Model Context Protocol (MCP) support for seamless integration with AI assistants.

Available Tools

  • search_models - Advanced search and filtering for AI models
  • get_model - Detailed information about specific models
  • search_providers - Search and filter AI providers
  • get_provider - Detailed provider information

Quick Setup

Claude Desktop

Add to your claude_desktop_config.json:

```json
{
  "mcpServers": {
    "models-plus": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/sdk", "server", "https://modelsplus.quivr.tech/mcp"]
    }
  }
}
```

Cursor

Configure MCP server with URL: https://modelsplus.quivr.tech/mcp

Other MCP Clients

For any MCP-compatible client, use: https://modelsplus.quivr.tech/mcp

Usage Examples

Once integrated, use natural language:

  • "Find all GPT-4 models from OpenAI"
  • "Show me reasoning-capable models under $1 per million tokens"
  • "What are the specs for Claude 3 Opus?"
  • "Which providers support tool calling?"

Direct HTTP API

```bash
# Discover capabilities
curl "https://modelsplus.quivr.tech/mcp"

# List available tools
curl -s "https://modelsplus.quivr.tech/mcp" \
  -X POST \
  -H 'Content-Type: application/json' \
  -d '{"jsonrpc":"2.0","id":1,"method":"tools/list","params":{}}'
```
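Invoking one of the listed tools uses the standard MCP `tools/call` JSON-RPC method. A sketch of constructing such a request body; the argument names passed to `search_models` (e.g. `q`) are assumptions, so check the schema returned by `tools/list`:

```python
import json


def tools_call(tool: str, arguments: dict, request_id: int = 2) -> str:
    """Serialize an MCP tools/call request body (JSON-RPC 2.0)."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })


body = tools_call("search_models", {"q": "gpt", "reasoning": True})
# POST this body to https://modelsplus.quivr.tech/mcp
# with the header Content-Type: application/json
```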

Data Source

Model and provider metadata is sourced from models.dev TOML files. The build process (packages/api/src/generate.ts) converts these into optimized JSON artifacts for the API and MCP handlers.

Deployment

Deploys via SST to Cloudflare Workers:

```bash
bun run deploy
```

SST config creates a sst.cloudflare.Worker with global edge deployment.

Contributing

We welcome contributions! Here's how to get started:

  1. Fork and create a feature branch
  2. Install dependencies: bun install
  3. Build and ensure tests pass: bun run build
  4. Format code: npx ultracite format && npx ultracite lint
  5. Test your changes thoroughly
  6. Submit a pull request with a clear description

Acknowledgments

Built on top of models.dev - a comprehensive open-source database of AI model specifications, pricing, and capabilities maintained by the SST team.


FAQ

What is Models Plus?

Discover and compare models and providers, view up-to-date pricing, limits, modalities, and capabilities, and filter by features such as reasoning, tool calling, and context length.

What tools does Models Plus provide?

It provides 4 tools: search_models, get_model, search_providers, and get_provider.
