io.github.florentine-ai/mcp

by florentine-ai

An MCP server for Florentine.ai that translates natural language into MongoDB aggregation pipelines.

Florentine.ai MCP Server - Talk to your MongoDB & MySQL data

The Florentine.ai Model Context Protocol (MCP) Server lets you integrate natural language querying for your MongoDB & MySQL data directly into your custom AI Agent or AI Desktop App.

Questions are forwarded by the AI Agent to the MCP Server, transformed into database queries, and the query results are returned to the agent for further processing.

It also ships several extra features under the hood, e.g.:

  • Secure data separation for multi-tenant usage
  • Automated schema exploration
  • Semantic vector search/RAG support with automated embedding creation
  • Advanced lookup support
  • Exclusion of keys

Note: If you are looking for our API, you can find it here.

Prerequisites

  • Node.js >= v18.0.0
  • A Florentine.ai account (create a free account here)
  • A connected database and at least one analyzed and activated collection/table in your Florentine.ai account
  • A Florentine.ai API Key (you can find yours on your account dashboard)

Installation

Detailed documentation of the MCP Server can be found in our docs.

You can easily run the server using npx. See the following example for Claude Desktop (claude_desktop_config.json):

json
{
  "mcpServers": {
    "florentine": {
      "command": "npx",
      "args": ["-y", "@florentine-ai/mcp", "--mode", "static"],
      "env": {
        "FLORENTINE_TOKEN": "<FLORENTINE_API_KEY>"
      }
    }
  }
}

Available Tools

  • florentine_list_collections --> Lists all currently active collections/tables that can be queried. This includes their descriptions, keys, and value types.
  • florentine_ask --> Receives a question and returns a query, query result or answer (depending on the returnTypes setting).
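
For custom MCP clients, both tools are invoked with standard MCP tool-call parameters. A minimal sketch of the request shapes (the question text is illustrative only):

```typescript
// Tool-call parameter shapes for the two Florentine.ai tools.
// Only `question` is required for florentine_ask; further parameters
// exist in dynamic mode (see florentine_ask Parameters).
const listCollectionsCall = {
  name: 'florentine_list_collections',
  arguments: {}
};

const askCall = {
  name: 'florentine_ask',
  arguments: {
    question: 'How many orders were placed last month?'
  }
};
```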

Arguments

| Variable | Required | Allowed values | Description |
|---|---|---|---|
| --mode | Yes | static, dynamic | static (for existing external MCP clients, e.g. Claude Desktop) or dynamic (for own custom MCP clients). See integration modes section. |
| --debug | No | true | Enables logging to an external file. If set, requires --logpath to be set as well. |
| --logpath | No | Absolute log file path | File path to the debug log. If set, requires --debug to be set as well. |

Authentication

The Florentine.ai MCP Server uses an API key to authenticate requests. You can view and manage your API key on your account dashboard. The key must be added as an ENV variable to the configuration setup of the MCP server:

json
"env": {
  "FLORENTINE_TOKEN": "<FLORENTINE_API_KEY>"
}

Connect your LLM account

Florentine.ai follows a bring-your-own-key model, so you need to provide your own LLM API key (OpenAI, Google, Anthropic, Deepseek) with your MCP requests.

You have two options for adding your LLM API key:

Option 1: Save your LLM key in your account (recommended)

The easiest way to connect to your LLM provider is to save your LLM API key in your Florentine.ai dashboard.

  • Add your API key
  • Select your LLM provider (OpenAI, Deepseek, Google or Anthropic)
  • Click Save


Option 2: Provide your LLM key inside the MCP server config env variables

If you prefer not to store the key in your Florentine.ai account or want to use multiple LLM keys, you can pass the key inside the MCP server config:

json
"env": {
  "LLM_SERVICE": "<YOUR_LLM_SERVICE>",
  "LLM_KEY": "<YOUR_LLM_API_KEY>"
}

| Parameter | Description | Allowed Values |
|---|---|---|
| LLM_SERVICE | Specifies the LLM provider to use. | openai, google, anthropic or deepseek |
| LLM_KEY | Your API key for the provided LLM service. | A valid API key string |

Note: If you provide an LLM_KEY inside the env variables of the MCP server config, it will override any key stored in your account.

Integration Modes

You will have to set the operating mode in the args array of your MCP Server config to either static or dynamic:

json
"args": [
  "-y",
  "@florentine-ai/mcp",
  "--mode",
  "static"
]

Static Mode

Static mode should be used if you integrate Florentine.ai into an existing external MCP client, such as an MCP-ready desktop app like Claude Desktop or Dive AI.

In static mode you set all parameters (such as Return Types, Required Inputs, etc.) as env variables inside the config JSON. These parameters remain static until you change the setup config and are sent with every request to Florentine.ai. See the following example:

json
{
  "mcpServers": {
    "florentine": {
      "command": "npx",
      "args": ["-y", "@florentine-ai/mcp", "--mode", "static"],
      "env": {
        "FLORENTINE_TOKEN": "<FLORENTINE_API_KEY>",
        "SESSION_ID": "6f7d62f9-8ceb-456b-b7ef-6bd869c3b13a",
        "LLM_SERVICE": "openai",
        "LLM_KEY": "<YOUR_OPENAI_KEY>",
        "RETURN_TYPES": "[\"result\"]",
        "REQUIRED_INPUTS": "[{\"keyPath\":\"accountId\",\"value\":\"507f1f77bcf86cd799439011\"}]"
      }
    }
  }
}

Environment variables

| Variable | Required | Type | Description |
|---|---|---|---|
| FLORENTINE_TOKEN | Yes | String | Your Florentine.ai API key; copy it from the dashboard. |
| SESSION_ID | No | String | The session id of the client. Used for server-side chat history. See Sessions section. |
| LLM_SERVICE | No | String | Specifies the LLM provider to use. Only needed if you did not save the LLM key in your Florentine.ai account. See Connect your LLM account section. |
| LLM_KEY | No | String | Your API key for the provided LLM service. Only needed if you did not save the LLM key in your Florentine.ai account. See Connect your LLM account section. |
| RETURN_TYPES | No | Stringified JSON | The return types for florentine_ask tool calls. See Return Types section. |
| REQUIRED_INPUTS | No | Stringified JSON | The required inputs. See Required Inputs section. |
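
Because RETURN_TYPES and REQUIRED_INPUTS are stringified JSON, escaping them by hand is error-prone. A small helper sketch for templating static-mode configs (the helper name and shape are our own, not part of the package):

```typescript
// Builds the "env" object for a static-mode config entry.
// JSON.stringify produces the escaped string values the table above requires.
type RequiredInput = { keyPath: string; value: unknown };

function buildStaticEnv(opts: {
  token: string;
  returnTypes?: string[];
  requiredInputs?: RequiredInput[];
}): Record<string, string> {
  const env: Record<string, string> = { FLORENTINE_TOKEN: opts.token };
  if (opts.returnTypes) env.RETURN_TYPES = JSON.stringify(opts.returnTypes);
  if (opts.requiredInputs)
    env.REQUIRED_INPUTS = JSON.stringify(opts.requiredInputs);
  return env;
}

const env = buildStaticEnv({
  token: '<FLORENTINE_API_KEY>',
  returnTypes: ['result'],
  requiredInputs: [{ keyPath: 'accountId', value: '507f1f77bcf86cd799439011' }]
});
```

The resulting object can be dropped directly into the "env" section of the config JSON.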

Dynamic Mode

The dynamic mode should be used if you integrate Florentine.ai into your own custom MCP client.

In dynamic mode you can pass all parameters (such as Return Types, Required Inputs, etc.) directly to the florentine_ask tool. This means you can dynamically inject individual parameters (e.g. a user id) into every request forwarded to Florentine.ai.

To pass values dynamically, you have to overwrite the florentine_ask tool method inside your custom client/agent. Look at the following example using the standard @modelcontextprotocol TypeScript SDK:

ts
import { StdioClientTransport } from '@modelcontextprotocol/sdk/client/stdio.js';
import { Client } from '@modelcontextprotocol/sdk/client/index.js';
import { fetchUserSpecificData } from './userService.js';

// Create the MCP client instance
const mcpClient = new Client({
  name: 'florentine',
  version: '1.0.0'
});

// Define MCP setup configuration
const mcpSetupConfig = new StdioClientTransport({
  command: 'npx',
  args: ['-y', '@florentine-ai/mcp', '--mode', 'dynamic'],
  env: {
    FLORENTINE_TOKEN: '<FLORENTINE_API_KEY>'
  }
});

// Connect the MCP client
await mcpClient.connect(mcpSetupConfig);

// Save original callTool function to variable
const originalCallTool = mcpClient.callTool;

// Fetch and add florentine_ask parameters dynamically (mock implementation)
const enhanceAskParameters = async ({ question }: { question: string }) => {
  return {
    question,
    // Mocking user data fetch (i.e. returnTypes, requiredInputs, etc.),
    // replace with actual implementation
    ...(await fetchUserSpecificData({ userId: '<USER_ID>' }))
  };
};

// Overwrite callTool function with custom implementation
// enhancing florentine_ask method with dynamically injected parameters
mcpClient.callTool = async (params, resultSchema, options) => {
  if (params.name === 'florentine_ask')
    params.arguments = await enhanceAskParameters(
      params.arguments as unknown as { question: string }
    );
  return await originalCallTool(params, resultSchema, options);
};

// Call to florentine_ask tool will automatically enhance parameters
const result = await mcpClient.callTool({
  name: 'florentine_ask',
  arguments: {
    question: 'Who won the last table tennis match?'
  }
});

Example breakdown

Let's look at what happens in the example above in detail.

First of all, we create the MCP client and connect it:

ts
const mcpClient = new Client({
  name: 'florentine',
  version: '1.0.0'
});

const mcpSetupConfig = new StdioClientTransport({
  command: 'npx',
  args: ['-y', '@florentine-ai/mcp', '--mode', 'dynamic'],
  env: {
    FLORENTINE_TOKEN: '<FLORENTINE_API_KEY>'
  }
});

await mcpClient.connect(mcpSetupConfig);

Note: You may use env variables in dynamic mode as well. However, if you specify parameters dynamically, they will overwrite existing env values for those parameters.

Next, we save the original callTool function to a variable:

ts
const originalCallTool = mcpClient.callTool;

Then we create an enhanceAskParameters function that takes a question as input, fetches additional parameters (e.g. returnTypes or requiredInputs) for the user, and returns the merged parameters:

ts
const enhanceAskParameters = async ({ question }: { question: string }) => {
  return {
    question,
    // Example function that fetches additional data, e.g. user-specific requiredInputs
    ...(await fetchUserSpecificData({ userId: '<USER_ID>' }))
  };
};

Then we overwrite the original callTool function with an implementation that enhances the florentine_ask tool with the parameters coming from enhanceAskParameters and calls the original callTool function saved to the variable originalCallTool:

ts
mcpClient.callTool = async (params, resultSchema, options) => {
  if (params.name === 'florentine_ask')
    params.arguments = await enhanceAskParameters(
      params.arguments as unknown as { question: string }
    );
  return await originalCallTool(params, resultSchema, options);
};

Finally we can call the florentine_ask tool with a question and have the user-specific parameters dynamically injected:

ts
const result = await mcpClient.callTool({
  name: 'florentine_ask',
  arguments: {
    question: 'Who won the last table tennis match?'
  }
});

IMPORTANT: Never use dynamic mode without overwriting the florentine_ask implementation. If you do not overwrite it, your client/agent will directly use the MCP server-side implementation of the florentine_ask tool with all additional parameters, deciding on its own what values to fill in for returnTypes, requiredInputs, etc. This will result in unexpected behavior and lead to errors and wrong results.
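
One defensive pattern (our own suggestion, not part of the package) is to wrap callTool so that any florentine_ask call that was not enhanced fails fast instead of letting the agent guess parameter values:

```typescript
// Generic guard: rejects florentine_ask calls that were not enhanced
// with injected parameters (here: requiredInputs).
type ToolParams = { name: string; arguments?: Record<string, unknown> };
type CallTool = (params: ToolParams) => Promise<unknown>;

function guardFlorentineAsk(callTool: CallTool): CallTool {
  return async (params) => {
    if (
      params.name === 'florentine_ask' &&
      params.arguments?.requiredInputs === undefined
    ) {
      throw new Error(
        'florentine_ask called without injected requiredInputs - ' +
          'did you forget to overwrite callTool?'
      );
    }
    return callTool(params);
  };
}
```

Wrapping the enhanced callTool from the example above with this guard makes unenhanced calls surface as explicit errors during development.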

florentine_ask Parameters

| Variable | Required | Type | Description |
|---|---|---|---|
| sessionId | No | String | The session id of the client. Used for server-side chat history. See Sessions section. |
| returnTypes | No | Array<String> | The return types for florentine_ask tool calls. See Return Types section. |
| requiredInputs | No | Array<Object> | The required inputs. See Required Inputs section. |

Return Types

By default, the florentine_ask tool returns the result type configured in your Florentine.ai account (default: result). You can override this per request by specifying a returnTypes array; the available types correspond to the following three processing steps:

  1. Query Generation: The question is converted into a database query (MongoDB aggregation pipeline or MySQL query).
  2. Query Execution: The query runs against the database using the connection string you provided.
  3. Answer Generation: The structured result is transformed into a natural language answer.

Providing Return Types

You have two options to include a returnTypes array:

  • As the RETURN_TYPES env variable in your MCP setup config (possible in static and dynamic mode)
  • As the returnTypes parameter to the florentine_ask tool (possible only in dynamic mode)

As an env variable you provide the value as a stringified JSON array:

json
"env": {
  "RETURN_TYPES": "[\"query\",\"result\",\"answer\"]"
}

As a tool parameter you provide the value as an array:

json
{
  "returnTypes": ["query", "result", "answer"]
}

Return Types Configuration

You can choose which of these steps you want returned by specifying a returnTypes array with any combination of:

| returnTypes Value | Description | Expected Keys in Response |
|---|---|---|
| "query" | Returns the generated database query, the database and collection/table used, a confidence score on a scale from 0 to 10, and the database type ("mongodb" or "mysql"). | confidence, database, collection, query, databaseType |
| "result" | Returns the raw query results from the executed query. | result |
| "answer" | Returns a natural language response based on the results from the executed query. | answer |
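
For client-side handling, the mapping from the table above can be expressed directly in code. A sketch (the type and function names are our own):

```typescript
// Maps each returnTypes value to the response keys the table above lists.
type ReturnType = 'query' | 'result' | 'answer';

const RESPONSE_KEYS: Record<ReturnType, string[]> = {
  query: ['confidence', 'database', 'collection', 'query', 'databaseType'],
  result: ['result'],
  answer: ['answer']
};

// Keys expected in a response for a given returnTypes combination.
function expectedKeys(returnTypes: ReturnType[]): string[] {
  return returnTypes.flatMap((t) => RESPONSE_KEYS[t]);
}
```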

Secure Data Separation for multi-tenant usage

You can enable secure data separation by ensuring queries filter data based on provided values, which we call Required Inputs.

These values are added to the query by the Florentine.ai transformation layer after the LLM has generated the query. This way Florentine.ai can ensure each user only retrieves the data they are eligible to see.

Keys are defined as Required Inputs in your account; please refer to the section in our official docs on how to do that.

Providing Required Inputs

You have two options to include a requiredInputs array:

  • As the REQUIRED_INPUTS env variable in your MCP setup config (possible in static and dynamic mode)
  • As the requiredInputs parameter to the florentine_ask tool (possible only in dynamic mode)

As an env variable you provide the value as a stringified JSON array:

json
"env": {
  "REQUIRED_INPUTS": "[{\"keyPath\":\"userId\",\"value\":\"507f1f77bcf86cd799439011\"}]"
}

As a tool parameter you provide the value as an array:

json
"requiredInputs": [
  {
    "keyPath": "userId",
    "value": "507f1f77bcf86cd799439011"
  }
]

You may also provide a database and a collections array in case you have Required Inputs with the same keyPath in multiple collections/tables but different values per collection/table:

json
{
  "requiredInputs": [
    {
      "keyPath": "name",
      "value": "Sesame Street",
      "database": "rentals",
      "collections": ["houses"]
    },
    {
      "keyPath": "name",
      "value": { "$in": ["Ernie", "Bert"] },
      "database": "rentals",
      "collections": ["tenants"]
    }
  ]
}

Required Inputs Configuration

| Field | Required | Type | Description | Constraints |
|---|---|---|---|---|
| keyPath | Yes | String | The path to the field that should be filtered. | Must be a valid key path. |
| value | Yes | Any | The value(s) to filter by (type-specific, see Supported Value Types). | Must match the field's type (String, ObjectId, Boolean, Number, or Date). |
| database | No | String | The database containing the collections to filter. | Must be provided if collections is provided. |
| collections | No | Array<String> | The specific collections/tables within the database to apply the filter to. | Must contain at least one collection/table. |

Supported Value Types

Depending on the type of the key's values, you have different options for what you can provide as a Required Input value:

| Type | Format Examples | Operators Supported | Notes |
|---|---|---|---|
| String or Array<String> | "text", { $in: ["text1", "text2"] } | $in | Case-sensitive. |
| ObjectId or Array<ObjectId> | "507f191e810c19729de860ea", { $in: ["507f191e810c19729de860ea", "507f191e810c19729de860eb"] } | $in | Provided as strings. |
| Boolean | true/false | (none) | Only exact values. |
| Number or Array<Number> | 42, { $gt: 10, $lte: 100 }, { $in: [1, 2, 3] }, { $in: [{$gte:1}, {$lt:10}] } | $gt, $gte, $lt, $lte, $in | Supports decimals. |
| Date or Array<Date> | "2024-01-01T00:00:00Z" (UTC), "2024-01-01T00:00:00-05:00" (timezone offset) | $gt, $gte, $lt, $lte, $in | ISO 8601 format. |
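
The constraints from the two tables above can be checked client-side before a request is sent. A minimal validator sketch (the interface and function names are our own, not part of the package):

```typescript
// Client-side sanity check for a Required Input entry, enforcing the
// constraints listed above: keyPath/value are mandatory, `database`
// must accompany `collections`, and `collections` must be non-empty.
interface RequiredInput {
  keyPath: string;
  value: unknown;
  database?: string;
  collections?: string[];
}

function validateRequiredInput(input: RequiredInput): string[] {
  const errors: string[] = [];
  if (!input.keyPath) errors.push('keyPath is required');
  if (input.value === undefined) errors.push('value is required');
  if (input.collections !== undefined) {
    if (!input.database)
      errors.push('database is required when collections is set');
    if (input.collections.length === 0)
      errors.push('collections must contain at least one collection/table');
  }
  return errors;
}
```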

Usage Examples

Note: We will only provide examples as tool parameter input. For the env implementation, change the key name to REQUIRED_INPUTS and stringify the JSON.

Example type: String

Use case: A user should only be able to see statistics of the players they frequently play with.

Solution: Restricting access by player name to a group of 4 players.

ts
const res = await FlorentineAI.ask({
  question: 'Which player had the most wins?',
  requiredInputs: [
    {
      keyPath: 'name',
      value: { $in: ['Megan', 'Frank', 'Jen', 'Bob'] }
    }
  ]
});

Example type: ObjectId

Use case: A user should only be able to see the revenue of their own products.

Solution: Restricting access by accountId to one specific account.

ts
const res = await FlorentineAI.ask({
  question: 'What\'s the revenue of my products?',
  requiredInputs: [
    {
      keyPath: 'accountId',
      value: '507f1f77bcf86cd799439011'
    }
  ]
});

Example type: Boolean

Use case: Every analysis of customers should only be performed on paying customers.

Solution: Restricting access by isPaidAccount to paying customers only.

ts
const res = await FlorentineAI.ask({
  question: 'How many customers registered in the last year?',
  requiredInputs: [
    {
      keyPath: 'isPaidAccount',
      value: true
    }
  ]
});

Example type: Number

Use case: An employee should only be allowed to see payment information for payments below a certain amount.

Solution: Restricting access by amount to payments below 10,000.

ts
const res = await FlorentineAI.ask({
  question: 'List all payments we received.',
  requiredInputs: [
    {
      keyPath: 'amount',
      value: { $lt: 10000 }
    }
  ]
});

Example type: Date

Use case: The analysis of financial data should only include one specific year.

Solution: Restricting access by transactionDate to all transactions in 2024.

ts
const res = await FlorentineAI.ask({
  question: 'What was our revenue, profit and margin per month?',
  requiredInputs: [
    {
      keyPath: 'transactionDate',
      value: {
        $gte: '2024-01-01T00:00:00Z',
        $lt: '2025-01-01T00:00:00Z'
      }
    }
  ]
});
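
Date ranges like the one above can also be built programmatically. A small helper sketch (our own, not part of any Florentine.ai SDK) producing a UTC calendar-year range in the accepted ISO 8601 format:

```typescript
// Returns a Required Input value covering one full calendar year (UTC).
function yearRange(year: number): { $gte: string; $lt: string } {
  return {
    $gte: `${year}-01-01T00:00:00Z`,
    $lt: `${year + 1}-01-01T00:00:00Z`
  };
}

const transactionFilter = {
  keyPath: 'transactionDate',
  value: yearRange(2024)
};
```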

Sessions

Sessions allow Florentine.ai to keep a server-side chat history.

Since the client/agent embedding the MCP server usually keeps track of the chat history itself, adding a session is not strictly required.

However, it can help Florentine.ai better understand the context and may improve result quality.

Providing a session

You have two options to include a sessionId:

  • As the SESSION_ID env variable in your MCP setup config (possible in static and dynamic mode)
  • As the sessionId parameter to the florentine_ask tool (possible only in dynamic mode)

As an env variable:

json
"env": {
  "SESSION_ID": "<YOUR_SESSION_ID>"
}

As a tool parameter:

json
{
  "sessionId": "<YOUR_SESSION_ID>"
}
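
The docs do not prescribe a sessionId format, so any string that is stable per conversation should work; treating it as a UUID is our assumption. In Node.js a UUID can be generated with the built-in crypto module:

```typescript
import { randomUUID } from 'node:crypto';

// One session id per conversation keeps server-side histories separated.
const sessionId = randomUUID();

const askArguments = {
  question: 'Who won the last table tennis match?',
  sessionId // usable as a tool parameter only in dynamic mode
};
```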

Errors

All errors from the MCP Server tool calls follow this consistent JSON structure:

json
{
  "error": {
    "name": "FlorentineApiError",
    "statusCode": 500,
    "message": "The provided Florentine API key is invalid. You can find the key in your account settings: https://florentine.ai/settings",
    "errorCode": "INVALID_TOKEN",
    "requestId": "abc123"
  }
}

| Field | Type | Description |
|---|---|---|
| name | string | Error class name (e.g. FlorentineApiError, FlorentineConnectionError) |
| statusCode | number | HTTP status code (e.g. 400, 500) |
| message | string | Explanation of what went wrong |
| errorCode | string | Error identifier (e.g. NO_TOKEN, INVALID_LLM_KEY) |
| requestId | string | Unique ID for this request (helpful for support and debugging) |

Custom client error handling

The error object is returned as stringified JSON in the content array:

json
{
  "content": [
    {
      "type": "text",
      "text": "{\"error\":{\"name\":\"FlorentineApiError\",\"statusCode\":401,\"message\":\"The provided Florentine API key is invalid. You can find the key in your account settings: https://florentine.ai/settings\",\"errorCode\":\"INVALID_TOKEN\",\"requestId\":\"uhv99g\"}}"
    }
  ],
  "isError": true
}

You may parse the JSON in the text field and handle the different errors inside your custom client/agent.
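
A sketch of that parsing in a custom TypeScript client (the interfaces simply mirror the error structure shown above):

```typescript
// Extracts a Florentine error from an MCP tool result, or returns null.
interface FlorentineError {
  name: string;
  statusCode: number;
  message: string;
  errorCode: string;
  requestId: string;
}

interface ToolResult {
  isError?: boolean;
  content: { type: string; text: string }[];
}

function parseFlorentineError(result: ToolResult): FlorentineError | null {
  if (!result.isError) return null;
  const text = result.content.find((c) => c.type === 'text')?.text;
  if (!text) return null;
  try {
    return (JSON.parse(text) as { error?: FlorentineError }).error ?? null;
  } catch {
    return null; // text was not valid JSON
  }
}
```

Switching on errorCode (e.g. INVALID_TOKEN, LIMIT_REACHED) then lets your client decide whether to refresh credentials, retry, or surface the message.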

Common Errors

| Error Name | errorCode | Meaning |
|---|---|---|
| FlorentineApiError | MODE_MISSING | You must provide static or dynamic as mode argument |
| FlorentineApiError | MODE_INVALID | Mode is invalid (must be static or dynamic) |
| FlorentineApiError | INVALID_TOKEN | The Florentine API key is invalid |
| FlorentineApiError | LLM_KEY_WITHOUT_SERVICE | You must provide an llmService if llmKey is defined |
| FlorentineApiError | LLM_SERVICE_WITHOUT_KEY | You must provide an llmKey if llmService is defined |
| FlorentineApiError | INVALID_LLM_SERVICE | Invalid llmService provided |
| FlorentineApiError | NO_OWN_LLM_KEY | You need to provide your own LLM key |
| FlorentineApiError | NO_ACTIVE_COLLECTIONS | No collections/tables activated for the account |
| FlorentineApiError | MISSING_REQUIRED_INPUT | Required input is missing |
| FlorentineApiError | INVALID_REQUIRED_INPUT | Required input is invalid |
| FlorentineApiError | INVALID_REQUIRED_INPUT_FORMAT | Required input format is invalid |
| FlorentineApiError | NO_QUESTION | Question is missing |
| FlorentineApiError | EXECUTION_FAILURE | Created query execution failed |
| FlorentineApiError | NO_CHAT_ID | History chat id required but missing |
| FlorentineApiError | TOO_MANY_TOKENS | The query prompt exceeds the maximum tokens of the LLM model |
| FlorentineLLMError | API_KEY_ISSUE | LLM API key is invalid |
| FlorentineLLMError | NO_RETURN | Florentine.ai did not receive a valid LLM return |
| FlorentineLLMError | RATE_LIMIT_EXCEEDED | LLM request size too big |
| FlorentineConnectionError | CONNECTION_REFUSED | Could not connect to database for query execution |
| FlorentineCollectionError | NO_EXECUTION | Created query could not be executed |
| FlorentinePipelineError | MODIFICATION_FAILED | Modifying the query pipeline failed |
| FlorentineUsageError | LIMIT_REACHED | All API requests included in your plan are depleted |
| FlorentineUnknownError | UNKNOWN_ERROR | All occurring unknown errors |
