io.github.VictoriaMetrics/mcp-vmanomaly

by victoriametrics

MCP Server for vmanomaly, providing anomaly detection and analysis in VictoriaMetrics environments.

README

MCP Server for vmanomaly

Latest Release License Slack X

An implementation of a Model Context Protocol (MCP) server for vmanomaly, the VictoriaMetrics Anomaly Detection product.

It provides seamless integration with the vmanomaly REST API and documentation for AI-assisted anomaly detection, model management, and observability insights.

Features

This MCP server enables AI assistants like Claude to interact with vmanomaly for:

  • Health Monitoring: Check vmanomaly server health and build information
  • Model Management: List, validate, and configure anomaly detection models (like zscore_online, prophet, and more)
  • Configuration Generation: Generate complete vmanomaly YAML configurations
  • Alert Rule Generation: Generate vmalert alerting rules based on anomaly score metrics to simplify alerting setup
  • Documentation Search: Full-text search across embedded vmanomaly documentation with fuzzy matching

The MCP server contains embedded, up-to-date vmanomaly documentation and can search it without online access.

The quality of the MCP Server and its responses depends very much on the capabilities of your client and the quality of the model you are using.

Requirements

  • vmanomaly instance (version 1.28.3+) with REST API access
  • Go 1.24 or higher (if building from source)

Installation

Go

bash
go install github.com/VictoriaMetrics/mcp-vmanomaly/cmd/mcp-vmanomaly@latest

Binaries

Download the latest release from the Releases page and place the binary in your PATH.

Example for Linux x86_64 (other architectures and platforms are also available):

bash
latest=$(curl -s https://api.github.com/repos/VictoriaMetrics/mcp-vmanomaly/releases/latest | grep 'tag_name' | cut -d\" -f4)
wget https://github.com/VictoriaMetrics/mcp-vmanomaly/releases/download/$latest/mcp-vmanomaly_Linux_x86_64.tar.gz
tar axvf mcp-vmanomaly_Linux_x86_64.tar.gz

Docker

You can run the vmanomaly MCP Server using Docker. This is the easiest way to get started, with no need to install Go or build from source.

bash
docker run -d --name mcp-vmanomaly \
  -e VMANOMALY_ENDPOINT=http://localhost:8490 \
  -e MCP_SERVER_MODE=http \
  -e MCP_LISTEN_ADDR=:8080 \
  -p 8080:8080 \
  ghcr.io/victoriametrics/mcp-vmanomaly

Replace the environment variable values with your own parameters.

Note that MCP_SERVER_MODE=http enables Streamable HTTP mode. More details about server modes can be found in the Configuration section.

See available Docker images in the GitHub registry.

Also see the Using Docker instead of binary section for details on using the Docker image with MCP clients in stdio mode.

Source Code

To build the binary from source:

  • Clone repo:

    bash
    git clone https://github.com/VictoriaMetrics/mcp-vmanomaly.git
    cd mcp-vmanomaly
    
  • Build binary from cloned source code:

    bash
    make build
    # after that you can find the mcp-vmanomaly binary; copy it to your PATH or run it in place
    
  • Build image from cloned source code:

    bash
    docker build -t mcp-vmanomaly .
    # after that you can use the mcp-vmanomaly docker image for running or pushing
    

Configuration

MCP Server for vmanomaly is configured via environment variables:

| Variable | Description | Required | Default | Allowed values |
|---|---|---|---|---|
| VMANOMALY_ENDPOINT | vmanomaly server endpoint URL (e.g., http://localhost:8490) | Yes | - | - |
| VMANOMALY_BEARER_TOKEN | Bearer token for authenticating with vmanomaly API | No | - | - |
| VMANOMALY_HEADERS | Custom HTTP headers for requests (comma-separated key=value pairs, e.g., X-Custom=value1,X-Auth=value2) | No | - | - |
| MCP_SERVER_MODE | Server operation mode. See Modes for details. | No | stdio | stdio, http, sse |
| MCP_LISTEN_ADDR | Address for HTTP server to listen on | No | localhost:8080 | - |
| MCP_DISABLED_TOOLS | Comma-separated list of tools to disable | No | - | - |
| MCP_DISABLE_RESOURCES | Disable all resources (documentation search will continue to work) | No | false | false, true |
| MCP_HEARTBEAT_INTERVAL | Heartbeat interval for streamable-http protocol (keeps connection alive through network infrastructure) | No | 30s | - |
| MCP_LOG_LEVEL | Log level: debug (verbose), info, warn, or error | No | info | debug, info, warn, error |
| MCP_LOG_FILE | Log file path (empty = stderr) | No | stderr | - |
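As an illustration of the comma-separated variables above, this sketch splits them the way the server would; the header values are placeholders and the tool names come from the Toolset section below:

```shell
# VMANOMALY_HEADERS: one key=value pair per HTTP header, separated by commas
VMANOMALY_HEADERS="X-Custom=value1,X-Auth=value2"
echo "$VMANOMALY_HEADERS" | tr ',' '\n'   # one header per line

# MCP_DISABLED_TOOLS: comma-separated tool names to disable
MCP_DISABLED_TOOLS="vmanomaly_get_metrics,vmanomaly_check_compatibility"
echo "$MCP_DISABLED_TOOLS" | tr ',' '\n'  # one tool per line
```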

Modes

MCP Server supports the following modes of operation (transports):

  • stdio - Standard input/output mode, where the server reads commands from standard input and writes responses to standard output. This is the default mode and is suitable for local servers.
  • http - Streamable HTTP. The server exposes the /mcp endpoint for HTTP connections.
  • sse - Server-Sent Events. The server exposes the /sse and /message endpoints for SSE connections.

[!NOTE] The sse transport was officially deprecated in the MCP Specification (version 2025-03-26) and replaced by the Streamable HTTP transport (http mode). Its support may be removed in a future release; use the Streamable HTTP transport if your client supports it.

More information about transports can be found in the MCP docs.

Configuration examples

bash
# Basic configuration
export VMANOMALY_ENDPOINT="http://localhost:8490"

# With authentication
export VMANOMALY_ENDPOINT="http://localhost:8490"
export VMANOMALY_BEARER_TOKEN="your-token"

# With custom headers (e.g., behind a reverse proxy)
export VMANOMALY_HEADERS="X-Custom-Header=value1,X-Another=value2"

# Server mode
export MCP_SERVER_MODE="http"
export MCP_LISTEN_ADDR="0.0.0.0:8080"

# Logging
export MCP_LOG_LEVEL="debug"
export MCP_LOG_FILE="/tmp/mcp-vmanomaly.log"

Endpoints

In HTTP and SSE modes the MCP server provides the following endpoints:

| Endpoint | Description |
|---|---|
| /mcp | HTTP endpoint for streaming messages in HTTP mode (for MCP clients that support Streamable HTTP) |
| /metrics | Metrics in Prometheus format for monitoring the MCP server |
| /health/liveness | Liveness check endpoint to ensure the server is running |
| /health/readiness | Readiness check endpoint to ensure the server is ready to accept requests |
| /sse + /message | Endpoints for messages in SSE mode (for MCP clients that support SSE) |
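With the server running in HTTP mode, the health endpoints can be probed with curl. This is a sketch assuming MCP_LISTEN_ADDR=:8080; it prints the HTTP status code per endpoint, or 000 when nothing is listening at that address:

```shell
# Probe liveness and readiness of the MCP server
# (prints an HTTP status code per line; 000 means the server is unreachable)
curl -s -o /dev/null -w '%{http_code}\n' http://localhost:8080/health/liveness || true
curl -s -o /dev/null -w '%{http_code}\n' http://localhost:8080/health/readiness || true
```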

Setup in clients

Cursor

Go to: Settings → Cursor Settings → MCP → Add new global MCP server and paste the following configuration into your Cursor ~/.cursor/mcp.json file:

json
{
  "mcpServers": {
    "vmanomaly": {
      "command": "/path/to/mcp-vmanomaly",
      "env": {
        "VMANOMALY_ENDPOINT": "http://localhost:8490",
        "VMANOMALY_BEARER_TOKEN": "<YOUR_TOKEN>",
        "VMANOMALY_HEADERS": "X-Custom=value1,X-Auth=value2"
      }
    }
  }
}

See Cursor MCP docs for more info.

Claude Desktop

Add this to your Claude Desktop claude_desktop_config.json file (you can find it by opening Settings → Developer → Edit config):

json
{
  "mcpServers": {
    "vmanomaly": {
      "command": "/path/to/mcp-vmanomaly",
      "env": {
        "VMANOMALY_ENDPOINT": "http://localhost:8490",
        "VMANOMALY_BEARER_TOKEN": "<YOUR_TOKEN>",
        "VMANOMALY_HEADERS": "X-Custom=value1,X-Auth=value2"
      }
    }
  }
}

See Claude Desktop MCP docs for more info.

Claude Code

Run the command:

sh
claude mcp add vmanomaly \
  -e VMANOMALY_ENDPOINT=http://localhost:8490 \
  -e VMANOMALY_BEARER_TOKEN=<YOUR_TOKEN> \
  -e VMANOMALY_HEADERS="X-Custom=value1,X-Auth=value2" \
  -- /path/to/mcp-vmanomaly

See Claude Code MCP docs for more info.

Visual Studio Code

Add this to your VS Code MCP config file:

json
{
  "servers": {
    "vmanomaly": {
      "type": "stdio",
      "command": "/path/to/mcp-vmanomaly",
      "env": {
        "VMANOMALY_ENDPOINT": "http://localhost:8490",
        "VMANOMALY_BEARER_TOKEN": "<YOUR_TOKEN>",
        "VMANOMALY_HEADERS": "X-Custom=value1,X-Auth=value2"
      }
    }
  }
}

See VS Code MCP docs for more info.

Zed

Add the following to your Zed config file:

json
  "context_servers": {
    "vmanomaly": {
      "command": {
        "path": "/path/to/mcp-vmanomaly",
        "args": [],
        "env": {
          "VMANOMALY_ENDPOINT": "http://localhost:8490",
          "VMANOMALY_BEARER_TOKEN": "<YOUR_TOKEN>",
          "VMANOMALY_HEADERS": "X-Custom=value1,X-Auth=value2"
        }
      },
      "settings": {}
    }
  }

See Zed MCP docs for more info.

JetBrains IDEs

  • Open Settings → Tools → AI Assistant → Model Context Protocol (MCP).
  • Click Add (+).
  • Select As JSON.
  • Paste the following into the input field:
json
{
  "mcpServers": {
    "vmanomaly": {
      "command": "/path/to/mcp-vmanomaly",
      "env": {
        "VMANOMALY_ENDPOINT": "http://localhost:8490",
        "VMANOMALY_BEARER_TOKEN": "<YOUR_TOKEN>",
        "VMANOMALY_HEADERS": "X-Custom=value1,X-Auth=value2"
      }
    }
  }
}

Windsurf

Add the following to your Windsurf MCP config file:

json
{
  "mcpServers": {
    "vmanomaly": {
      "command": "/path/to/mcp-vmanomaly",
      "env": {
        "VMANOMALY_ENDPOINT": "http://localhost:8490",
        "VMANOMALY_BEARER_TOKEN": "<YOUR_TOKEN>",
        "VMANOMALY_HEADERS": "X-Custom=value1,X-Auth=value2"
      }
    }
  }
}

See Windsurf MCP docs for more info.

Using Docker instead of binary

You can run the vmanomaly MCP server using Docker instead of a local binary.

Replace the command in the configuration examples above as follows. Passing -e VMANOMALY_ENDPOINT to docker run without a value forwards that variable from the environment the client provides:

json
{
  "mcpServers": {
    "vmanomaly": {
      "command": "docker",
      "args": [
        "run",
        "-i", "--rm",
        "-e", "VMANOMALY_ENDPOINT",
        "-e", "VMANOMALY_BEARER_TOKEN",
        "-e", "VMANOMALY_HEADERS",
        "ghcr.io/victoriametrics/mcp-vmanomaly"
      ],
      "env": {
        "VMANOMALY_ENDPOINT": "http://localhost:8490",
        "VMANOMALY_BEARER_TOKEN": "<YOUR_TOKEN>",
        "VMANOMALY_HEADERS": "X-Custom=value1,X-Auth=value2"
      }
    }
  }
}

Usage

After installing and configuring the MCP server, you can start using it with your favorite MCP client.

You can start a dialog with the AI assistant with the phrase:

code
Use MCP vmanomaly in the following answers

This is not required, though: you can simply start asking questions, and the assistant will automatically use the tools and documentation to provide the best answers.

Toolset

MCP vmanomaly provides tools organized into categories:

Health & Info (3 tools)

| Tool | Description |
|---|---|
| vmanomaly_health_check | Check vmanomaly server health status |
| vmanomaly_get_buildinfo | Get build information (version, build time, Go version) |
| vmanomaly_get_metrics | Get vmanomaly server metrics in Prometheus format |

Model Configuration (3 tools)

| Tool | Description |
|---|---|
| vmanomaly_list_models | List all available anomaly detection model types |
| vmanomaly_get_model_schema | Get JSON schema for a specific model type |
| vmanomaly_validate_model_config | Validate model configuration before using it |

Configuration (1 tool)

| Tool | Description |
|---|---|
| vmanomaly_validate_config | Validate complete vmanomaly YAML configuration |

Documentation (1 tool)

| Tool | Description |
|---|---|
| vmanomaly_search_docs | Full-text search across vmanomaly documentation with fuzzy matching |

Compatibility (1 tool)

| Tool | Description |
|---|---|
| vmanomaly_check_compatibility | Check if persisted state is compatible with runtime version |

Alerting (1 tool)

| Tool | Description |
|---|---|
| vmanomaly_generate_alert_rule | Generate VMAlert rule YAML for anomaly score alerting |
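To give a sense of the wire format, this sketch prints the JSON-RPC 2.0 request body an MCP client would send to call one of these tools over Streamable HTTP. The transport also requires an initialize handshake and session header, omitted here, and the query value is a placeholder:

```shell
# tools/call request body for vmanomaly_search_docs
# (printed only; an MCP client sends this after the initialize handshake)
cat <<'EOF'
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "vmanomaly_search_docs",
    "arguments": { "query": "zscore threshold" }
  }
}
EOF
```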

Dialog example

This is an example dialog showing how an AI assistant can help with vmanomaly configuration and anomaly detection:

  • You: I want to set up anomaly detection for CPU usage. What models are available?

  • Assistant: I can help you set up anomaly detection for CPU usage. Let me check available models in vmanomaly.

    Your vmanomaly instance supports the following anomaly detection models:

    • zscore - Statistical model based on z-score (standard deviations from mean)
    • prophet - Facebook's Prophet model for time series with seasonality
    • mad - Median Absolute Deviation model
    • holtwinters - Holt-Winters seasonal method
    • isolation_forest_univariate - Isolation Forest for univariate time series
    • rolling_quantile - Rolling quantile-based detection
    • auto - Automatic model selection

    For CPU usage monitoring, I'd recommend starting with zscore model as it's simple, fast, and works well for metrics with stable patterns. Would you like me to generate a configuration?

  • You: Yes, please generate a configuration for monitoring CPU usage with zscore model

  • Assistant: I'll generate a complete vmanomaly configuration for CPU usage monitoring with the zscore model:

    yaml
    scheduler:
      infer_every: "1m"
      fit_every: "1h"
      fit_window: "2d"
    
    reader:
      datasource_url: "http://victoriametrics:8428"
      queries:
        cpu_usage: "avg(rate(node_cpu_seconds_total{mode!='idle'}[5m])) by (instance)"
      step: "1m"
    
    models:
      zscore_cpu:
        class: "zscore"
        z_threshold: 2.5
    
    writer:
      datasource_url: "http://victoriametrics:8428"
    

    This configuration will:

    • Run inference every minute
    • Retrain the model every hour using the last 2 days of data
    • Use z-score model with threshold of 2.5 standard deviations
    • Monitor CPU usage across all instances

    Would you like me to validate this configuration or adjust any parameters?

  • You: Looks good! Can you create a detection task to test it?

  • Assistant: I'll create an anomaly detection task to test the configuration on your recent CPU data:

    Task created successfully! Here's the status:

    • Task ID: task-abc-123
    • Status: Running
    • Model: zscore with z_threshold=2.5
    • Time range: Last 24 hours
    • Progress: Processing data...

    The task is analyzing your CPU usage data. You can check the results in VictoriaMetrics by querying the anomaly score metrics. Would you like me to check the task status or show you how to query the results?

During this dialog, the assistant used the following tools:

  • vmanomaly_list_models to get available model types
  • vmanomaly_validate_config to validate the configuration
  • vmanomaly_create_detection_task to start anomaly detection
  • vmanomaly_search_docs to provide context about model parameters

Monitoring

In HTTP and SSE modes the MCP Server provides metrics in Prometheus format at the /metrics endpoint.

Tracked operations:

  • mcp_vmanomaly_initialize_total - Client connections
  • mcp_vmanomaly_call_tool_total{name,is_error} - Tool calls with success/error tracking
  • mcp_vmanomaly_read_resource_total{uri} - Documentation resource reads
  • mcp_vmanomaly_list_*_total - List operations (tools, resources, prompts)
  • mcp_vmanomaly_error_total{method,error} - Errors by method and type

Example:

bash
# Start in HTTP mode
VMANOMALY_ENDPOINT="http://localhost:8490" MCP_SERVER_MODE=http ./bin/mcp-vmanomaly

# Query metrics
curl http://localhost:8080/metrics

Roadmap

  • Add prompts for common vmanomaly workflows (model selection, troubleshooting)
  • Grafana dashboard for MCP server monitoring
  • Add API compatibility matrix to gracefully handle version differences between MCP client and vmanomaly server (API is evolving, features may be unavailable)

Disclaimer

AI services and agents, along with MCP servers like this one, cannot guarantee the accuracy, completeness, or reliability of results. You should double-check any results obtained with AI.

The quality of the MCP Server and its responses depends very much on the capabilities of your client and the quality of the model you are using.

Contributing

Contributions to the MCP vmanomaly project are welcome!

Please feel free to submit issues, feature requests, or pull requests.

Related Projects

Support

For vmanomaly-specific questions, see the vmanomaly documentation.

For MCP server issues, please open an issue in this repository.
