MCP Server for vmanomaly
The implementation of Model Context Protocol (MCP) server for vmanomaly - VictoriaMetrics Anomaly Detection product.
This provides seamless integration with vmanomaly REST API and documentation for AI-assisted anomaly detection, model management, and observability insights.
Features
This MCP server enables AI assistants like Claude to interact with vmanomaly for:
- **Health Monitoring**: Check vmanomaly server health and build information
- **Model Management**: List, validate, and configure anomaly detection models (such as `zscore_online`, `prophet`, and more)
- **Configuration Generation**: Generate complete vmanomaly YAML configurations
- **Alert Rule Generation**: Generate vmalert alerting rules based on anomaly score metrics to simplify alerting setup
- **Documentation Search**: Full-text search across embedded vmanomaly documentation with fuzzy matching
The MCP server contains embedded up-to-date vmanomaly documentation and is able to search it without online access.
The quality of the MCP Server and its responses depends very much on the capabilities of your client and the quality of the model you are using.
Requirements
- vmanomaly instance (version 1.28.3+) with REST API access
- Go 1.24 or higher (if building from source)
Installation
Go
go install github.com/VictoriaMetrics/mcp-vmanomaly/cmd/mcp-vmanomaly@latest
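`go install` places the binary in `GOBIN` (or, when `GOBIN` is unset, in `GOPATH/bin`, which defaults to `$HOME/go/bin`). A quick sketch to check that this directory is on your `PATH`:

```shell
# Locate the "go install" target directory and check it is on PATH.
GOBIN_DIR="${GOBIN:-$(go env GOPATH 2>/dev/null || echo "$HOME/go")/bin}"
echo "binary expected at: $GOBIN_DIR/mcp-vmanomaly"
case ":$PATH:" in
  *":$GOBIN_DIR:"*) echo "PATH: ok" ;;
  *) echo "PATH: add $GOBIN_DIR" ;;
esac
```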
Binaries
Download the latest release from the Releases page and put it in your PATH.
Example for Linux x86_64 (other architectures and platforms are also available):
latest=$(curl -s https://api.github.com/repos/VictoriaMetrics/mcp-vmanomaly/releases/latest | grep 'tag_name' | cut -d\" -f4)
wget https://github.com/VictoriaMetrics/mcp-vmanomaly/releases/download/$latest/mcp-vmanomaly_Linux_x86_64.tar.gz
tar axvf mcp-vmanomaly_Linux_x86_64.tar.gz
Docker
You can run vmanomaly MCP Server using Docker.
This is the easiest way to get started without needing to install Go or build from source.
docker run -d --name mcp-vmanomaly \
-e VMANOMALY_ENDPOINT=http://localhost:8490 \
-e MCP_SERVER_MODE=http \
-e MCP_LISTEN_ADDR=:8080 \
-p 8080:8080 \
ghcr.io/victoriametrics/mcp-vmanomaly
Replace the environment variables with your own parameters.
Note that MCP_SERVER_MODE=http enables Streamable HTTP mode.
More details about server modes can be found in the Configuration section.
See the available Docker images in the GitHub registry.
Also see the Using Docker instead of binary section for more details on using Docker with MCP clients in stdio mode.
Source Code
To build the binary from source code, use the following approach:

1. Clone the repo:

   git clone https://github.com/VictoriaMetrics/mcp-vmanomaly.git
   cd mcp-vmanomaly

2. Build the binary from the cloned source code:

   make build
   # the resulting mcp-vmanomaly binary can be copied to your PATH or run in place

3. Or build a Docker image from the cloned source code:

   docker build -t mcp-vmanomaly .
   # the resulting mcp-vmanomaly image can be used for running or pushing
Configuration
MCP Server for vmanomaly is configured via environment variables:
| Variable | Description | Required | Default | Allowed values |
|---|---|---|---|---|
| `VMANOMALY_ENDPOINT` | vmanomaly server endpoint URL (e.g., `http://localhost:8490`) | Yes | - | - |
| `VMANOMALY_BEARER_TOKEN` | Bearer token for authenticating with the vmanomaly API | No | - | - |
| `VMANOMALY_HEADERS` | Custom HTTP headers for requests (comma-separated key=value pairs, e.g., `X-Custom=value1,X-Auth=value2`) | No | - | - |
| `MCP_SERVER_MODE` | Server operation mode. See Modes for details. | No | `stdio` | `stdio`, `http`, `sse` |
| `MCP_LISTEN_ADDR` | Address for the HTTP server to listen on | No | `localhost:8080` | - |
| `MCP_DISABLED_TOOLS` | Comma-separated list of tools to disable | No | - | - |
| `MCP_DISABLE_RESOURCES` | Disable all resources (documentation search will continue to work) | No | `false` | `false`, `true` |
| `MCP_HEARTBEAT_INTERVAL` | Heartbeat interval for the streamable-http protocol (keeps the connection alive through network infrastructure) | No | `30s` | - |
| `MCP_LOG_LEVEL` | Log level | No | `info` | `debug`, `info`, `warn`, `error` |
| `MCP_LOG_FILE` | Log file path (empty = stderr) | No | stderr | - |
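To illustrate the `VMANOMALY_HEADERS` format: splitting the value on commas and turning the first `=` of each pair into `: ` shows the individual HTTP headers it represents (a format illustration only, not how the server parses it internally):

```shell
# Decompose the comma-separated VMANOMALY_HEADERS value into
# one "Header: value" line per pair.
VMANOMALY_HEADERS="X-Custom=value1,X-Auth=value2"
echo "$VMANOMALY_HEADERS" | tr ',' '\n' | sed 's/=/: /'
# prints:
#   X-Custom: value1
#   X-Auth: value2
```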
Modes
MCP Server supports the following modes of operation (transports):
- `stdio` - Standard input/output mode, where the server reads commands from standard input and writes responses to standard output. This is the default mode and is suitable for local servers.
- `http` - Streamable HTTP. The server will expose the `/mcp` endpoint for HTTP connections.
- `sse` - Server-Sent Events. The server will expose the `/sse` and `/message` endpoints for SSE connections.
> [!NOTE]
> The `sse` transport mode was officially deprecated in the MCP specification (version 2025-03-26) and replaced by the Streamable HTTP transport (`http` mode). Its support may be removed in future releases; use the Streamable HTTP transport if your client supports it.
More information about transports can be found in the MCP documentation.
Configuration examples
# Basic configuration
export VMANOMALY_ENDPOINT="http://localhost:8490"
# With authentication
export VMANOMALY_ENDPOINT="http://localhost:8490"
export VMANOMALY_BEARER_TOKEN="your-token"
# With custom headers (e.g., behind a reverse proxy)
export VMANOMALY_HEADERS="X-Custom-Header=value1,X-Another=value2"
# Server mode
export MCP_SERVER_MODE="http"
export MCP_LISTEN_ADDR="0.0.0.0:8080"
# Logging
export MCP_LOG_LEVEL="debug"
export MCP_LOG_FILE="/tmp/mcp-vmanomaly.log"
Endpoints
In HTTP and SSE modes the MCP server provides the following endpoints:
| Endpoint | Description |
|---|---|
| `/mcp` | HTTP endpoint for streaming messages in HTTP mode (for MCP clients that support Streamable HTTP) |
| `/metrics` | Metrics in Prometheus format for monitoring the MCP server |
| `/health/liveness` | Liveness check endpoint to ensure the server is running |
| `/health/readiness` | Readiness check endpoint to ensure the server is ready to accept requests |
| `/sse` + `/message` | Endpoints for messages in SSE mode (for MCP clients that support SSE) |
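In `http` or `sse` mode the health endpoints can be probed with plain `curl`. This sketch assumes the listen address from the Docker example above (`localhost:8080`) and degrades gracefully when the server is not running:

```shell
# Probe the MCP server health endpoints; prints "ok" or "unreachable"
# per endpoint instead of failing when the server is down.
ADDR="http://localhost:8080"
for ep in /health/liveness /health/readiness; do
  if curl -fsS --max-time 2 "$ADDR$ep" >/dev/null 2>&1; then
    echo "$ep: ok"
  else
    echo "$ep: unreachable"
  fi
done
```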
Setup in clients
Cursor
Go to: Settings → Cursor Settings → MCP → Add new global MCP server and paste the following configuration into your Cursor ~/.cursor/mcp.json file:
{
"mcpServers": {
"vmanomaly": {
"command": "/path/to/mcp-vmanomaly",
"env": {
"VMANOMALY_ENDPOINT": "http://localhost:8490",
"VMANOMALY_BEARER_TOKEN": "<YOUR_TOKEN>",
"VMANOMALY_HEADERS": "X-Custom=value1,X-Auth=value2"
}
}
}
}
See Cursor MCP docs for more info.
Claude Desktop
Add this to your Claude Desktop claude_desktop_config.json file (you can find it via Settings → Developer → Edit config):
{
"mcpServers": {
"vmanomaly": {
"command": "/path/to/mcp-vmanomaly",
"env": {
"VMANOMALY_ENDPOINT": "http://localhost:8490",
"VMANOMALY_BEARER_TOKEN": "<YOUR_TOKEN>",
"VMANOMALY_HEADERS": "X-Custom=value1,X-Auth=value2"
}
}
}
}
See Claude Desktop MCP docs for more info.
Claude Code
Run the command:
claude mcp add vmanomaly -- /path/to/mcp-vmanomaly \
-e VMANOMALY_ENDPOINT=http://localhost:8490 \
-e VMANOMALY_BEARER_TOKEN=<YOUR_TOKEN> \
-e VMANOMALY_HEADERS="X-Custom=value1,X-Auth=value2"
See Claude Code MCP docs for more info.
Visual Studio Code
Add this to your VS Code MCP config file:
{
"servers": {
"vmanomaly": {
"type": "stdio",
"command": "/path/to/mcp-vmanomaly",
"env": {
"VMANOMALY_ENDPOINT": "http://localhost:8490",
"VMANOMALY_BEARER_TOKEN": "<YOUR_TOKEN>",
"VMANOMALY_HEADERS": "X-Custom=value1,X-Auth=value2"
}
}
}
}
See VS Code MCP docs for more info.
Zed
Add the following to your Zed config file:
{
  "context_servers": {
    "vmanomaly": {
      "command": {
        "path": "/path/to/mcp-vmanomaly",
        "args": [],
        "env": {
          "VMANOMALY_ENDPOINT": "http://localhost:8490",
          "VMANOMALY_BEARER_TOKEN": "<YOUR_TOKEN>",
          "VMANOMALY_HEADERS": "X-Custom=value1,X-Auth=value2"
        }
      },
      "settings": {}
    }
  }
}
See Zed MCP docs for more info.
JetBrains IDEs
- Open Settings → Tools → AI Assistant → Model Context Protocol (MCP).
- Click Add (+).
- Select As JSON.
- Paste the following into the input field:
{
"mcpServers": {
"vmanomaly": {
"command": "/path/to/mcp-vmanomaly",
"env": {
"VMANOMALY_ENDPOINT": "http://localhost:8490",
"VMANOMALY_BEARER_TOKEN": "<YOUR_TOKEN>",
"VMANOMALY_HEADERS": "X-Custom=value1,X-Auth=value2"
}
}
}
}
Windsurf
Add the following to your Windsurf MCP config file:
{
"mcpServers": {
"vmanomaly": {
"command": "/path/to/mcp-vmanomaly",
"env": {
"VMANOMALY_ENDPOINT": "http://localhost:8490",
"VMANOMALY_BEARER_TOKEN": "<YOUR_TOKEN>",
"VMANOMALY_HEADERS": "X-Custom=value1,X-Auth=value2"
}
}
}
}
See Windsurf MCP docs for more info.
Using Docker instead of binary
You can run the vmanomaly MCP server using Docker instead of a local binary.
Replace the run command in the configuration examples above as follows:
{
"mcpServers": {
"vmanomaly": {
"command": "docker",
"args": [
"run",
"-i", "--rm",
"-e", "VMANOMALY_ENDPOINT",
"-e", "VMANOMALY_BEARER_TOKEN",
"-e", "VMANOMALY_HEADERS",
"ghcr.io/victoriametrics/mcp-vmanomaly"
],
"env": {
"VMANOMALY_ENDPOINT": "http://localhost:8490",
"VMANOMALY_BEARER_TOKEN": "<YOUR_TOKEN>",
"VMANOMALY_HEADERS": "X-Custom=value1,X-Auth=value2"
}
}
}
}
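One caveat with the Docker form (a note based on general Docker networking, not on this project's docs): inside the container, `http://localhost:8490` points at the container itself. When vmanomaly runs on the host, you typically need a host-reachable address instead, e.g. `host.docker.internal` on Docker Desktop:

```json
"env": {
  "VMANOMALY_ENDPOINT": "http://host.docker.internal:8490"
}
```

On Linux without Docker Desktop, the host's address on the Docker bridge network (or `--network host`) serves the same purpose.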
Usage
After installing and configuring the MCP server, you can start using it with your favorite MCP client.
You can start the dialog with your AI assistant with the phrase:
Use MCP vmanomaly in the following answers
This is not required, though: you can simply start asking questions, and the assistant will automatically use the tools and documentation to provide the best answers.
Toolset
MCP vmanomaly provides tools organized into categories:
Health & Info (3 tools)
| Tool | Description |
|---|---|
| `vmanomaly_health_check` | Check vmanomaly server health status |
| `vmanomaly_get_buildinfo` | Get build information (version, build time, Go version) |
| `vmanomaly_get_metrics` | Get vmanomaly server metrics in Prometheus format |
Model Configuration (3 tools)
| Tool | Description |
|---|---|
| `vmanomaly_list_models` | List all available anomaly detection model types |
| `vmanomaly_get_model_schema` | Get the JSON schema for a specific model type |
| `vmanomaly_validate_model_config` | Validate a model configuration before using it |
Configuration (1 tool)
| Tool | Description |
|---|---|
| `vmanomaly_validate_config` | Validate a complete vmanomaly YAML configuration |
Documentation (1 tool)
| Tool | Description |
|---|---|
| `vmanomaly_search_docs` | Full-text search across vmanomaly documentation with fuzzy matching |
Compatibility (1 tool)
| Tool | Description |
|---|---|
| `vmanomaly_check_compatibility` | Check if persisted state is compatible with the runtime version |
Alerting (1 tool)
| Tool | Description |
|---|---|
| `vmanomaly_generate_alert_rule` | Generate vmalert rule YAML for anomaly score alerting |
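As an illustration of what such a rule looks like (a hypothetical sketch, not the tool's exact output), a minimal vmalert rule firing on the conventional `anomaly_score > 1` threshold might be:

```yaml
groups:
  - name: vmanomaly
    rules:
      - alert: HighAnomalyScore
        # in vmanomaly, anomaly_score > 1 conventionally indicates an anomaly
        expr: anomaly_score > 1
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "Anomaly detected on {{ $labels.instance }}"
```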
Dialog example
This is an example dialog showing how an AI assistant can help with vmanomaly configuration and anomaly detection:
- **You**: I want to set up anomaly detection for CPU usage. What models are available?

- **Assistant**: I can help you set up anomaly detection for CPU usage. Let me check available models in vmanomaly.

  Your vmanomaly instance supports the following anomaly detection models:

  - `zscore` - statistical model based on z-score (standard deviations from mean)
  - `prophet` - Facebook's Prophet model for time series with seasonality
  - `mad` - Median Absolute Deviation model
  - `holtwinters` - Holt-Winters seasonal method
  - `isolation_forest_univariate` - Isolation Forest for univariate time series
  - `rolling_quantile` - rolling quantile-based detection
  - `auto` - automatic model selection

  For CPU usage monitoring, I'd recommend starting with the `zscore` model as it's simple, fast, and works well for metrics with stable patterns. Would you like me to generate a configuration?
- **You**: Yes, please generate a configuration for monitoring CPU usage with the zscore model

- **Assistant**: I'll generate a complete vmanomaly configuration for CPU usage monitoring with the zscore model:

      scheduler:
        infer_every: "1m"
        fit_every: "1h"
        fit_window: "2d"
      reader:
        datasource_url: "http://victoriametrics:8428"
        queries:
          cpu_usage: "avg(rate(node_cpu_seconds_total{mode!='idle'}[5m])) by (instance)"
        step: "1m"
      models:
        zscore_cpu:
          class: "zscore"
          z_threshold: 2.5
      writer:
        datasource_url: "http://victoriametrics:8428"

  This configuration will:

  - Run inference every minute
  - Retrain the model every hour using the last 2 days of data
  - Use the z-score model with a threshold of 2.5 standard deviations
  - Monitor CPU usage across all instances

  Would you like me to validate this configuration or adjust any parameters?
- **You**: Looks good! Can you create a detection task to test it?

- **Assistant**: I'll create an anomaly detection task to test the configuration on your recent CPU data:

  Task created successfully! Here's the status:

  - Task ID: `task-abc-123`
  - Status: Running
  - Model: zscore with z_threshold=2.5
  - Time range: Last 24 hours
  - Progress: Processing data...

  The task is analyzing your CPU usage data. You can check the results in VictoriaMetrics by querying the anomaly score metrics. Would you like me to check the task status or show you how to query the results?
During this dialog, the assistant used the following tools:
- `vmanomaly_list_models` to get available model types
- `vmanomaly_validate_config` to validate the configuration
- `vmanomaly_create_detection_task` to start anomaly detection
- `vmanomaly_search_docs` to provide context about model parameters
Monitoring
In HTTP and SSE modes the MCP Server provides metrics in Prometheus format at the /metrics endpoint.
Tracked operations:
- `mcp_vmanomaly_initialize_total` - client connections
- `mcp_vmanomaly_call_tool_total{name,is_error}` - tool calls with success/error tracking
- `mcp_vmanomaly_read_resource_total{uri}` - documentation resource reads
- `mcp_vmanomaly_list_*_total` - list operations (tools, resources, prompts)
- `mcp_vmanomaly_error_total{method,error}` - errors by method and type
Example:
# Start in HTTP mode
VMANOMALY_ENDPOINT="http://localhost:8490" MCP_SERVER_MODE=http ./bin/mcp-vmanomaly
# Query metrics
curl http://localhost:8080/metrics
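The tool-call counter can be summarized with a few lines of `awk`. The sample metric lines below are hypothetical values in the documented exposition format; in practice you would pipe `curl -s http://localhost:8080/metrics` into the same program:

```shell
# Sum successful vs failed tool calls from a /metrics snapshot.
cat <<'EOF' > /tmp/mcp_metrics_sample.txt
mcp_vmanomaly_call_tool_total{name="vmanomaly_health_check",is_error="false"} 42
mcp_vmanomaly_call_tool_total{name="vmanomaly_health_check",is_error="true"} 3
EOF
awk '/^mcp_vmanomaly_call_tool_total/ {
  if ($0 ~ /is_error="true"/) err += $NF; else ok += $NF
} END { printf "ok=%d err=%d\n", ok, err }' /tmp/mcp_metrics_sample.txt
# prints: ok=42 err=3
```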
Roadmap
- Add prompts for common vmanomaly workflows (model selection, troubleshooting)
- Grafana dashboard for MCP server monitoring
- Add API compatibility matrix to gracefully handle version differences between MCP client and vmanomaly server (API is evolving, features may be unavailable)
Disclaimer
AI services and agents, along with MCP servers like this one, cannot guarantee the accuracy, completeness, or reliability of results. Always double-check results obtained with AI.
The quality of the MCP Server and its responses depends very much on the capabilities of your client and the quality of the model you are using.
Contributing
Contributions to the MCP vmanomaly project are welcome!
Please feel free to submit issues, feature requests, or pull requests.
Related Projects
- vmanomaly - VictoriaMetrics anomaly detection
- VictoriaMetrics - Time series database
- mcp-victoriametrics - MCP server for VictoriaMetrics
- Model Context Protocol - MCP specification
Support
For vmanomaly-specific questions, see the vmanomaly documentation.
For MCP server issues, please open an issue in this repository.