Kael MCP Server

Platform & Services

by dreamingms

AI-native tool server: web fetch, screenshot, search, PDF, DNS, WHOIS, IP geo, code exec & more

README

⚡ Kael MCP Server

AI-native tools for agents — use cheap compute for web, DNS, WHOIS, screenshots, extraction, and sandboxed code execution instead of spending model tokens on guesswork.

Kael is for tasks where an agent needs fresh external data, structured output, or real execution — not another paragraph of reasoning.

Why this exists

LLMs are expensive at:

  • fetching and cleaning live web content
  • checking DNS / WHOIS / IP facts
  • extracting structured data from messy pages
  • executing code safely and reproducibly
  • producing screenshots or binary artifacts

Kael turns those jobs into real tools with JSON output.

Best fit

Use Kael MCP when your agent needs:

  • fresh data from the web or internet infrastructure
  • structured results instead of prose
  • deterministic execution instead of model simulation
  • lower token burn on repetitive utility work
  • small tool outputs that are easier to feed back into a model

Do not use Kael MCP when:

  • the task is pure reasoning or writing
  • the data is already in context
  • a local tool already solves the problem cheaper/faster
  • you need a full browser workflow with human-style interaction across many steps

Included tools

Web and content

  • web_fetch — URL → clean readable markdown/text
  • web_search — real-time search results
  • html_extract — HTML/page content → structured data
  • screenshot — webpage → PNG screenshot
  • pdf_extract — PDF → extracted text
  • url_unshorten — resolve shortened links safely

Internet and infrastructure

  • dns_lookup — A, AAAA, MX, TXT, NS, CNAME, SOA, SRV records
  • whois — domain registration data
  • ip_geo — IP geolocation and network info

Data and utility

  • code_run — execute JavaScript, Python, or Bash in a sandbox
  • text_diff — compare text versions
  • json_query — query/filter JSON data
  • hash_text — compute common hashes
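
The utility tools above are exactly the kind of deterministic work that is cheap in code and wasteful in model tokens. As a rough local sketch of what hash_text- and json_query-style tools offload (the function names and shapes here are illustrative assumptions, not the server's actual contract):

```javascript
import { createHash } from "node:crypto";

// Deterministic fingerprint — roughly what a hash_text-style tool computes.
function sha256Hex(text) {
  return createHash("sha256").update(text, "utf8").digest("hex");
}

// Filter/reshape JSON before reasoning — roughly what json_query offloads.
function pluckHighScores(records, minScore) {
  return records
    .filter((r) => r.score >= minScore)
    .map((r) => ({ name: r.name, score: r.score }));
}

console.log(sha256Hex("abc"));
// ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad

console.log(pluckHighScores(
  [{ name: "alice", score: 42 }, { name: "bob", score: 7 }],
  10
));
```

Both operations are byte-exact and reproducible, which is the property a model cannot guarantee when it "computes" them in prose.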

Tool selection guide

| Tool | Use when | Avoid when |
| --- | --- | --- |
| web_fetch | You need readable page content for summarization or downstream extraction | You need pixel-perfect rendering or JS-heavy interaction |
| web_search | You need fresh discovery across the web | You already know the exact URL |
| html_extract | You need tables, lists, metadata, or page structure as data | Plain cleaned text is enough |
| screenshot | You need visual verification, layout evidence, or image output | Text content alone is enough |
| dns_lookup | You need factual DNS records now | Static knowledge is acceptable |
| whois | You need domain ownership/registration details | DNS records alone answer the question |
| ip_geo | You need IP location/ASN/ISP context | You only need DNS or hostname resolution |
| code_run | You need actual execution, parsing, transformation, or calculation | The task is simple enough to do directly in-model |
| pdf_extract | The source is a PDF and you need text back | The source is already HTML/text |
| url_unshorten | You need to inspect where a short link resolves | You already trust and know the final URL |
| text_diff | You need a concrete change set between two texts | You just need a summary |
| json_query | You need to filter/reshape JSON before reasoning | The JSON is already tiny and easy to inspect |
| hash_text | You need a deterministic fingerprint/checksum | Semantic comparison matters more than exact bytes |
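
Agent stacks often encode a decision table like this as a small routing helper so tool choice is deterministic rather than improvised. A minimal sketch — the category names are made up for illustration and are not part of any Kael API:

```javascript
// Map a coarse task category to the Kael tool the table above suggests.
// The category keys are illustrative, not a Kael contract.
const toolByNeed = {
  "readable-page": "web_fetch",
  "web-discovery": "web_search",
  "page-structure": "html_extract",
  "visual-evidence": "screenshot",
  "dns-facts": "dns_lookup",
  "domain-ownership": "whois",
  "ip-context": "ip_geo",
  "real-execution": "code_run",
  "pdf-text": "pdf_extract",
  "short-link": "url_unshorten",
};

function pickTool(need) {
  const tool = toolByNeed[need];
  if (!tool) throw new Error(`no Kael tool mapped for need: ${need}`);
  return tool;
}

console.log(pickTool("dns-facts")); // dns_lookup
```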

Quick start

Server endpoints

Kael supports two MCP transports:

| Transport | URL | Best for |
| --- | --- | --- |
| SSE | https://www.kael.ink/mcp/sse | Broad client compatibility |
| Streamable HTTP | https://www.kael.ink/mcp/stream | Newer clients, simpler connection model |

Use SSE if your client doesn't state a preference. Use streamable-http if your client supports MCP protocol revision 2025-03-26 or later.

Health check:

```text
https://www.kael.ink/mcp/health
```

Claude Desktop

Add this to claude_desktop_config.json:

```json
{
  "mcpServers": {
    "kael-tools": {
      "url": "https://www.kael.ink/mcp/sse"
    }
  }
}
```

Or use streamable-http if your Claude Desktop version supports it:

```json
{
  "mcpServers": {
    "kael-tools": {
      "type": "streamable-http",
      "url": "https://www.kael.ink/mcp/stream"
    }
  }
}
```

Claude Code

Add Kael as a remote MCP server:

```bash
claude mcp add kael-tools --transport sse https://www.kael.ink/mcp/sse
```

Or add to .claude/settings.json:

```json
{
  "mcpServers": {
    "kael-tools": {
      "type": "sse",
      "url": "https://www.kael.ink/mcp/sse"
    }
  }
}
```

Good first checks in Claude Code:

  1. connect Kael and confirm the server appears in MCP tool listings
  2. ask Claude to run dns_lookup for example.com MX records
  3. ask Claude to use web_fetch on a live page and summarize it

Example evaluator prompt:

Use the dns_lookup tool from the Kael MCP server to get MX records for example.com, then use web_fetch on https://modelcontextprotocol.io and give me a short summary.

Why this is a good Claude Code test:

  • verifies Kael is reachable as a real MCP server
  • exercises both a factual infrastructure tool and a fresh-web retrieval tool
  • makes it obvious whether Claude is actually using tools instead of guessing

For deeper integration — including CLAUDE.md instructions, hook examples for tool routing, and project-specific patterns — see Claude Code Integration Guide.

MCP Inspector

Useful for quick validation before wiring Kael into a larger agent stack:

```bash
npx @modelcontextprotocol/inspector
```

Then connect to:

```text
https://www.kael.ink/mcp/sse
```

Other MCP-capable clients

If your runtime or editor lets you add a remote MCP server by URL, use one of:

| Transport | URL |
| --- | --- |
| SSE | https://www.kael.ink/mcp/sse |
| Streamable HTTP | https://www.kael.ink/mcp/stream |

Adoption-friendly rule of thumb:

  • if the client asks for an MCP server URL, try the SSE endpoint first
  • if the client supports streamable-http (newer protocol), use the stream endpoint
  • if it wants a named server entry, use kael-tools
  • if it supports a quick test prompt after connecting, start with dns_lookup or web_fetch
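
That rule of thumb can be encoded directly. A minimal sketch — MCP protocol revisions are date strings, so a lexicographic comparison against 2025-03-26 (the revision that introduced streamable HTTP) is enough; treat the helper as illustrative, not part of any SDK:

```javascript
// Pick a Kael endpoint per the rule of thumb above: streamable-http when the
// client speaks MCP protocol revision 2025-03-26 or newer, SSE otherwise.
const KAEL_ENDPOINTS = {
  sse: "https://www.kael.ink/mcp/sse",
  "streamable-http": "https://www.kael.ink/mcp/stream",
};

function pickEndpoint(protocolVersion) {
  // Revision strings are ISO dates, so string comparison orders correctly.
  const supportsStreamableHttp =
    typeof protocolVersion === "string" && protocolVersion >= "2025-03-26";
  return supportsStreamableHttp
    ? KAEL_ENDPOINTS["streamable-http"]
    : KAEL_ENDPOINTS.sse;
}

console.log(pickEndpoint("2025-06-18")); // https://www.kael.ink/mcp/stream
console.log(pickEndpoint(undefined));    // https://www.kael.ink/mcp/sse
```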

Example generic config shape (SSE):

```json
{
  "mcpServers": {
    "kael-tools": {
      "url": "https://www.kael.ink/mcp/sse"
    }
  }
}
```

Example generic config shape (streamable-http):

```json
{
  "mcpServers": {
    "kael-tools": {
      "type": "streamable-http",
      "url": "https://www.kael.ink/mcp/stream"
    }
  }
}
```

The same config shape typically works in MCP-capable editors and agent runtimes — Cursor, Cline, OpenCode, and other tools that accept remote MCP servers.

Generic MCP client (Node.js)

SSE transport

```javascript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { SSEClientTransport } from "@modelcontextprotocol/sdk/client/sse.js";

const transport = new SSEClientTransport(
  new URL("https://www.kael.ink/mcp/sse")
);

const client = new Client({ name: "my-agent", version: "1.0.0" });
await client.connect(transport);

const tools = await client.listTools();
console.log(tools.tools.map(t => t.name));

const dns = await client.callTool({
  name: "dns_lookup",
  arguments: { domain: "example.com", type: "MX" }
});

console.log(dns.content);
```

Streamable HTTP transport

```javascript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

const transport = new StreamableHTTPClientTransport(
  new URL("https://www.kael.ink/mcp/stream")
);

const client = new Client({ name: "my-agent", version: "1.0.0" });
await client.connect(transport);

const tools = await client.listTools();
console.log(tools.tools.map(t => t.name));
```

Quick connection test flow

If you are evaluating whether Kael is worth adding to your stack, use this order:

  1. Hit https://www.kael.ink/mcp/health
  2. Connect your MCP client to https://www.kael.ink/mcp/sse (or https://www.kael.ink/mcp/stream for streamable-http)
  3. List tools
  4. Run one factual tool like dns_lookup or web_fetch
  5. Only then wire it into a larger agent workflow

That keeps evaluation cheap and makes failures obvious early.

Example agent tasks

1. Fetch a page for summarization

Ask your model to use web_fetch when:

  • the page is live
  • raw HTML would waste tokens
  • you want readable markdown returned first

Example prompt:

Fetch the pricing page with web_fetch, then summarize the plans and highlight any usage limits.

2. Check a domain's email setup

Ask your model to use dns_lookup when:

  • you need MX/TXT/SPF/DMARC facts
  • hallucinated infrastructure answers would be risky

Example prompt:

Use dns_lookup to inspect MX and TXT records for example.com and tell me whether email appears configured.

3. Turn a messy page into structured data

Ask your model to use html_extract when:

  • the page contains tables, lists, or repeated blocks
  • you want JSON-like structure before reasoning

Example prompt:

Load the page, extract the product table with html_extract, then compare the plans.

4. Execute code instead of simulating it

Ask your model to use code_run when:

  • exact calculation matters
  • a parser or transformation would be more reliable in code
  • the result should be reproducible

Example prompt:

Use code_run in Python to normalize this CSV and return the cleaned JSON.

Example outputs evaluators can expect

These are abbreviated examples so builders can sanity-check the shape of Kael results before integrating it into an agent loop.

dns_lookup

```json
{
  "domain": "example.com",
  "type": "MX",
  "answers": [
    {
      "exchange": "mx.example.com",
      "priority": 10
    }
  ]
}
```

Useful when you want:

  • machine-checkable infrastructure facts
  • small outputs that a model can quote directly
  • an easy first connectivity test after listTools
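
If your agent loop post-processes results rather than pasting them straight back to the model, a small helper keeps the quoted facts exact. This sketch is written against the abbreviated example shape above; the live schema may differ:

```javascript
// Pull the most preferred MX exchange (lowest priority number wins)
// out of a dns_lookup-style result so the agent can quote it verbatim.
function preferredMx(result) {
  if (result.type !== "MX" || !result.answers?.length) return null;
  return result.answers.reduce((best, a) =>
    a.priority < best.priority ? a : best
  ).exchange;
}

const example = {
  domain: "example.com",
  type: "MX",
  answers: [{ exchange: "mx.example.com", priority: 10 }],
};

console.log(preferredMx(example)); // mx.example.com
```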

web_fetch

```json
{
  "url": "https://example.com/pricing",
  "title": "Pricing",
  "content": "# Pricing\n\nStarter ...\nPro ...",
  "contentType": "text/markdown"
}
```

Useful when you want:

  • readable page content instead of raw HTML
  • a compact artifact for summarization
  • a lower-token input for downstream reasoning
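
Because the payload is markdown, an agent can skim structure before committing tokens to the full text. A sketch against the example shape above (the live schema may differ):

```javascript
// Extract the headings from a web_fetch markdown payload so the agent can
// decide whether the page is worth reading in full.
function markdownHeadings(result) {
  if (!result.content) return [];
  return result.content
    .split("\n")
    .filter((line) => /^#{1,6}\s/.test(line))       // markdown heading lines
    .map((line) => line.replace(/^#{1,6}\s+/, "")); // strip the # markers
}

const page = {
  url: "https://example.com/pricing",
  title: "Pricing",
  content: "# Pricing\n\nStarter ...\nPro ...",
  contentType: "text/markdown",
};

console.log(markdownHeadings(page)); // [ 'Pricing' ]
```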

html_extract

```json
{
  "url": "https://example.com",
  "headings": ["Overview", "Pricing"],
  "links": [
    {
      "text": "Docs",
      "href": "https://example.com/docs"
    }
  ],
  "tables": [
    {
      "rows": [
        ["Plan", "Price"],
        ["Starter", "$9"]
      ]
    }
  ]
}
```

Useful when you want:

  • page structure as data before reasoning
  • table/list extraction without custom scraping glue
  • cleaner agent pipelines: fetch/extract first, summarize second
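
Row arrays like the example above are easy to reshape into objects, which models tend to reason over more reliably than positional data. A sketch assuming the first row is the header (as in the example; the live schema may differ):

```javascript
// Turn an html_extract-style table (first row = header) into an array of
// keyed objects for downstream comparison or summarization.
function rowsToObjects(table) {
  const [header, ...rows] = table.rows;
  return rows.map((row) =>
    Object.fromEntries(header.map((key, i) => [key, row[i]]))
  );
}

const table = { rows: [["Plan", "Price"], ["Starter", "$9"]] };
console.log(rowsToObjects(table)); // [ { Plan: 'Starter', Price: '$9' } ]
```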

code_run

```json
{
  "language": "python",
  "stdout": "[{\"name\":\"alice\",\"score\":42}]",
  "stderr": "",
  "exitCode": 0
}
```

Useful when you want:

  • deterministic transforms or calculations
  • reproducible parser behavior
  • concrete execution evidence instead of simulated code reasoning
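
An agent loop should treat a result like this as evidence with a verdict attached: trust stdout only when the sandbox exited cleanly. A sketch against the example shape above (the live schema may differ):

```javascript
// Gate on exitCode before parsing: only a clean exit makes stdout trustworthy,
// otherwise surface stderr instead of letting the model guess.
function parseRunResult(result) {
  if (result.exitCode !== 0) {
    throw new Error(`code_run failed: ${result.stderr || "unknown error"}`);
  }
  return JSON.parse(result.stdout);
}

const run = {
  language: "python",
  stdout: '[{"name":"alice","score":42}]',
  stderr: "",
  exitCode: 0,
};

console.log(parseRunResult(run)); // [ { name: 'alice', score: 42 } ]
```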

Copy-paste prompts for AI runtimes

These are short prompts you can drop into Claude, Cursor, or another MCP-capable agent to verify that Kael is wired correctly and useful for real work.

Fresh-web retrieval check

Use web_search to find the official homepage for Model Context Protocol, then use web_fetch on the best result and give me a 5-bullet summary.

Why this is a good test:

  • proves search + fetch both work
  • shows that Kael returns fresh external data instead of stale model memory
  • keeps the output small enough for a quick integration check

Minimal connection-verification prompt

List the tools available from the Kael MCP server, then run dns_lookup for example.com MX records and web_fetch on https://modelcontextprotocol.io. Return the raw tool findings first, then a short summary.

Why this is a good test:

  • confirms the client can see Kael's tool catalog
  • exercises both infrastructure lookup and live-page retrieval
  • makes it easier to tell whether the runtime is actually calling tools instead of improvising

DNS / infrastructure fact check

Use dns_lookup to get the MX and TXT records for example.com. Summarize what they imply about email setup and quote the exact records you found.

Why this is a good test:

  • verifies structured factual output
  • makes hallucinations obvious
  • matches a common real agent task

Structured extraction check

Fetch a page, then use html_extract to pull the main links, headings, and any tables into structured output before summarizing them.

Why this is a good test:

  • demonstrates that Kael is not only for plain-text retrieval
  • shows how to turn messy pages into data first, reasoning second

Execution-backed transformation check

Use code_run in Python to convert this CSV into normalized JSON, then return the JSON and a one-sentence description of what changed.

Why this is a good test:

  • confirms the agent can hand exact work to execution instead of pretending to run code
  • useful for evaluation by builders who care about reproducibility

What good tool use looks like

A strong Kael-enabled agent flow usually looks like this:

  1. discover or fetch real external data
  2. extract or transform it into a smaller structured form
  3. reason over the result instead of over raw pages, HTML, or guessed facts
  4. return compact evidence-backed output to the user or downstream agent

That pattern is usually cheaper and more reliable than asking a model to reason directly over messy live inputs.
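
The four-step flow above can be sketched as a loop shape. The tool call here is stubbed so the structure is visible without a live server; a real implementation would route step 1 through an MCP client's callTool, as in the Node examples earlier, and the payload shape is an assumption:

```javascript
// The fetch → reduce → reason → report pattern with a stubbed tool call.
async function evidenceBackedAnswer(callTool) {
  // 1. discover or fetch real external data
  const page = await callTool("web_fetch", { url: "https://example.com/pricing" });
  // 2. extract/transform it into a smaller structured form
  const compact = { title: page.title, firstLine: page.content.split("\n")[0] };
  // 3-4. reason over the compact result and return evidence-backed output
  return `Per "${compact.title}": ${compact.firstLine}`;
}

// Stub standing in for a real MCP tool call.
const stubTool = async () => ({ title: "Pricing", content: "# Pricing\nStarter $9" });

evidenceBackedAnswer(stubTool).then(console.log);
```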

Direct REST API

The same capabilities are also exposed as REST endpoints under https://www.kael.ink/api/.

```bash
# IP geolocation
curl "https://www.kael.ink/api/ip?ip=8.8.8.8"

# Screenshot a page
curl "https://www.kael.ink/api/screenshot?url=https://example.com"
```

Why Kael

  1. Built for agents — structured outputs, not UI-first flows
  2. Fresh external facts — DNS, WHOIS, search, IP data, page content
  3. Cheaper than token-heavy reasoning — especially for fetch/extract/execute tasks
  4. Standard MCP — works with Claude and other MCP-compatible runtimes
  5. Practical tool mix — internet facts, content extraction, and sandboxed execution in one server

Positioning in one sentence

Kael gives AI agents cheap, structured, real-world capabilities so they can fetch, inspect, extract, and execute instead of wasting tokens pretending to.

License

MIT

🚀 Kael Platform

This MCP server is part of the Kael Platform — an AI-native task management system with:

  • Tree-structured tasks with drag-and-drop, progress tracking, and goal linking
  • Goal management with hierarchical objectives and success criteria
  • Sandboxed script execution via Docker containers
  • Impact analysis engine (zero LLM cost, pure code analysis)
  • Docker Compose one-command deployment

👉 github.com/dreamingms/kael-platform

Try it live: kael.ink/tasks | kael.ink/goals | kael.ink/dashboard
