AI Text to Video

ai-text-to-video

by bwbernardweston18


4.5k · AI & Agents · Unscanned · April 20, 2026

Install

claude skill add --url https://github.com/openclaw/skills

Documentation

Getting Started

Welcome — let's turn your text into a video worth watching. Paste your content, describe your idea, or share a script draft and I'll generate a full scene-by-scene video breakdown with visual cues, voiceover guidance, and on-screen text suggestions ready for production.

Try saying:

  • "I have a 600-word blog post about sustainable packaging — can you turn it into a 60-second video script with scene descriptions and voiceover lines?"
  • "Convert this product launch announcement into a storyboard for a 30-second Instagram Reel, including visual direction for each scene and suggested on-screen text."
  • "I have a slide deck outline for a training video on onboarding new employees — help me turn it into a full narrated video script with scene transitions and timing guidance."

Quick Start Setup

This skill connects to a cloud processing backend. On first use, set up the connection automatically and let the user know ("Connecting...").

Token check: Look for NEMO_TOKEN in the environment. If found, skip to session creation. Otherwise:

  • Generate a UUID as client identifier
  • POST https://mega-api-prod.nemovideo.ai/api/auth/anonymous-token with X-Client-Id header
  • Extract data.token from the response — this is your NEMO_TOKEN (100 free credits, 7-day expiry)

Session: POST https://mega-api-prod.nemovideo.ai/api/tasks/me/with-session/nemo_agent with Bearer auth and body {"task_name":"project"}. Keep the returned session_id for all operations.
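The two setup calls above can be outlined as plain request specs. This is a hedged sketch, not the skill's actual implementation; the step that performs the HTTP call is deliberately left out, and only the URLs, headers, and body from the reference are used:

```python
import json
import uuid

API_BASE = "https://mega-api-prod.nemovideo.ai"

def token_request() -> dict:
    """Spec for the anonymous-token call: a fresh UUID goes in X-Client-Id."""
    return {
        "method": "POST",
        "url": f"{API_BASE}/api/auth/anonymous-token",
        "headers": {"X-Client-Id": str(uuid.uuid4())},
    }

def session_request(token: str) -> dict:
    """Spec for session creation: Bearer auth plus a task_name body."""
    return {
        "method": "POST",
        "url": f"{API_BASE}/api/tasks/me/with-session/nemo_agent",
        "headers": {"Authorization": f"Bearer {token}"},
        "body": json.dumps({"task_name": "project"}),
    }
```

The response's data.token then becomes NEMO_TOKEN, and the returned session_id is reused for every later call.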

Let the user know with a brief "Ready!" when setup is complete. Don't expose tokens or raw API output.

From Words on a Page to Video That Moves People

Most people have the words — the product description, the story, the pitch — but no clear path from text to a finished video. That gap is exactly what this skill was built to close. By analyzing the structure, tone, and intent of your written content, it generates scene-by-scene breakdowns, on-screen text suggestions, visual mood guidance, and voiceover scripts that you can hand directly to a video editor or AI video tool.

This isn't about slapping your text on a slideshow. The skill reads between the lines — identifying which parts of your writing should be shown visually, which should be spoken aloud, and which work best as titles or captions. The result is a production blueprint that respects the original message while making it genuinely watchable.

Content marketers use it to repurpose long-form articles into short-form video content. Educators turn lecture notes into structured lesson videos. Entrepreneurs convert pitch decks and landing page copy into investor or customer-facing video narratives. Whatever your text contains, this skill helps you see it as a video before a single frame is shot.

Prompt Routing and Scene Dispatch

When you submit a text prompt, the skill parses your input for scene intent, visual tone, and narrative structure, then routes each segment to the appropriate video synthesis pipeline.

  User says                                               Action         Skips SSE?
  "export" / "导出" / "download" / "send me the video"     §3.5 Export    yes
  "credits" / "积分" / "balance" / "余额"                  §3.3 Credits   yes
  "status" / "状态" / "show tracks"                        §3.4 State     yes
  "upload" / "上传" / user sends file                      §3.2 Upload    yes
  Everything else (generate, edit, add BGM…)               §3.1 SSE       no

Cloud Rendering API Reference

All video generation requests are processed through a distributed cloud rendering backend that handles diffusion model inference, keyframe interpolation, and audio-visual sync at scale. Rendered video assets are temporarily stored in a secure session bucket and streamed back to your interface upon completion.

Skill attribution — read from this file's YAML frontmatter at runtime:

  • X-Skill-Source: ai-text-to-video
  • X-Skill-Version: from frontmatter version
  • X-Skill-Platform: detect from install path (~/.clawhub/clawhub, ~/.cursor/skills/cursor, else unknown)

All requests must include: Authorization: Bearer <NEMO_TOKEN>, X-Skill-Source, X-Skill-Version, X-Skill-Platform. Missing attribution headers will cause export to fail with 402.
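As a minimal sketch, the required header set can be assembled in one place. The version and platform values here are placeholders; real values come from the frontmatter and install path as described above:

```python
def attribution_headers(token: str, version: str, platform: str = "unknown") -> dict:
    """Build the four headers every request must carry.

    Omitting the attribution headers makes export fail with 402.
    version and platform are read from frontmatter and install path at runtime.
    """
    return {
        "Authorization": f"Bearer {token}",
        "X-Skill-Source": "ai-text-to-video",
        "X-Skill-Version": version,
        "X-Skill-Platform": platform,
    }
```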

API base: https://mega-api-prod.nemovideo.ai

Create session: POST /api/tasks/me/with-session/nemo_agent — body {"task_name":"project","language":"<lang>"} — returns task_id, session_id.

Send message (SSE): POST /run_sse — body {"app_name":"nemo_agent","user_id":"me","session_id":"<sid>","new_message":{"parts":[{"text":"<msg>"}]}} with Accept: text/event-stream. Max timeout: 15 minutes.
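To illustrate the stream handling, here is a minimal, assumption-laden parser for the text/event-stream framing: events are separated by blank lines, `data:` lines carry the payload, and empty payloads (heartbeats) are dropped. The HTTP call itself and the 15-minute timeout are omitted:

```python
def parse_sse_events(stream_text: str) -> list:
    """Split a raw text/event-stream payload into event data strings.

    Events are blank-line separated; each 'data:' line contributes one
    payload line. Empty payloads (heartbeats) are skipped.
    """
    events = []
    for block in stream_text.split("\n\n"):
        data_lines = [
            line[5:].lstrip()
            for line in block.splitlines()
            if line.startswith("data:")
        ]
        payload = "\n".join(data_lines).strip()
        if payload:  # skip heartbeats / empty data: lines
            events.append(payload)
    return events
```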

Upload: POST /api/upload-video/nemo_agent/me/<sid> — file: multipart -F "files=@/path", or URL: {"urls":["<url>"],"source_type":"url"}

Credits: GET /api/credits/balance/simple — returns available, frozen, total

Session state: GET /api/state/nemo_agent/me/<sid>/latest — key fields: data.state.draft, data.state.video_infos, data.state.generated_media

Export (free, no credits): POST /api/render/proxy/lambda — body {"id":"render_<ts>","sessionId":"<sid>","draft":<json>,"output":{"format":"mp4","quality":"high"}}. Poll GET /api/render/proxy/lambda/<id> every 30s until status = completed. Download URL at output.url.
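The export polling loop described above might look like the following sketch. `fetch_status` is an injected callable (an assumption made for testability) standing in for the real GET request; the 30-second cadence and completion check mirror the reference, while `max_wait` is a safety cap the reference does not specify:

```python
import time

def poll_export(render_id: str, fetch_status, interval: int = 30, max_wait: int = 1800) -> str:
    """Poll the render status until the job completes; return the download URL.

    fetch_status(render_id) -> dict stands in for
    GET /api/render/proxy/lambda/<id>. max_wait is an assumed safety cap.
    """
    waited = 0
    while True:
        result = fetch_status(render_id)
        if result.get("status") == "completed":
            return result["output"]["url"]
        if waited >= max_wait:
            raise TimeoutError(f"render {render_id} still not complete after {max_wait}s")
        time.sleep(interval)
        waited += interval
```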

Supported formats: mp4, mov, avi, webm, mkv, jpg, png, gif, webp, mp3, wav, m4a, aac.

SSE Event Handling

  Event                    Action
  Text response            Apply GUI translation (§4), present to user
  Tool call/result         Process internally, don't forward
  Heartbeat / empty data:  Keep waiting; every 2 min say "⏳ Still working..."
  Stream closes            Process final response

~30% of editing operations return no text in the SSE stream. When this happens: poll session state to verify the edit was applied, then summarize changes to the user.

Backend Response Translation

The backend assumes a GUI exists. Translate these into API actions:

  Backend says               Your action
  "click [button]" / "点击"  Execute via API
  "open [panel]" / "打开"    Query session state
  "drag/drop" / "拖拽"       Send the edit via SSE
  "preview in timeline"      Show a track summary
  "Export button" / "导出"   Run the export workflow

Draft field mapping: t=tracks, tt=track type (0=video, 1=audio, 7=text), sg=segments, d=duration(ms), m=metadata.

An example of the track summary shown to the user:

  Timeline (3 tracks):
    1. Video: city timelapse (0-10s)
    2. BGM: Lo-fi (0-10s, 35%)
    3. Title: "Urban Dreams" (0-3s)
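Using the field mapping above, a summary along these lines can be produced mechanically. This sketch assumes a hypothetical `m["name"]` label inside each track's metadata, which the mapping does not actually specify:

```python
# Track-type codes from the draft field mapping: tt 0=video, 1=audio, 7=text.
TRACK_TYPES = {0: "Video", 1: "Audio", 7: "Text"}

def summarize_draft(draft: dict) -> str:
    """Render a draft dict (t=tracks, sg=segments, d=duration in ms) as a
    one-line-per-track timeline summary. The m["name"] label is hypothetical."""
    lines = [f"Timeline ({len(draft['t'])} tracks):"]
    for i, track in enumerate(draft["t"], start=1):
        kind = TRACK_TYPES.get(track["tt"], "Unknown")
        total_ms = sum(seg["d"] for seg in track["sg"])
        label = track.get("m", {}).get("name", "")
        lines.append(f"  {i}. {kind}: {label} (0-{total_ms // 1000}s)")
    return "\n".join(lines)
```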

Error Handling

  • 0 (Success): Continue.
  • 1001 (Bad or expired token): Re-authenticate via anonymous-token; tokens expire after 7 days.
  • 1002 (Session not found): Create a new session (§3.0).
  • 2001 (No credits): For anonymous users, show the registration URL with ?bind=<id> (get <id> from the create-session or state response). For registered users: "Top up credits in your account."
  • 4001 (Unsupported file): Show the supported formats.
  • 4002 (File too large): Suggest compressing or trimming.
  • 400 (Missing X-Client-Id): Generate a Client-Id and retry (see §1).
  • 402 (Free-plan export blocked): A subscription-tier issue, not credits: "Register or upgrade your plan to unlock export."
  • 429 (Rate limited; 1 token per client per 7 days): Retry once after 30 seconds.
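The table above reduces to a small lookup. The sketch below paraphrases the action column; anything not in the table falls through to a generic message:

```python
def next_action(code: int) -> str:
    """Map an API error code to a paraphrase of the action column above."""
    actions = {
        0: "continue",
        1001: "re-auth via anonymous-token (tokens expire after 7 days)",
        1002: "create a new session",
        2001: "show registration URL (anonymous) or credits top-up (registered)",
        4001: "show supported formats",
        4002: "suggest compressing or trimming the file",
        400: "generate a Client-Id and retry",
        402: "prompt to register or upgrade the plan to unlock export",
        429: "retry once after 30s",
    }
    return actions.get(code, "unknown error; surface it to the user")
```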

Use Cases

AI text to video conversion fits into more workflows than most people initially expect. The most common use is repurposing written content — taking a newsletter, article, or social post and restructuring it as a video that communicates the same message in a format audiences actually finish watching.

Marketers use it to generate video ad scripts from existing ad copy, ensuring the visual and spoken elements align with the brand's messaging. Educators and course creators convert lecture notes or curriculum outlines into structured video lessons with clear segment breaks and narration cues. YouTubers and podcasters use it to script video versions of their audio or written content without starting from scratch.

Startups and solo founders find it particularly useful for turning pitch decks or one-pagers into explainer video scripts they can record themselves or hand to a freelancer. The skill adapts to the length, tone, and audience of whatever text you bring — short-form social, long-form documentary style, or anything in between.

Performance Notes

The quality of the video output depends heavily on the quality and clarity of the input text. Vague or loosely structured text will produce a usable but more generic video structure — the skill will make reasonable assumptions, but specificity always wins. If your text has a clear beginning, middle, and end, the scene breakdown will reflect that naturally.

For very long documents (1,000+ words), it helps to indicate upfront the target video length and platform — a 90-second LinkedIn video and a 10-minute YouTube tutorial require very different pacing and scene density. Mentioning tone (conversational, authoritative, cinematic) also sharpens the output significantly.

The skill does not render or export actual video files — it produces scripts, storyboards, scene descriptions, and production notes that you feed into a video creation tool or share with an editor. Think of it as the pre-production layer that makes everything downstream faster and more focused.

Tips and Tricks

Start by telling the skill the platform and duration before pasting your text. 'This is for a 45-second TikTok' gives the skill the constraints it needs to make smart decisions about what to cut, what to emphasize, and how to pace scene transitions.

If your text is dense or technical, ask for a 'simplified visual script' — the skill will translate complex language into approachable on-screen visuals and plain-spoken voiceover without losing the core meaning. This is especially useful for B2B content being adapted for general audiences.

Use the storyboard mode when you want a visual-first output — each scene gets a description of what should appear on screen, not just what should be said. This is the format most video editors and AI video generators like Runway or Pika expect as input.

Finally, if you're not happy with the first pass, describe what's off — 'make it more energetic', 'cut it to three scenes', 'add a stronger call to action at the end' — and the skill will revise specifically rather than regenerating from scratch.

Related Skills

Claude API

by anthropics

Universal
Popular

For development that integrates the Claude API, Anthropic SDK, or Agent SDK: automatically detects the project language and provides matching examples and default configuration to quickly build LLM applications.

Want to wire Claude capabilities into an app or agent? claude-api is quick to pick up, compatible with the Anthropic and Agent SDKs, and keeps the integration path clear and painless.

AI & Agents
Unscanned · 134.4k

RAG Architect

by alirezarezvani

Universal
Popular

Focused on production-grade RAG system design and optimization, covering document chunking, retrieval pipelines, index building, and recall evaluation; suited to building scalable, high-accuracy knowledge-base Q&A and retrieval-augmented applications.

Built for shipping RAG: it ties knowledge bases, vector retrieval, and the generation pipeline together, making architecture design clearer and common pitfalls easier to avoid.

AI & Agents
Unscanned · 14.9k

Prompt Engineering Expert

by alirezarezvani

Universal
Popular

Covers prompt optimization, few-shot design, structured output, RAG evaluation, and agent workflow orchestration; suited to analyzing token costs, evaluating LLM output quality, and building practical AI agent systems.

Ties prompt optimization, LLM evaluation, RAG, and agent design into one method; good for anyone who wants to systematically raise their AI development efficiency.

AI & Agents
Unscanned · 14.9k

Related MCP Servers

Sequential Thinking

Editor's Pick

by Anthropic

Popular

Sequential Thinking is a reference server that lets AI work through complex problems with dynamic chains of thought.

This server shows how to get Claude to reason step by step, much as a person would, and is a good way for developers to study a chain-of-thought implementation in MCP. Note that it is only a reference example; don't expect to drop it into production.

AI & Agents
85.7k

Knowledge Graph Memory

Editor's Pick

by Anthropic

Popular

Memory is a persistent memory system built on a local knowledge graph that lets AI retain long-term context.

It fills the "can't remember" gap for AI and agents: a local knowledge graph accumulates long-term context, making extended conversations smarter while keeping your data under your control.

AI & Agents
85.7k

PraisonAI

Editor's Pick

by mervinpraison

Popular

PraisonAI is a low-code AI agent framework with self-reflection and multi-LLM support.

If you need to quickly stand up a team of AI agents that runs 24/7 on complex tasks (such as automated research or code generation), PraisonAI's low-code design and multi-platform integrations (Telegram, for example) make it very fast to get started. As an unofficial project, though, its ecosystem is less mature than mainstream frameworks like LangChain, so it best suits developers willing to experiment.

AI & Agents
7.7k
