Lip-sync a face to a specific audio track on RunComfy via the `runcomfy` CLI. Routes across ByteDance OmniHuman (audio-driven full-body avatar from a portrait + audio), Sync Labs sync v2 / Pro (state-of-the-art mouth sync onto a video), Kling lipsync (audio-to-video and text-to-video with synced speech), and Creatify lipsync. The skill picks the right endpoint for the user's actual intent — portrait still + audio (avatar-style), source video + audio (mouth-swap on existing footage), or generate-and-sync from a script. Triggers on "lip sync", "lipsync", "make this video speak", "match audio to mouth", "dub video", "sync lips to voice", "Sync Labs", "voiceover sync", or any explicit ask to drive a face's mouth from an audio track.
```
npx skill4agent add agentspace-so/runcomfy-agent-skills lipsync
```

```
# 1. Install (see runcomfy-cli skill for details)
npm i -g @runcomfy/cli        # or: npx -y @runcomfy/cli --version

# 2. Sign in
runcomfy login                # or in CI: export RUNCOMFY_TOKEN=<token>

# 3. Lipsync
runcomfy run <vendor>/<model> \
  --input '{"video_url": "...", "audio_url": "..."}' \
  --output-dir ./out
```

| model | when to use |
|---|---|
| `sync/sync/lipsync/v2/pro` | Sync Labs' premium lip-sync — state-of-the-art mouth motion onto an existing video; preserves the rest of the frame untouched. Pick for: hero-quality dubs, lipsync on professionally shot video, foreign-language dubbing where mouth fidelity matters most. Avoid for: cost-sensitive batch jobs — drop to sync v2. |
| `sync/sync/lipsync/v2` | Standard Sync Labs tier, same workflow as Pro. Pick for: scaled / batch lipsync jobs, drafts. Avoid for: hero delivery — use v2 Pro. |
| `kling/lipsync/audio-to-video` | Kling's lip-sync onto a source video, driven by an audio track. Pick for: Kling-pipeline integration; alternative to Sync Labs. Avoid for: top-tier mouth fidelity — Sync Labs Pro is the industry benchmark. |
| `creatify/lipsync` | Creatify's lipsync endpoint. Pick for: Creatify-ecosystem workflows. Avoid for: comparison shopping unless cost / latency favors it. |
| `bytedance/omnihuman/api` | ByteDance's audio-driven full-body avatar. One portrait + one audio → video where the subject speaks / gestures naturally. Listed under RunComfy's `/feature/lip-sync` as the curated default. Pick for: UGC voiceover, virtual presenter, dubbed product demo from a single portrait. Avoid for: lip-sync onto an existing video (no portrait, want to preserve original motion) — use Sync Labs v2 instead. |
| `wan-ai/wan-2-7/text-to-video` | Open-weights t2v with an `audio_url` field — prompt describes the scene, audio drives the mouth. Pick for: full scene control (not just a portrait) with a specific voiceover MP3 + open-weights pipeline. Avoid for: simplest "portrait talks" — use OmniHuman. |
| `kling/lipsync/text-to-video` | Generates speech audio in-pass from a script and syncs it to the resulting video. Pick for: "write a script → get a video with synced speech", no audio file needed. Avoid for: precise lip-sync to a specific MP3 (audio is regenerated each call, not locked). |
| `happyhorse/happyhorse-1-0/text-to-video` / `image-to-video` | Arena #1 t2v / i2v with in-pass audio generated from the prompt. Quote the spoken line inside the prompt with `says clearly: "…"`. Pick for: written script, in-pass audio with strong overall quality, social/UGC clips. Avoid for: locking mouth to a pre-recorded voiceover. |
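The intent-based routing described above can be sketched as a small shell helper. The function name `pick_model` and the intent labels are hypothetical illustration, not part of the CLI; the model IDs are the ones listed in the table:

```shell
#!/bin/sh
# pick_model INTENT — map the user's inputs to a RunComfy model ID.
# INTENT is one of: video+audio, portrait+audio, script-only.
pick_model() {
  case "$1" in
    video+audio)    echo "sync/sync/lipsync/v2/pro" ;;     # mouth-swap on existing footage
    portrait+audio) echo "bytedance/omnihuman/api" ;;      # audio-driven avatar from a still
    script-only)    echo "kling/lipsync/text-to-video" ;;  # generate speech in-pass, no audio file
    *) echo "unknown intent: $1" >&2; return 64 ;;         # mirrors the CLI's bad-args code
  esac
}

pick_model video+audio   # → sync/sync/lipsync/v2/pro
```

Dropping to `sync/sync/lipsync/v2` for cost-sensitive batches, per the table, would be a one-line change.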
`sync/sync/lipsync/v2/pro`:

```
runcomfy run sync/sync/lipsync/v2/pro \
  --input '{
    "video_url": "https://your-cdn.example/source-video.mp4",
    "audio_url": "https://your-cdn.example/voiceover.mp3"
  }' \
  --output-dir ./out
```

`bytedance/omnihuman/api`:

```
runcomfy run bytedance/omnihuman/api \
  --input '{
    "image_url": "https://your-cdn.example/portrait.jpg",
    "audio_url": "https://your-cdn.example/voiceover.mp3"
  }' \
  --output-dir ./out
```

`kling/lipsync/audio-to-video`:

```
runcomfy run kling/lipsync/audio-to-video \
  --input '{
    "video_url": "https://your-cdn.example/source-video.mp4",
    "audio_url": "https://your-cdn.example/voiceover.mp3"
  }' \
  --output-dir ./out
```

| code | meaning |
|---|---|
| 0 | success |
| 64 | bad CLI args |
| 65 | bad input JSON / schema mismatch |
| 69 | upstream 5xx |
| 75 | retryable: timeout / 429 |
| 77 | not signed in or token rejected |
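Since code 75 is explicitly retryable (timeout / 429) while the others are not, a caller can branch on the exit status. A minimal sketch — `run_with_retry` is a hypothetical wrapper, not part of the CLI:

```shell
#!/bin/sh
# Retry a command while it exits 75 (retryable: timeout / 429);
# return immediately on success or any non-retryable failure.
run_with_retry() {
  attempts=0
  while [ "$attempts" -lt 3 ]; do
    "$@"
    rc=$?
    [ "$rc" -ne 75 ] && return "$rc"   # success (0) or a hard failure (64/65/69/77)
    attempts=$((attempts + 1))
    sleep "$attempts"                  # simple linear backoff
  done
  return 75                            # still retryable after 3 attempts
}

# Usage:
# run_with_retry runcomfy run sync/sync/lipsync/v2/pro --input "$JSON" --output-dir ./out
run_with_retry true
```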
Auth: `runcomfy login` stores a token at `~/.config/runcomfy/token.json`; CI can set `RUNCOMFY_TOKEN` instead. Network: `model-api.runcomfy.net`, `*.runcomfy.net`, `*.runcomfy.com`. Allowed tools: `Bash(runcomfy *)`. Related skills: `runcomfy-cli`, `ai-avatar-video`, `ai-video-generation`, `face-swap`, `video-edit`.
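For the scaled / batch case the model list points at the standard `sync/sync/lipsync/v2` tier. A minimal sketch of looping one voiceover over many clips — the `videos.txt` manifest, helper names, and printf-based JSON templating are assumptions, not CLI features:

```shell
#!/bin/sh
AUDIO_URL="https://your-cdn.example/voiceover.mp3"

# Emit the --input JSON for one clip (printf keeps shell quoting simple).
build_input() {
  printf '{"video_url": "%s", "audio_url": "%s"}' "$1" "$2"
}

# Run one clip through the cheaper standard tier.
lipsync_one() {
  runcomfy run sync/sync/lipsync/v2 \
    --input "$(build_input "$1" "$AUDIO_URL")" \
    --output-dir ./out
}

# Usage, with one video URL per line in videos.txt:
# while IFS= read -r url; do lipsync_one "$url"; done < videos.txt
```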