Pose-conditioned generation on RunComfy via the `runcomfy` CLI. Routes across Kling 2.6 Motion Control Pro / Standard (transfer the motion and blocking of a reference video onto a target character), the community Wan 2.2 Animate (audio-driven character animation with pose conditioning), and Z-Image Turbo ControlNet LoRA (pose-conditioned image generation from an OpenPose / DWPose / canny / depth control image). Picks the right route based on video vs. still and stylized vs. photoreal. Triggers on "controlnet", "control net", "pose control", "openpose", "DWPose", "transfer pose", "motion control", "pose driven", "character pose", "depth control", "canny edge", "use this pose", or any explicit ask to condition generation on a pose / skeleton / motion / depth / canny reference.
```shell
npx skill4agent add agentspace-so/runcomfy-agent-skills controlnet-pose
```

## Quickstart

```shell
# 1. Install (see runcomfy-cli skill for details)
npm i -g @runcomfy/cli   # or: npx -y @runcomfy/cli --version

# 2. Sign in
runcomfy login           # or in CI: export RUNCOMFY_TOKEN=<token>

# 3. Pose-conditioned generate
runcomfy run <vendor>/<model> \
  --input '{"reference_video_url": "...", "character_image_url": "..."}' \
  --output-dir ./out
```

## Models

| Model | When to use |
|---|---|
| `kling/kling-2-6/motion-control-pro` | Takes a reference performance video + a target character image, produces video of the target performing the reference motion / pose. Pick for: transferring a source video's motion / blocking onto a new character; dance choreography re-shots; sports motion onto a stylized character. Avoid for: still-image pose conditioning (use Z-Image ControlNet LoRA). |
| `kling/kling-2-6/motion-control-standard` | Cheaper Kling Motion Control tier. Pick for: drafts, iteration on motion-control compositions. Avoid for: final delivery (use Pro). |
| `community/wan-2-2-animate/video-to-video` | Community-published variant of Wan 2.2. Audio-driven character animation that also accepts pose-style conditioning. Pick for: stylized character animation, mascot work. Avoid for: photoreal subjects (use Kling Motion Control). |
| `tongyi-mai/z-image/turbo/controlnet/lora` | Z-Image Turbo with a ControlNet LoRA: feed a control image (pose skeleton, depth map, canny) and a prompt, get a generation conditioned on that control. Pick for: pose-locked image generation, character in a specific stance, depth-locked composition. Avoid for: complex multi-condition stacks (e.g. pose + depth + reference); those need a ComfyUI workflow. |
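The routing rule above (video vs. still, stylized vs. photoreal) can be sketched as a small shell helper. The `pick_model` function and its two arguments are illustrative, not part of the CLI; only the model ids come from this skill:

```shell
# pick_model MEDIUM STYLE  -> prints the model id this skill would route to.
# MEDIUM: video|image   STYLE: photoreal|stylized
# The helper itself is hypothetical; the ids are the ones listed above.
pick_model() {
  medium=$1; style=$2
  case "$medium/$style" in
    video/photoreal) echo "kling/kling-2-6/motion-control-pro" ;;
    video/stylized)  echo "community/wan-2-2-animate/video-to-video" ;;
    image/*)         echo "tongyi-mai/z-image/turbo/controlnet/lora" ;;
    *) echo "unknown medium/style: $medium/$style" >&2; return 64 ;;
  esac
}

pick_model video photoreal   # -> kling/kling-2-6/motion-control-pro
pick_model image stylized    # -> tongyi-mai/z-image/turbo/controlnet/lora
```

Swap `motion-control-pro` for `motion-control-standard` during drafting; the routing is otherwise the same.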
## Examples

`kling/kling-2-6/motion-control-pro` (same input shape for `motion-control-standard`):

```shell
runcomfy run kling/kling-2-6/motion-control-pro \
  --input '{
    "reference_video_url": "https://your-cdn.example/source-performance.mp4",
    "character_image_url": "https://your-cdn.example/target-character.png"
  }' \
  --output-dir ./out
```

`tongyi-mai/z-image/turbo/controlnet/lora`:

```shell
runcomfy run tongyi-mai/z-image/turbo/controlnet/lora \
  --input '{
    "prompt": "A samurai in battle stance, traditional armor, cherry-blossom forest background, cinematic 35mm",
    "control_image_url": "https://your-cdn.example/openpose-skeleton.png"
  }' \
  --output-dir ./out
```

## When you need a ComfyUI workflow instead

| Need | Workflow class |
|---|---|
| FLUX + multi-condition ControlNet (depth + canny + pose) | |
| Pose-driven motion video with VACE | |
| Pose-control lipsync (pose + audio together) | |
| Wan 2-2 Animate v2 with pose driving | |
| OpenPose motion alignment | |
| Pose-based character animation (Scail) | |
Related: `kling/feature/character-swap`.

## Exit codes

| code | meaning |
|---|---|
| 0 | success |
| 64 | bad CLI args |
| 65 | bad input JSON / schema mismatch |
| 69 | upstream 5xx |
| 75 | retryable: timeout / 429 |
| 77 | not signed in or token rejected |
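Since 75 is the only retryable code in the table, a wrapper can retry just that case and propagate everything else immediately. This loop is a sketch around the CLI, not part of it; the attempt count and backoff are arbitrary:

```shell
# retry_retryable CMD...: rerun the command while it exits 75 (timeout / 429),
# up to 3 attempts, returning any other exit code (including 0) immediately.
retry_retryable() {
  attempts=0
  while :; do
    "$@"; rc=$?
    attempts=$((attempts + 1))
    [ "$rc" -ne 75 ] && return "$rc"       # success or non-retryable failure
    [ "$attempts" -ge 3 ] && return "$rc"  # give up after 3 tries
    sleep "$attempts"                      # simple linear backoff
  done
}

# Usage (hypothetical invocation):
# retry_retryable runcomfy run kling/kling-2-6/motion-control-pro \
#   --input "$INPUT_JSON" --output-dir ./out
```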
## Notes

- Results are written to the directory given by `--output-dir`; inputs are passed as JSON via `--input`.
- Install globally with `npm i -g @runcomfy/cli`, or run ad hoc with `npx -y @runcomfy/cli`.
- Auth: `runcomfy login` caches a token at `~/.config/runcomfy/token.json`; in CI, set `RUNCOMFY_TOKEN` instead.
- Network: the CLI talks to `model-api.runcomfy.net` (allowlist `*.runcomfy.net` and `*.runcomfy.com`).
- Agent permission: `Bash(runcomfy *)`.
- Related skills: `runcomfy-cli`, `ai-video-generation`, `face-swap`, `ai-avatar-video`, `image-edit`.
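In CI, a quick preflight check that credentials exist avoids burning a whole job only to get exit code 77. A minimal sketch, assuming the token path noted above:

```shell
# check_creds: return 0 when either RUNCOMFY_TOKEN is set or a cached login
# token exists at ~/.config/runcomfy/token.json; return 77 (auth failure
# convention from the exit-code table) otherwise.
check_creds() {
  if [ -n "${RUNCOMFY_TOKEN:-}" ] || [ -f "$HOME/.config/runcomfy/token.json" ]; then
    return 0
  fi
  echo "no RunComfy credentials: set RUNCOMFY_TOKEN or run 'runcomfy login'" >&2
  return 77
}

# Usage in a CI step (hypothetical):
# check_creds || exit 77
# retry-or-run runcomfy commands only after this passes.
```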