Swap a face / character into video or images on RunComfy via the `runcomfy` CLI. Routes across community Wan 2.2 Animate (audio-driven character animation + identity swap), GPT Image 2 Edit (single-shot precise face swap on still images via reference composition), Nano Banana 2 Edit (batch identity-preserving swap), Flux Kontext (single-ref high-fidelity local face edit), and Kling 2.6 Motion Control Pro (transfers motion from one performance onto a target character). Picks the right model for the user's actual intent: single still vs video, full character vs face only, dialog scene vs silent motion. Triggers on "face swap", "swap face", "deepfake", "face replacement", "character swap", "head swap", "put X's face on Y", "make this video star X", "replace the actor in this video", "swap the character in the photo", "deepfake video", "ReActor alternative", or any explicit ask to substitute one identity for another.
Install the skill:

```shell
npx skill4agent add agentspace-so/runcomfy-agent-skills face-swap
# or:
npx skills add agentspace-so/runcomfy-agent-skills --skill face-swap -g
```

Quickstart with the `runcomfy` CLI:

```shell
# 1. Install (see the runcomfy-cli skill for details)
npm i -g @runcomfy/cli          # or: npx -y @runcomfy/cli --version

# 2. Sign in
runcomfy login                  # or in CI: export RUNCOMFY_TOKEN=<token>

# 3. Swap
runcomfy run <vendor>/<model>/<endpoint> \
  --input '{"image_url": "...", "identity_url": "..."}' \
  --output-dir ./out
```

### `community/wan-2-2-animate/api`

Featured RunComfy endpoint under `/feature/character-swap`. Audio-driven full-body character animation: one reference image of the new identity + audio → a video of that character driven by the audio. Pick for: replacing a character in a scene with a new identity, dubbed clips; both stylized and photoreal characters work. Avoid for: preserving the motion of a specific source video (use Kling Motion Control instead).
### `kling/kling-2-6/motion-control-pro`

Takes a reference performance video + a target character image and produces the target performing the reference motion; the face swap is a byproduct. Pick for: preserving the exact source motion/blocking on a new character; stylized characters are handled cleanly. Avoid for: a simple "swap face in an existing video" without motion preservation (use Wan 2.2 Animate instead).
### `google/nano-banana-2/edit`

Identity-preserving by default, 1–20 input images per call, spatial language honored. Pass the identity reference as `image_urls[0]` and the scene images after it. Pick for: keeping the same identity consistent across multiple frames (SKU shots, A/B variants, narrative panels). Avoid for: precise multi-reference composition ("face from image 1 onto body in image 2"); use GPT Image 2 Edit instead.
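Because the identity reference must land at `image_urls[0]`, it can help to assemble the `--input` JSON programmatically. A minimal sketch, assuming placeholder URLs and illustrative variable names (`IDENTITY_URL`, `SCENE_URLS` are not CLI conventions):

```shell
#!/usr/bin/env sh
# Build the nano-banana-2 input JSON with the identity reference first,
# followed by every scene URL (identity must be image_urls[0]).
IDENTITY_URL="https://your-cdn.example/identity-ref.jpg"
SCENE_URLS="https://your-cdn.example/scene-1.jpg
https://your-cdn.example/scene-2.jpg"

urls="\"$IDENTITY_URL\""                       # identity goes first
for u in $SCENE_URLS; do urls="$urls, \"$u\""; done

# Emit the JSON body suitable for `runcomfy run ... --input "$(...)"`.
printf '{"prompt": "%s", "image_urls": [%s]}\n' \
  "Replace the face in each image with the face in the first image." \
  "$urls"
```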
### `openai/gpt-image-2/edit`

Up to 10 reference images, multilingual in-image text rewriting, layout-precise compositional instructions. Pick for: a hero still where the exact face from a portrait must land in a scene, with explicit role assignment ("image 1", "image 2"); preserves pose + lighting + background while swapping only the face. Avoid for: 1–20 image batches; use Nano Banana 2 Edit instead.
### `blackforestlabs/flux-1-kontext/pro/edit`

Single source image, single declarative instruction, maximum-fidelity preservation of everything except the targeted edit. Pick for: "keep pose / clothing / hair / lighting / background, change only the face to [prose description]"; works without a reference image of the new identity. Avoid for: batch or multi-reference jobs, or when you have a target face image to swap in; use Nano Banana 2 Edit or GPT Image 2 Edit instead.
Audio-driven talking-head identity swap (face + voice in one pass)? Use the `ai-avatar-video` skill; OmniHuman handles face + audio together.
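The routing rules above can be sketched as a small shell helper. The function and its intent flags are illustrative only (they are not part of the `runcomfy` CLI); it just encodes the pick/avoid logic from this section:

```shell
#!/usr/bin/env sh
# pick_model: map a swap intent to a model id, mirroring the routing above.
# Usage: pick_model <still|video> <has_identity_image yes|no> \
#                   <preserve_motion yes|no> <batch yes|no>
pick_model() {
  media="$1"; has_ref="$2"; keep_motion="$3"; batch="$4"
  if [ "$media" = "video" ]; then
    if [ "$keep_motion" = "yes" ]; then
      echo "kling/kling-2-6/motion-control-pro"    # keep the source performance
    else
      echo "community/wan-2-2-animate/api"         # audio-driven character swap
    fi
  elif [ "$batch" = "yes" ]; then
    echo "google/nano-banana-2/edit"               # same identity, many frames
  elif [ "$has_ref" = "yes" ]; then
    echo "openai/gpt-image-2/edit"                 # precise multi-ref face swap
  else
    echo "blackforestlabs/flux-1-kontext/pro/edit" # prose-described face, no ref
  fi
}

pick_model video no yes no   # -> kling/kling-2-6/motion-control-pro
```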
Example (`community/wan-2-2-animate/api`):

```shell
runcomfy run community/wan-2-2-animate/api \
  --input '{
    "image_url": "https://your-cdn.example/new-character.png",
    "audio_url": "https://your-cdn.example/voiceover.mp3"
  }' \
  --output-dir ./out
```

Example (`kling/kling-2-6/motion-control-pro`):

```shell
runcomfy run kling/kling-2-6/motion-control-pro \
  --input '{
    "reference_video_url": "https://your-cdn.example/source-performance.mp4",
    "character_image_url": "https://your-cdn.example/target-character.png"
  }' \
  --output-dir ./out
```

Parameters for `openai/gpt-image-2/edit`:

| Field | Type | Required | Default | Notes |
|---|---|---|---|---|
| `prompt` | string | yes | — | Compositional instruction; quote roles explicitly |
| `images` | string[] | yes | — | Up to 10 HTTPS reference URLs. Image 1 is primary |
| `size` | enum | no | `auto` | |
Example:

```shell
runcomfy run openai/gpt-image-2/edit \
  --input '{
    "prompt": "Replace the face of the person in image 1 with the face from image 2. Preserve image 1 pose, clothing, lighting, and background exactly. Match skin tone and lighting to image 1.",
    "images": [
      "https://your-cdn.example/target-scene.jpg",
      "https://your-cdn.example/identity-face.jpg"
    ],
    "size": "auto"
  }' \
  --output-dir ./out
```

Prompt tips: assign roles explicitly ("image 1" is the scene, "image 2" is the identity face); lock context with "Preserve pose, clothing, lighting, and background exactly. Replace only the face."; blend the result with "match skin tone and lighting to image 1".

Example (`google/nano-banana-2/edit`):

```shell
runcomfy run google/nano-banana-2/edit \
  --input '{
    "prompt": "Replace the face in each image with the face shown in the first image. Keep all other elements — pose, clothing, lighting, background — unchanged.",
    "image_urls": [
      "https://your-cdn.example/identity-ref.jpg",
      "https://your-cdn.example/scene-1.jpg",
      "https://your-cdn.example/scene-2.jpg",
      "https://your-cdn.example/scene-3.jpg"
    ],
    "aspect_ratio": "auto",
    "resolution": "1K"
  }' \
  --output-dir ./out
```

`aspect_ratio` and `resolution` are optional; see the `image-edit` skill for the full schema.

Example (`blackforestlabs/flux-1-kontext/pro/edit`):

```shell
runcomfy run blackforestlabs/flux-1-kontext/pro/edit \
  --input '{
    "prompt": "Keep pose, clothing, hair, lighting, and background exactly. Change only the face to that of a 35-year-old woman with high cheekbones, hazel eyes, and a small scar above the right eyebrow.",
    "image": "https://your-cdn.example/scene.jpg"
  }' \
  --output-dir ./out
```

Quick reference: Nano Banana 2 takes the identity as `image_urls[0]` (plus optional `aspect_ratio` and `resolution`); GPT Image 2 takes `images: [scene, face]`. For talking heads, see the `ai-avatar-video` skill; related pages: `/models/feature/character-swap`, `/models/feature/lip-sync`, `best-image-editing-models`, and the `kling` vendor page.

Exit codes:

| code | meaning |
|---|---|
| 0 | success |
| 64 | bad CLI args |
| 65 | bad input JSON / schema mismatch |
| 69 | upstream 5xx |
| 75 | retryable: timeout / 429 |
| 77 | not signed in or token rejected |
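The exit codes above can drive a simple retry policy: only 75 is marked retryable, so everything else should fail fast. A minimal sketch (the wrapper name, retry count, and backoff are arbitrary choices, not CLI defaults):

```shell
#!/usr/bin/env sh
# run_with_retry: retry a command only on exit code 75 (timeout / 429).
# Any other nonzero code (64/65/69/77) is permanent and returned as-is.
run_with_retry() {
  attempts=0
  while :; do
    "$@" && rc=0 || rc=$?
    [ "$rc" -eq 0 ] && return 0
    [ "$rc" -ne 75 ] && return "$rc"       # not retryable: bail immediately
    attempts=$((attempts + 1))
    [ "$attempts" -ge 3 ] && return "$rc"  # give up after 3 retryable failures
    sleep "$attempts"                      # linear backoff: 1s, 2s
  done
}

# run_with_retry runcomfy run openai/gpt-image-2/edit --input "$JSON" --output-dir ./out
```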
Notes:

- Invoke a model with `runcomfy run <model_id>`, pass parameters as JSON via `--input`, and collect results from `--output-dir`.
- Install with `npm i -g @runcomfy/cli`, or run ad hoc with `npx -y @runcomfy/cli`.
- `runcomfy login` stores a token at `~/.config/runcomfy/token.json`; in CI, set `RUNCOMFY_TOKEN` instead.
- API calls go to `model-api.runcomfy.net`; if you restrict network egress, allow `*.runcomfy.net` and `*.runcomfy.com`.
- In agent configurations, grant the CLI with `allowed-tools: Bash(runcomfy *)` so any `runcomfy <subcommand>` is permitted.
- Related skills: `runcomfy-cli`, `ai-avatar-video`, `ai-video-generation`, `video-edit`, `image-edit`, `lipsync`.
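For CI jobs that rely on `RUNCOMFY_TOKEN` or the stored token file, a preflight check can fail early with the same code the CLI uses for auth failures (77). The helper below is a sketch, not part of the CLI:

```shell
#!/usr/bin/env sh
# require_auth: return 77 (the CLI's "not signed in" code) when neither
# the RUNCOMFY_TOKEN env var nor the stored token file is present.
require_auth() {
  [ -n "${RUNCOMFY_TOKEN:-}" ] && return 0
  [ -f "${HOME}/.config/runcomfy/token.json" ] && return 0
  echo "error: set RUNCOMFY_TOKEN or run 'runcomfy login' first" >&2
  return 77
}

# require_auth && runcomfy run <vendor>/<model>/<endpoint> --input "$JSON" --output-dir ./out
```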