Generate AI videos using the varg SDK React engine. Use when creating videos, animations, talking characters, slideshows, or social media content.
```bash
npx skill4agent add varghq/sdk varg-video-generation
bunx vargai init
bunx vargai hello

# Check which API keys are configured
cat .env 2>/dev/null | grep -E "^(FAL_KEY|ELEVENLABS_API_KEY)=" || echo "No API keys found"
```

| Detail | Value |
|---|---|
| Provider | Fal.ai |
| Get it | https://fal.ai/dashboard/keys |
| Free tier | Yes (limited credits) |
| Used for | Image generation (Flux), Video generation (Wan 2.5, Kling) |
Set `FAL_KEY` in `.env`:

```bash
FAL_KEY=fal_xxxxx
```

| Feature | Key | Provider | URL |
|---|---|---|---|
| Music/Voice | `ELEVENLABS_API_KEY` | ElevenLabs | https://elevenlabs.io/app/settings/api-keys |
| Lipsync | | Replicate | https://replicate.com/account/api-tokens |
| Transcription | | Groq | https://console.groq.com/keys |
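Keys live in `.env` at the project root. A minimal sketch with the required key and the one optional key whose name appears in the setup check above (values are placeholders):

```shell
# .env — placeholder values, replace with your own keys
FAL_KEY=fal_xxxxx            # required: Flux images, Wan 2.5 / Kling video
ELEVENLABS_API_KEY=xxxxx     # optional: music, voice
```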
Slideshow — one clip per image prompt:

```tsx
/** @jsxImportSource vargai */
import { Render, Clip, Image } from "vargai/react";

const SCENES = ["sunset over ocean", "mountain peaks", "city at night"];

export default (
  <Render width={1080} height={1920}>
    {SCENES.map((prompt, i) => (
      <Clip key={i} duration={3} transition={{ name: "fade", duration: 0.5 }}>
        <Image prompt={prompt} zoom="in" />
      </Clip>
    ))}
  </Render>
);
```

Image-to-video with background music:

```tsx
/** @jsxImportSource vargai */
import { Render, Clip, Image, Video, Music } from "vargai/react";
import { fal, elevenlabs } from "vargai/ai";

const cat = Image({ prompt: "cute cat on windowsill" });

export default (
  <Render width={1080} height={1920}>
    <Music prompt="upbeat electronic" model={elevenlabs.musicModel()} />
    <Clip duration={5}>
      <Video
        prompt={{ text: "cat turns head, blinks slowly", images: [cat] }}
        model={fal.videoModel("wan-2.5")}
      />
    </Clip>
  </Render>
);
```

Talking character with voiceover and captions:

```tsx
/** @jsxImportSource vargai */
import { Render, Clip, Image, Video, Speech, Captions } from "vargai/react";
import { fal, elevenlabs } from "vargai/ai";

const robot = Image({ prompt: "friendly robot, blue metallic", aspectRatio: "9:16" });

const voiceover = Speech({
  model: elevenlabs.speechModel("eleven_multilingual_v2"),
  voice: "adam",
  children: "Hello! I'm your AI assistant. Let's create something amazing!",
});

export default (
  <Render width={1080} height={1920}>
    <Clip duration={5}>
      <Video
        prompt={{ text: "robot talking, subtle head movements", images: [robot] }}
        model={fal.videoModel("wan-2.5")}
      />
    </Clip>
    <Captions src={voiceover} style="tiktok" />
  </Render>
);
```

Render the result:

```bash
bunx vargai render your-video.tsx
```

| Component | Purpose | Required Key |
|---|---|---|
| `<Render>` | Root container | - |
| `<Clip>` | Sequential segment | - |
| `<Image>` | AI image | FAL |
| `<Video>` | Image-to-video | FAL |
| `<Music>` | Background music | ElevenLabs |
| `<Speech>` | Text-to-speech | ElevenLabs |
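As a rough sanity check on timing: the three-scene slideshow above uses 3-second clips joined by 0.5-second fades. Assuming transitions overlap the two adjacent clips (an assumption about varg's timeline model, not confirmed by its docs), the total runtime can be estimated in plain TypeScript:

```typescript
// Estimate total timeline length for sequential clips.
// Assumption (hypothetical): each transition overlaps the two adjacent
// clips, so it subtracts once from the summed durations.
function timelineDuration(clipDurations: number[], transition: number): number {
  const total = clipDurations.reduce((sum, d) => sum + d, 0);
  const overlaps = Math.max(0, clipDurations.length - 1) * transition;
  return total - overlaps;
}

// Three 3-second scenes with 0.5s fades, as in the slideshow example:
console.log(timelineDuration([3, 3, 3], 0.5)); // 8
```

If transitions instead play between clips without overlapping, the runtime would be the plain sum (9 seconds); check the rendered output to see which model applies.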
Character consistency — reuse the same `Image` reference to get the same generated image across shots:

```tsx
const character = Image({ prompt: "blue robot" });

// Reuse same reference = same generated image
<Animate image={character} motion="waving" />
<Animate image={character} motion="dancing" />
```

Transitions:

```tsx
<Clip transition={{ name: "fade", duration: 0.5 }}>
// Options: fade, crossfade, wipeleft, cube, slideup, etc.
```

Aspect ratios: `9:16`, `16:9`, `1:1`.

Zoom and pan:

```tsx
<Image prompt="landscape" zoom="in" />   // Zoom in
<Image prompt="landscape" zoom="out" />  // Zoom out
<Image prompt="landscape" zoom="left" /> // Pan left
```

If `.env` is missing, run `bunx vargai init` to create it, then render with `bunx vargai render hello.tsx`.
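The aspect ratios above correspond to concrete pixel dimensions. A small helper (hypothetical, for illustration only — not part of the vargai SDK) that derives width × height from a 1080-pixel short side, matching the 1080×1920 renders used throughout the examples:

```typescript
// Map an aspect ratio string like "9:16" to [width, height] pixels,
// given the length of the shorter side. Hypothetical helper, not SDK API.
function dimensions(ratio: string, shortSide = 1080): [number, number] {
  const [w, h] = ratio.split(":").map(Number);
  const scale = shortSide / Math.min(w, h);
  return [Math.round(w * scale), Math.round(h * scale)];
}

console.log(dimensions("9:16")); // [1080, 1920] — vertical, as in the examples
console.log(dimensions("16:9")); // [1920, 1080]
console.log(dimensions("1:1"));  // [1080, 1080]
```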