daggr

Build visual DAG pipelines connecting Gradio Spaces, HF Inference Providers, and Python functions.

Quick Start

```python
from daggr import GradioNode, FnNode, InferenceNode, Graph, ItemList
import gradio as gr

graph = Graph(name="My Workflow", nodes=[node1, node2, ...])
graph.launch()  # Starts web server with visual DAG UI
```
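Graph execution is dependency-ordered: a node runs only after every node feeding its input ports has produced output. As a stdlib-only sketch of that ordering (not daggr's actual scheduler; the node names here are made up):

```python
from graphlib import TopologicalSorter

# Hypothetical dependency map: each node lists the nodes it reads from.
deps = {"generate": set(), "caption": {"generate"}, "combine": {"generate", "caption"}}

order = list(TopologicalSorter(deps).static_order())
# Upstream nodes always appear before their consumers.
```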

Node Types

GradioNode - Gradio Spaces

```python
node = GradioNode(
    space_or_url="owner/space-name",
    api_name="/endpoint",
    inputs={
        "param": gr.Textbox(label="Input"),   # UI input
        "other": other_node.output_port,       # Port connection
        "fixed": "constant_value",             # Fixed value
    },
    postprocess=lambda *returns: returns[0],   # Transform response
    outputs={"result": gr.Image(label="Output")},
)
```

Example: image generation

```python
img = GradioNode(
    "Tongyi-MAI/Z-Image-Turbo",
    api_name="/generate",
    inputs={"prompt": gr.Textbox(), "resolution": "1024x1024 ( 1:1 )"},
    postprocess=lambda imgs, *_: imgs[0]["image"],
    outputs={"image": gr.Image()},
)
```

Find Spaces with semantic queries (describe what you need): `https://huggingface.co/api/spaces/semantic-search?q=generate+music+for+a+video&sdk=gradio&includeNonRunning=false`
Or by category: `https://huggingface.co/api/spaces/semantic-search?category=image-generation&sdk=gradio&includeNonRunning=false`
(categories: image-generation | video-generation | text-generation | speech-synthesis | music-generation | voice-cloning | image-editing | background-removal | image-upscaling | ocr | style-transfer | image-captioning)
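The search URL can also be assembled programmatically; a minimal stdlib sketch (only the query text varies):

```python
from urllib.parse import urlencode

base = "https://huggingface.co/api/spaces/semantic-search"
params = {"q": "generate music for a video", "sdk": "gradio", "includeNonRunning": "false"}
url = f"{base}?{urlencode(params)}"
# urlencode escapes the query text and joins the parameters with &
```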

FnNode - Python Functions

```python
def process(input1: str, input2: int) -> str:
    return f"{input1}: {input2}"

node = FnNode(
    fn=process,
    inputs={"input1": gr.Textbox(), "input2": other_node.port},
    outputs={"result": gr.Textbox()},
)
```
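Since FnNode wraps a plain Python callable, the function can be unit-tested on its own before it is wired into a graph:

```python
def process(input1: str, input2: int) -> str:
    return f"{input1}: {input2}"

# The callable behaves the same inside or outside a FnNode.
result = process("count", 3)
```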

InferenceNode - HF Inference Providers

Find models:
https://huggingface.co/api/models?inference_provider=all&pipeline_tag=text-to-image
(swap pipeline_tag: text-to-image | image-to-image | image-to-text | image-to-video | text-to-video | text-to-speech | automatic-speech-recognition)

```python
node = InferenceNode(
    model="org/model:provider",  # model:provider (fal-ai, replicate, together, etc.)
    inputs={"image": other_node.image, "prompt": gr.Textbox()},
    outputs={"image": gr.Image()},
)
```

Auth: InferenceNode and ZeroGPU Spaces require an HF token. If not in env, ask the user to create one:
https://huggingface.co/settings/tokens/new?ownUserPermissions=inference.serverless.write&tokenType=fineGrained
Out of quota? Pro gives 8x ZeroGPU + 10x inference:
https://huggingface.co/subscribe/pro

Port Connections

Pass ports via `inputs={...}`:

```python
inputs={"param": previous_node.output_port}       # Basic connection
inputs={"item": items_node.items.field_name}      # Scattered (per-item)
inputs={"all": scattered_node.output.all()}       # Gathered (collect list)
```
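Conceptually, a scattered connection applies the downstream node once per item, and `.all()` gathers those per-item results back into one list for a single downstream call. A plain-Python sketch of the semantics (not daggr internals):

```python
items = [{"text": "a"}, {"text": "b"}, {"text": "c"}]

# Scattered: the downstream function runs once per item.
scattered = [item["text"].upper() for item in items]

# Gathered: .all() hands the collected list to one downstream call.
gathered = ", ".join(scattered)
```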

ItemList - Dynamic Lists

```python
def gen_items(n: int) -> list:
    return [{"text": f"Item {i}"} for i in range(n)]

items = FnNode(
    fn=gen_items,
    outputs={"items": ItemList(text=gr.Textbox())},
)
```
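The dict keys returned by the function must match the component names given to ItemList (here, `text`); the shape can be checked directly:

```python
def gen_items(n: int) -> list:
    return [{"text": f"Item {i}"} for i in range(n)]

# Each dict key corresponds to an ItemList component name.
items = gen_items(2)
```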

Runs once per item

```python
process = FnNode(fn=process_item, inputs={"text": items.items.text}, outputs={"result": gr.Textbox()})
```

Collect all results

```python
final = FnNode(fn=combine, inputs={"all": process.result.all()}, outputs={"out": gr.Textbox()})
```
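`combine` is not defined above; one hypothetical implementation that joins the gathered results (the name and joining behavior are illustrative, not part of daggr):

```python
def combine(all: list) -> str:
    # The parameter is named "all" to match the input key it is wired to.
    return "\n".join(str(x) for x in all)

joined = combine(["r1", "r2"])
```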

Checklist

  1. Check API before using a Space:
     ```bash
     curl -s "https://<space-subdomain>.hf.space/gradio_api/openapi.json"
     ```
     Replace `<space-subdomain>` with the Space's subdomain (e.g., `Tongyi-MAI/Z-Image-Turbo` becomes `tongyi-mai-z-image-turbo`). (Spaces also have a "Use via API" link in the footer with endpoints and code snippets.)
  2. Handle files (Gradio returns dicts):
     ```python
     path = file.get("path") if isinstance(file, dict) else file
     ```
  3. Use `postprocess` for multi-return APIs:
     ```python
     postprocess=lambda imgs, seed, num: imgs[0]["image"]
     ```
  4. Debug with `.test()` to validate a node in isolation:
     ```python
     node.test(param="value")
     ```
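The file-handling pattern from step 2 covers both shapes Gradio may return; a quick self-contained check:

```python
def to_path(file):
    # Gradio may return either a dict with a "path" key or a bare string.
    return file.get("path") if isinstance(file, dict) else file

dict_case = to_path({"path": "/tmp/out.png"})
str_case = to_path("/tmp/out.png")
```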

Common Patterns


Image Generation

```python
GradioNode("Tongyi-MAI/Z-Image-Turbo", api_name="/generate", inputs={"prompt": gr.Textbox(), "resolution": "1024x1024 ( 1:1 )"}, postprocess=lambda imgs, *_: imgs[0]["image"], outputs={"image": gr.Image()})
```

Text-to-Speech

```python
GradioNode("Qwen/Qwen3-TTS", api_name="/generate_voice_design", inputs={"text": gr.Textbox(), "language": "English", "voice_description": "..."}, postprocess=lambda audio, status: audio, outputs={"audio": gr.Audio()})
```

Image-to-Video

```python
GradioNode("alexnasa/ltx-2-TURBO", api_name="/generate_video", inputs={"input_image": img.image, "prompt": gr.Textbox(), "duration": 5}, postprocess=lambda video, seed: video, outputs={"video": gr.Video()})
```

ffmpeg composition (import tempfile, subprocess)

```python
def combine(video: str | dict, audio: str | dict) -> str:
    v = video.get("path") if isinstance(video, dict) else video
    a = audio.get("path") if isinstance(audio, dict) else audio
    out = tempfile.mktemp(suffix=".mp4")
    subprocess.run(["ffmpeg", "-y", "-i", v, "-i", a, "-shortest", out])
    return out
```

Run

```bash
uvx --python 3.12 daggr workflow.py &  # Launch in background, hot reloads on file changes
```

Authentication

Local development: Use `hf auth login` or set the `HF_TOKEN` env var. This enables ZeroGPU quota tracking, private Spaces access, and gated models.

Deployed Spaces: Users can click "Login" in the UI and paste their HF token. This enables persistence (sheets) so they can save outputs and resume work later. The token is stored in browser localStorage.

When deploying: Pass secrets via `--secret HF_TOKEN=xxx` if your workflow needs server-side auth (e.g., for gated models in FnNode). Warning: this uses the deployer's token for all users.
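A small sketch of how server-side code might pick up the token (the helper name is made up; only the `HF_TOKEN` variable comes from the docs above):

```python
import os

def get_hf_token():
    # Returns None when no token is configured, so callers can
    # prompt the user to create one instead of failing silently.
    return os.environ.get("HF_TOKEN")

os.environ["HF_TOKEN"] = "hf_example"  # illustration only
token = get_hf_token()
```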

Deploy to Hugging Face Spaces

Only deploy if the user has explicitly asked to publish/deploy their workflow.

```bash
daggr deploy workflow.py
```

This extracts the Graph, creates a Space named after it, and uploads everything.

Options:

```bash
daggr deploy workflow.py --name my-space      # Custom Space name
daggr deploy workflow.py --org huggingface    # Deploy to an organization
daggr deploy workflow.py --private            # Private Space
daggr deploy workflow.py --hardware t4-small  # GPU (t4-small, t4-medium, a10g-small, etc.)
daggr deploy workflow.py --secret KEY=value   # Add secrets (repeatable)
daggr deploy workflow.py --dry-run            # Preview without deploying
```