Replicate CLI

The Replicate CLI is a command-line tool for interacting with Replicate's AI model platform. It enables running predictions, managing models, creating deployments, and fine-tuning models directly from the terminal.

Authentication

Before using the Replicate CLI, set the API token:

```bash
export REPLICATE_API_TOKEN=<token-from-replicate.com/account>
```

Alternatively, authenticate interactively:

```bash
replicate auth login
```

Verify authentication:

```bash
replicate account current
```

Core Commands

Running Predictions

The primary use case is running predictions against hosted models.

Basic prediction:

```bash
replicate run <owner/model> input_key=value
```

Examples:

Image generation:

```bash
replicate run stability-ai/sdxl prompt="a studio photo of a rainbow colored corgi"
```

Text generation with streaming:

```bash
replicate run meta/llama-2-70b-chat --stream prompt="Tell me a joke"
```

Prediction flags:

  • --stream - Stream output tokens in real time (for text models)
  • --no-wait - Submit the prediction without waiting for completion
  • --web - Open the prediction in the browser
  • --json - Output the result as JSON
  • --save - Save outputs to a local directory
  • --output-directory <dir> - Specify the output directory (default: ./{prediction-id})

Input Handling

File uploads: Prefix local file paths with @:

```bash
replicate run nightmareai/real-esrgan image=@photo.jpg
```

Output chaining: Use the {{.output}} template syntax to chain predictions:

```bash
replicate run stability-ai/sdxl prompt="a corgi" | \
replicate run nightmareai/real-esrgan image={{.output[0]}}
```
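When the template syntax is not convenient (for example, when a script needs the intermediate URL), the same chaining can be sketched with --json and standard shell tools. This is a sketch under assumptions: the top-level "output" array in the JSON result follows the Replicate prediction object, and chain_predictions is hypothetical glue, not part of the CLI.

```bash
# Hypothetical alternative to {{.output}}: capture the first output URL
# from --json, then feed it to the next model. The "output" array is an
# assumption based on the Replicate prediction object.
chain_predictions() {
  img_url=$(replicate run stability-ai/sdxl prompt="a corgi" --json |
    sed -n 's/.*"output": *\[ *"\([^"]*\)".*/\1/p')
  replicate run nightmareai/real-esrgan image="$img_url"
}
```

This trades the one-liner for an intermediate variable you can log or reuse.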

Model Operations

View the model schema (required inputs and outputs):

```bash
replicate model schema <owner/model>
replicate model schema stability-ai/sdxl --json
```

List models:

```bash
replicate model list
replicate model list --json
```

Show model details:

```bash
replicate model show <owner/model>
```

Create a new model:

```bash
replicate model create <owner/name> \
  --hardware gpu-a100-large \
  --private \
  --description "Model description"
```

Model creation flags:

  • --hardware <sku> - Hardware SKU (see references/hardware.md)
  • --private / --public - Visibility setting
  • --description <text> - Model description
  • --github-url <url> - Link to the source repository
  • --license-url <url> - License information
  • --cover-image-url <url> - Cover image for the model page

Training (Fine-tuning)

Fine-tune models using the training command:

```bash
replicate train <base-model> \
  --destination <owner/new-model> \
  input_key=value
```

Example: fine-tune SDXL with DreamBooth:

```bash
replicate train stability-ai/sdxl \
  --destination myuser/custom-sdxl \
  --web \
  input_images=@training-images.zip \
  use_face_detection_instead=true
```

List trainings:

```bash
replicate training list
```

Show training details:

```bash
replicate training show <training-id>
```

Deployments

Deployments provide dedicated, always-on inference endpoints with predictable performance.

Create a deployment:

```bash
replicate deployments create <name> \
  --model <owner/model> \
  --hardware <sku> \
  --min-instances 1 \
  --max-instances 3
```

Example:

```bash
replicate deployments create text-to-image \
  --model stability-ai/sdxl \
  --hardware gpu-a100-large \
  --min-instances 1 \
  --max-instances 5
```

Update a deployment:

```bash
replicate deployments update <name> \
  --max-instances 10 \
  --version <version-id>
```

List deployments:

```bash
replicate deployments list
```

Show deployment details and schema:

```bash
replicate deployments show <name>
replicate deployments schema <name>
```

Hardware

List available hardware options:

```bash
replicate hardware list
```

See references/hardware.md for detailed hardware information and selection guidelines.

Scaffolding

Create a local development environment from an existing prediction:

```bash
replicate scaffold <prediction-id-or-url> --template=<node|python>
```

This generates a project with the prediction's model and inputs pre-configured.

Command Aliases

For convenience, these aliases are available:

| Alias | Equivalent Command |
| --- | --- |
| replicate run | replicate prediction create |
| replicate stream | replicate prediction create --stream |
| replicate train | replicate training create |

Short aliases for subcommands:

  • replicate m = replicate model
  • replicate p = replicate prediction
  • replicate t = replicate training
  • replicate d = replicate deployments
  • replicate hw = replicate hardware
  • replicate a = replicate account

Common Workflows

Image Generation Pipeline

Generate an image and upscale it:

```bash
replicate run stability-ai/sdxl \
  prompt="professional photo of a sunset" \
  negative_prompt="blurry, low quality" | \
replicate run nightmareai/real-esrgan \
  image={{.output[0]}} \
  --save
```

Check Model Inputs Before Running

Always check the model schema to understand required inputs:

```bash
replicate model schema owner/model-name
```

Batch Processing

Run predictions and save outputs:

```bash
for prompt in "cat" "dog" "bird"; do
  replicate run stability-ai/sdxl prompt="$prompt" --save --output-directory "./outputs/$prompt"
done
```
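For larger batches, waiting on each prediction serializes the work. A variant can submit everything with --no-wait and record the prediction IDs for later status checks. This is a sketch: submit_batch is a hypothetical helper, and it assumes the --json output includes a top-level "id" field, as in the jq example below.

```bash
# Hypothetical helper: submit a batch without waiting, printing one
# prediction ID per line. Assumes --json output has a top-level "id".
submit_batch() {
  for prompt in "$@"; do
    replicate run stability-ai/sdxl prompt="$prompt" --no-wait --json |
      sed -n 's/.*"id": *"\([^"]*\)".*/\1/p'
  done
}
```

Usage: `submit_batch "cat" "dog" "bird"` prints the IDs, which can be fed to `replicate prediction show` later.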

Monitor Long-Running Tasks

Submit without waiting, then check the status later:

```bash
# Submit
replicate run owner/model input=value --no-wait --json > prediction.json

# Check status later
replicate prediction show $(jq -r '.id' prediction.json)
```
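The status check can be wrapped in a polling loop. This is a sketch under assumptions: the "status" field and its terminal values (succeeded/failed/canceled) follow the Replicate predictions API, and whether `prediction show` accepts --json may vary by CLI version.

```bash
# Hypothetical polling helper: wait until a prediction reaches a terminal
# state, then print that state. The "status" field and its terminal values
# are assumptions based on the Replicate predictions API.
wait_for_prediction() {
  pred_id=$1
  while :; do
    status=$(replicate prediction show "$pred_id" --json |
      sed -n 's/.*"status": *"\([^"]*\)".*/\1/p')
    case "$status" in
      succeeded|failed|canceled) echo "$status"; return ;;
      *) sleep 5 ;;
    esac
  done
}
```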

Best Practices

  1. Always check the schema first - Run replicate model schema <model> to understand required and optional inputs before running predictions.
  2. Use streaming for text models - Add the --stream flag when running language models to see output in real time.
  3. Save outputs explicitly - Use --save and --output-directory to organize prediction outputs.
  4. Use JSON output for automation - Add the --json flag when parsing outputs programmatically.
  5. Open in web for debugging - Add the --web flag to view predictions in the Replicate dashboard for detailed logs.
  6. Chain predictions efficiently - Use the {{.output}} syntax to pass outputs between models without intermediate saves.
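Practices 1 and 4 combine naturally in scripts: verify the model resolves before spending a prediction, and keep the result as JSON. A minimal sketch, where run_checked is hypothetical glue around the CLI, not a built-in command:

```bash
# Hypothetical wrapper: confirm the schema resolves (practice 1), then run
# with --json for programmatic parsing (practice 4).
run_checked() {
  model=$1; shift
  replicate model schema "$model" >/dev/null || return 1  # fail fast on a bad model name
  replicate run "$model" "$@" --json
}
```

Usage: `run_checked owner/model prompt="hello" > result.json`.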

Troubleshooting

Authentication errors:
  • Verify REPLICATE_API_TOKEN is set correctly
  • Run replicate account current to test authentication

Model not found:
  • Check the model name format: owner/model-name
  • Verify the model exists at replicate.com

Input validation errors:
  • Run replicate model schema <model> to see required inputs
  • Check input types (string, number, file)

File upload issues:
  • Ensure the @ prefix is used for local files
  • Verify the file path is correct and the file exists
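For authentication errors, the first check above can be scripted. A minimal sketch (check_token is a hypothetical helper, not a CLI command):

```bash
# Hypothetical helper: report whether REPLICATE_API_TOKEN is set before
# debugging further. Does not validate the token against the API.
check_token() {
  if [ -z "${REPLICATE_API_TOKEN:-}" ]; then
    echo "REPLICATE_API_TOKEN is not set"
    return 1
  fi
  echo "REPLICATE_API_TOKEN is set"
}
```

If the token is set but authentication still fails, `replicate account current` validates it against the API.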

Additional Resources
