openstoryline-install

OpenStoryline Install

Use this skill when the task is to install or repair a local source checkout of FireRed-OpenStoryline.
Keep the workflow deterministic:
  1. Confirm the repo path and read the current README.md and config.toml.
  2. Detect local prerequisites before changing anything.
  3. Prefer a local `venv` install unless the user explicitly asks for Docker or `conda`.
  4. Download resources only after Python dependencies succeed.
  5. Validate imports and config loading before claiming success.
This skill assumes macOS, Linux, or WSL with a POSIX shell.

What This Skill Covers

  • Clone the GitHub repo if needed
  • Create a Python environment
  • Install Python dependencies
  • Download `.storyline` models and `resource/` assets
  • Fill `config.toml` model settings
  • Start MCP and web servers
  • Explain common installation/documentation gaps

Preconditions

Check these first:
  • git
  • Python >= 3.11
  • ffmpeg
  • wget
  • unzip
Optional:
  • docker
  • conda
If `ffmpeg`, `wget`, or `unzip` are missing, install them through the OS package manager before continuing.
Examples:
  • macOS with Homebrew:

    ```bash
    brew install ffmpeg wget unzip
    ```

  • Debian/Ubuntu:

    ```bash
    sudo apt-get update
    sudo apt-get install -y ffmpeg wget unzip
    ```

If no supported package manager or permission is available, stop and report the missing system dependency clearly.
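The checks above can be scripted. This is a minimal sketch; `check_tool` is a hypothetical helper, not part of the repo, and the tool list should match your platform.

```shell
#!/bin/sh
# Probe each required tool and report status; check_tool is a
# hypothetical helper used only for this sketch.
check_tool() {
  command -v "$1" >/dev/null 2>&1 && echo "ok: $1" || echo "missing: $1"
}

for tool in git ffmpeg wget unzip; do
  check_tool "$tool"
done

# Enforce the Python >= 3.11 floor stated above.
if command -v python3 >/dev/null 2>&1; then
  python3 -c 'import sys; print("python_ok" if sys.version_info >= (3, 11) else "python_too_old")'
else
  echo "missing: python3"
fi
```

Run it from any directory; any `missing:` line is a hard stop per the rule above.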

Interpreter selection

First prefer any interpreter that already exists and passes version checks:
  1. A system Python >= 3.11
  2. An already available conda Python >= 3.11
  3. An already available pyenv Python >= 3.11, but only if basic stdlib modules work
Validate candidate interpreters before using them:

```bash
/path/to/python -c "import ssl, sqlite3, venv; print('stdlib_ok')"
```

If no supported interpreter already exists, prefer the conda fallback:

```bash
conda create -y -n openstoryline-py311 python=3.11
conda run -n openstoryline-py311 python --version
conda run -n openstoryline-py311 python -m venv .venv
```

After a supported interpreter is found, always create a repo-local `.venv` and continue using `.venv/bin/python` for install, config validation, and service startup.
Do not duplicate the rest of the workflow for pyenv or conda unless the user explicitly asks to stay inside a conda environment.
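The candidate scan above can be sketched as one helper. `find_python` is hypothetical; it prints the first interpreter that passes both the version floor and the stdlib check, and fails if none qualifies.

```shell
#!/bin/sh
# find_python: hypothetical helper for the interpreter-selection steps
# above. Tries each name in order; a candidate must be >= 3.11 and
# import ssl, sqlite3, and venv cleanly.
find_python() {
  for name in "$@"; do
    path=$(command -v "$name" 2>/dev/null) || continue
    "$path" -c 'import sys, ssl, sqlite3, venv; sys.exit(0 if sys.version_info >= (3, 11) else 1)' 2>/dev/null || continue
    echo "$path"
    return 0
  done
  return 1
}

find_python python3.12 python3.11 python3 || echo "no supported interpreter; use the conda fallback"
```

On success, feed the printed path into `/path/to/python -m venv .venv` as shown above.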

Preferred Install Path

From the repo root:

```bash
/path/to/python -m venv .venv
.venv/bin/python -m pip install --upgrade pip
.venv/bin/python -m pip install -r requirements.txt
bash download.sh
```

Notes:
  • `download.sh` pulls both model weights and a large resource archive. It can take a long time and may resume after network drops.
  • The resource download is required for a full local run, not just the Python package install.
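The sequence above can be wrapped in one function so the "resources only after dependencies succeed" ordering is enforced by `&&` chaining. `install_repo` is a hypothetical wrapper, not a repo script.

```shell
#!/bin/sh
# install_repo: hypothetical wrapper over the commands above.
# $1 = repo root, $2 = base interpreter. The subshell keeps the
# caller's working directory unchanged, and download.sh only runs
# if every earlier step succeeded.
# (The pip self-upgrade from the listing above is omitted here to
# keep the sketch offline-friendly.)
install_repo() {
  ( cd "$1" &&
    "$2" -m venv .venv &&
    .venv/bin/python -m pip install -r requirements.txt &&
    bash download.sh )
}
```

Invoke as, e.g., `install_repo /path/to/FireRed-OpenStoryline python3`; a nonzero exit means the chain stopped before the resource download.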

Configuration

Before starting the app, update `config.toml`.
You can use `scripts/update_config.py`.
At minimum, fill:

```bash
.venv/bin/python scripts/update_config.py --config ./config.toml --set llm.model=REPLACE_WITH_REAL_MODEL
.venv/bin/python scripts/update_config.py --config ./config.toml --set llm.base_url=REPLACE_WITH_REAL_URL
.venv/bin/python scripts/update_config.py --config ./config.toml --set llm.api_key=sk-REPLACE_WITH_REAL_KEY

.venv/bin/python scripts/update_config.py --config ./config.toml --set vlm.model=REPLACE_WITH_REAL_MODEL
.venv/bin/python scripts/update_config.py --config ./config.toml --set vlm.base_url=REPLACE_WITH_REAL_URL
.venv/bin/python scripts/update_config.py --config ./config.toml --set vlm.api_key=sk-REPLACE_WITH_REAL_KEY
```

Optional but common:
  • `search_media.pexels_api_key` for searching media
  • TTS provider keys under `generate_voiceover.providers.*` (choose one provider)
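After filling the keys, a quick scan confirms no `REPLACE_WITH_REAL_*` placeholder survived. `check_placeholders` is a hypothetical helper for this sketch; the demo below runs it against a throwaway file rather than your real config.

```shell
#!/bin/sh
# check_placeholders: hypothetical post-configuration sanity check.
# Fails while any REPLACE_WITH_REAL placeholder from the commands
# above is still present in the given file.
check_placeholders() {
  ! grep -q "REPLACE_WITH_REAL" "$1"
}

# Demo on a temp file containing an unfilled placeholder.
cfg=$(mktemp)
printf '[llm]\nmodel = "REPLACE_WITH_REAL_MODEL"\n' > "$cfg"
check_placeholders "$cfg" && echo "config looks filled in" || echo "config still contains placeholders"
rm -f "$cfg"
```

Against the real checkout, run `check_placeholders ./config.toml` before starting services.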

Verification

Run these checks before saying installation is complete:

```bash
.venv/bin/pip check
PYTHONPATH=src .venv/bin/python -c "from open_storyline.config import load_settings; load_settings('config.toml'); print('config_ok')"
```

Also confirm key resources exist:

```bash
test -f .storyline/models/transnetv2-pytorch-weights.pth
test -d resource/bgms
```

Start Services

There are two common paths; manual start is shown here, and the repo's `run.sh` is the other. These are long-running processes: do not wait for them to exit. Treat successful startup log lines or confirmed listening ports as success, and keep the services running in separate shells/sessions as needed.
Manual start:

```bash
PYTHONPATH=src .venv/bin/python -m open_storyline.mcp.server
```

In a second shell:

```bash
PYTHONPATH=src .venv/bin/python -m uvicorn agent_fastapi:app --host 127.0.0.1 --port 8005
```
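Since success means "listening", not "exited", a port probe is the right readiness check. `check_port` is a hypothetical helper; it deliberately uses the system `python3` so it works even before `.venv` exists.

```shell
#!/bin/sh
# check_port: hypothetical liveness probe for the long-running
# services above. Succeeds only if something accepts a TCP
# connection on 127.0.0.1:<port>.
check_port() {
  python3 -c "import socket, sys
s = socket.socket()
s.settimeout(2)
sys.exit(0 if s.connect_ex(('127.0.0.1', int(sys.argv[1]))) == 0 else 1)" "$1"
}

check_port 8005 && echo "web server listening on 8005" || echo "web server not up yet"
```

Poll this in a loop after launching each service rather than waiting on the foreground process.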

Expected Outputs

After a successful install:
  • `.venv/` exists
  • MCP listens on the configured local port (commonly `127.0.0.1:8001`)
  • Web listens on the configured web port (commonly `127.0.0.1:8005`, though `run.sh` defaults may differ)

Common Problems

`download.sh` is slow or interrupted


Symptom:
  • Large downloads stall or reconnect
Fix:
  • Let `wget` continue; it supports resume behavior here
  • Verify extracted outputs instead of trusting the progress meter
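The "verify extracted outputs" advice can be made concrete by checking the same paths the Verification section uses. `verify_assets` is a hypothetical helper, not a repo script.

```shell
#!/bin/sh
# verify_assets: hypothetical check that the download actually
# produced the expected files, rather than trusting the progress
# meter. Run from the repo root.
verify_assets() {
  [ -f .storyline/models/transnetv2-pytorch-weights.pth ] && [ -d resource/bgms ]
}

verify_assets && echo "assets_ok" || echo "assets missing or incomplete; re-run download.sh"
```

A failing check after an apparently finished download usually means the archive extraction was cut short.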

Web/MCP server fails to bind

Symptom:
  • `operation not permitted` while binding `127.0.0.1` or `0.0.0.0`
Fix:
  • In agent sandboxes, request permission to open local listening ports
  • Prefer `127.0.0.1` over `0.0.0.0` unless external access is required

Response Pattern

When reporting status to the user, separate:
  • what is installed
  • what is still downloading
  • what config is still missing
  • what address the service is listening on
Do not say "installation complete" if only the Python packages are installed but the resource bundle is still missing.