# aliyun-qwen-livetranslate

Category: provider
Model Studio Qwen LiveTranslate

## Validation
```bash
mkdir -p output/aliyun-qwen-livetranslate
python -m py_compile skills/ai/audio/aliyun-qwen-livetranslate/scripts/prepare_livetranslate_request.py && echo "py_compile_ok" > output/aliyun-qwen-livetranslate/validate.txt
```

Pass criteria: the command exits 0 and `output/aliyun-qwen-livetranslate/validate.txt` is generated.

## Output And Evidence
- Save translation session payloads and response summaries under `output/aliyun-qwen-livetranslate/`.
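The convention above can be sketched as a small helper. This is a minimal illustration, not part of the skill's scripts; the `save_artifact` name and the JSON file layout are assumptions, only the output directory comes from this document.

```python
import json
from pathlib import Path

# Output directory named by this skill's convention.
OUT_DIR = Path("output/aliyun-qwen-livetranslate")

def save_artifact(name: str, payload: dict) -> Path:
    """Persist a session payload or response summary as pretty-printed JSON."""
    OUT_DIR.mkdir(parents=True, exist_ok=True)
    path = OUT_DIR / f"{name}.json"
    path.write_text(json.dumps(payload, ensure_ascii=False, indent=2), encoding="utf-8")
    return path

# Example: record the request payload for a zh -> en session.
saved = save_artifact("request", {"source_language": "zh", "target_language": "en"})
```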
## Critical model names

Use one of these exact model strings:

- `qwen3-livetranslate-flash`
- `qwen3-livetranslate-flash-realtime`
## Typical use
- Chinese/English meeting interpretation
- Live subtitles in another language
- Call-center agent assist with translated captions
## Normalized interface (audio.livetranslate)

### Request

- `model` (string, optional): default `qwen3-livetranslate-flash`
- `source_language` (string, required)
- `target_language` (string, required)
- `audio_format` (string, optional): e.g. `pcm`
- `sample_rate` (int, optional): e.g. `16000`
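The request fields above can be assembled with a small builder. A minimal sketch, assuming the normalized interface is a flat JSON object; the `build_livetranslate_request` name is hypothetical, while the field names, defaults, and required/optional split come from the list above.

```python
def build_livetranslate_request(
    source_language: str,
    target_language: str,
    model: str = "qwen3-livetranslate-flash",  # default per the interface
    audio_format: str = "pcm",                 # example value from the docs
    sample_rate: int = 16000,                  # example value from the docs
) -> dict:
    """Assemble a normalized audio.livetranslate request payload."""
    if not source_language or not target_language:
        raise ValueError("source_language and target_language are required")
    return {
        "model": model,
        "source_language": source_language,
        "target_language": target_language,
        "audio_format": audio_format,
        "sample_rate": sample_rate,
    }

req = build_livetranslate_request("zh", "en")
```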
### Response

- `translated_text` (string)
- `source_text` (string, optional)
- `audio_url` or `audio_chunk` (optional, model dependent)
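Since only `translated_text` is guaranteed, a response summary has to treat the other fields as optional. A sketch under that assumption; the `summarize_response` helper and the sample payload are illustrative, the field names come from the list above.

```python
def summarize_response(resp: dict) -> dict:
    """Keep only the normalized fields; translated_text is the one guaranteed key."""
    summary = {"translated_text": resp["translated_text"]}
    # source_text, audio_url, and audio_chunk may each be absent,
    # depending on the model variant.
    for key in ("source_text", "audio_url", "audio_chunk"):
        if key in resp:
            summary[key] = resp[key]
    return summary

summary = summarize_response(
    {"translated_text": "Hello", "source_text": "你好"}
)
```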
## Quick start

```bash
python skills/ai/audio/aliyun-qwen-livetranslate/scripts/prepare_livetranslate_request.py \
  --source-language zh \
  --target-language en \
  --output output/aliyun-qwen-livetranslate/request.json
```

## Notes
- Prefer the realtime model for continuous streaming sessions.
- Prefer the non-realtime flash model for simpler integration and lower client complexity.
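The two notes reduce to a one-line chooser. The `pick_model` helper name is hypothetical; the model strings are the exact ones this document lists.

```python
def pick_model(continuous_streaming: bool) -> str:
    """Realtime variant for continuous streaming sessions, plain flash otherwise."""
    if continuous_streaming:
        return "qwen3-livetranslate-flash-realtime"
    return "qwen3-livetranslate-flash"
```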
## References

references/sources.md