hugging-face-model-trainer
# TRL Training on Hugging Face Jobs

## Overview
Train language models using TRL (Transformer Reinforcement Learning) on fully managed Hugging Face infrastructure. No local GPU setup required—models train on cloud GPUs and results are automatically saved to the Hugging Face Hub.
TRL provides multiple training methods:
- SFT (Supervised Fine-Tuning) - Standard instruction tuning
- DPO (Direct Preference Optimization) - Alignment from preference data
- GRPO (Group Relative Policy Optimization) - Online RL training
- Reward Modeling - Train reward models for RLHF
For detailed TRL method documentation:
```python
hf_doc_search("your query", product="trl")
hf_doc_fetch("https://huggingface.co/docs/trl/sft_trainer")  # SFT
hf_doc_fetch("https://huggingface.co/docs/trl/dpo_trainer")  # DPO
# etc.
```
**See also:** `references/training_methods.md` for method overviews and selection guidance
## When to Use This Skill
Use this skill when users want to:
- Fine-tune language models on cloud GPUs without local infrastructure
- Train with TRL methods (SFT, DPO, GRPO, etc.)
- Run training jobs on Hugging Face Jobs infrastructure
- Convert trained models to GGUF for local deployment (Ollama, LM Studio, llama.cpp)
- Ensure trained models are permanently saved to the Hub
- Use modern workflows with optimized defaults
## Key Directives
When assisting with training jobs:
1. **ALWAYS use `hf_jobs()` MCP tool** - Submit jobs using `hf_jobs("uv", {...})`, NOT bash `trl-jobs` commands. The `script` parameter accepts Python code directly. Do NOT save to local files unless the user explicitly requests it. Pass the script content as a string to `script`. If the user asks to "train a model", "fine-tune", or similar, you MUST create the training script AND submit the job immediately using `hf_jobs()`.
2. **Always include Trackio** - Every training script should include Trackio for real-time monitoring. Use the example scripts in `scripts/` as templates.
3. **Provide job details after submission** - After submitting, provide the job ID, monitoring URL, estimated time, and a note that the user can request status checks later.
4. **Use example scripts as templates** - Reference `scripts/train_sft_example.py`, `scripts/train_dpo_example.py`, etc. as starting points.
## Local Script Dependencies
To run scripts locally (like `estimate_cost.py`), install dependencies:

```bash
pip install -r requirements.txt
```

## Prerequisites Checklist
Before starting any training job, verify:
### ✅ Account & Authentication
- Hugging Face Account with a Pro, Team, or Enterprise plan (Jobs require a paid plan)
- Authenticated login: check with `hf_whoami()`
- HF_TOKEN for Hub push ⚠️ CRITICAL - The training environment is ephemeral; push to the Hub or ALL training results are lost
- Token must have write permissions
- MUST pass `secrets={"HF_TOKEN": "$HF_TOKEN"}` in the job config to make the token available (the `$HF_TOKEN` syntax references your actual token value)
### ✅ Dataset Requirements
- Dataset must exist on the Hub or be loadable via `datasets.load_dataset()`
- Format must match the training method (SFT: "messages"/text/prompt-completion; DPO: chosen/rejected; GRPO: prompt-only)
- ALWAYS validate unknown datasets before GPU training to prevent format failures (see the Dataset Validation section below)
- Size appropriate for the hardware (demo: 50-100 examples on t4-small; production: 1K-10K+ on a10g-large/a100-large)
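The per-method formats above can be pictured as minimal example records (illustrative shapes only; the field values are made up):

```python
# Minimal example records for each training method's expected format
# (illustrative shapes only; values are made up).

sft_example = {  # SFT conversational "messages" format
    "messages": [
        {"role": "user", "content": "What is 2+2?"},
        {"role": "assistant", "content": "4"},
    ]
}

dpo_example = {  # DPO requires exactly these three columns
    "prompt": "What is 2+2?",
    "chosen": "4",
    "rejected": "5",
}

grpo_example = {"prompt": "What is 2+2?"}  # GRPO is prompt-only

print(sorted(dpo_example))  # ['chosen', 'prompt', 'rejected']
```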
### ⚠️ Critical Settings
- Timeout must exceed expected training time - Default 30min is TOO SHORT for most training. Minimum recommended: 1-2 hours. Job fails and loses all progress if timeout is exceeded.
- Hub push must be enabled - Config: `push_to_hub=True`, `hub_model_id="username/model-name"`; Job: `secrets={"HF_TOKEN": "$HF_TOKEN"}`
## Asynchronous Job Guidelines
⚠️ IMPORTANT: Training jobs run asynchronously and can take hours
### Action Required
When user requests training:
1. Create the training script with Trackio included (use `scripts/train_sft_example.py` as a template)
2. Submit immediately using the `hf_jobs()` MCP tool with the script content inline - don't save to a file unless the user requests it
3. Report the submission with job ID, monitoring URL, and estimated time
4. Wait for the user to request status checks - don't poll automatically
### Ground Rules
- Jobs run in background - Submission returns immediately; training continues independently
- Initial logs delayed - Can take 30-60 seconds for logs to appear
- User checks status - Wait for user to request status updates
- Avoid polling - Check logs only on user request; provide monitoring links instead
### After Submission
Provide to user:
- ✅ Job ID and monitoring URL
- ✅ Expected completion time
- ✅ Trackio dashboard URL
- ✅ Note that user can request status checks later
Example Response:
```
✅ Job submitted successfully!
Job ID: abc123xyz
Monitor: https://huggingface.co/jobs/username/abc123xyz
Expected time: ~2 hours
Estimated cost: ~$10
```

The job is running in the background. Ask me to check status/logs when ready!

## Quick Start: Three Approaches
💡 **Tip for Demos:** For quick demos on smaller GPUs (t4-small), omit `eval_dataset` and `eval_strategy` to save ~40% memory. You'll still see training loss and learning progress.

### Sequence Length Configuration
**TRL config classes use `max_length` (not `max_seq_length`)** to control the tokenized sequence length:

```python
# ✅ CORRECT - if you need to set the sequence length
SFTConfig(max_length=512)    # Truncate sequences to 512 tokens
DPOConfig(max_length=2048)   # Longer context (2048 tokens)

# ❌ WRONG - this parameter doesn't exist
SFTConfig(max_seq_length=512)  # TypeError!
```

**Default behavior:** `max_length=1024` (truncates from the right). This works well for most training.

**When to override:**
- **Longer context**: Set it higher (e.g., `max_length=2048`)
- **Memory constraints**: Set it lower (e.g., `max_length=512`)
- **Vision models**: Set `max_length=None` (prevents cutting image tokens)

**Usually you don't need to set this parameter at all** - the examples below use the sensible default.

### Approach 1: UV Scripts (Recommended—Default Choice)
UV scripts use PEP 723 inline dependencies for clean, self-contained training. This is the primary approach for Claude Code.
```python
hf_jobs("uv", {
    "script": """
# /// script
# dependencies = ["trl>=0.12.0", "peft>=0.7.0", "trackio"]
# ///

from datasets import load_dataset
from peft import LoraConfig
from trl import SFTTrainer, SFTConfig
import trackio

dataset = load_dataset("trl-lib/Capybara", split="train")

# Create train/eval split for monitoring
dataset_split = dataset.train_test_split(test_size=0.1, seed=42)

trainer = SFTTrainer(
    model="Qwen/Qwen2.5-0.5B",
    train_dataset=dataset_split["train"],
    eval_dataset=dataset_split["test"],
    peft_config=LoraConfig(r=16, lora_alpha=32),
    args=SFTConfig(
        output_dir="my-model",
        push_to_hub=True,
        hub_model_id="username/my-model",
        num_train_epochs=3,
        eval_strategy="steps",
        eval_steps=50,
        report_to="trackio",
        project="meaningful_project_name",  # project name for the training run (trackio)
        run_name="meaningful_run_name",     # descriptive name for the specific training run (trackio)
    )
)
trainer.train()
trainer.push_to_hub()
""",
    "flavor": "a10g-large",
    "timeout": "2h",
    "secrets": {"HF_TOKEN": "$HF_TOKEN"}
})
```

**Benefits:** Direct MCP tool usage, clean code, dependencies declared inline (PEP 723), no file saving required, full control

**When to use:** Default choice for all training tasks in Claude Code, custom training logic, any scenario requiring `hf_jobs()`

### Working with Scripts
⚠️ **Important:** The `script` parameter accepts either inline code (as shown above) OR a URL. Local file paths do NOT work.

**Why local paths don't work:**
Jobs run in isolated Docker containers without access to your local filesystem. Scripts must be:
- Inline code (recommended for custom training)
- Publicly accessible URLs
- Private repo URLs (with HF_TOKEN)

**Common mistakes:**

```python
# ❌ These will all fail
hf_jobs("uv", {"script": "train.py"})
hf_jobs("uv", {"script": "./scripts/train.py"})
hf_jobs("uv", {"script": "/path/to/train.py"})
```

**Correct approaches:**

```python
# ✅ Inline code (recommended)
hf_jobs("uv", {"script": "# /// script\n# dependencies = [...]\n# ///\n\n<your code>"})

# ✅ From Hugging Face Hub
hf_jobs("uv", {"script": "https://huggingface.co/user/repo/resolve/main/train.py"})

# ✅ From GitHub
hf_jobs("uv", {"script": "https://raw.githubusercontent.com/user/repo/main/train.py"})

# ✅ From Gist
hf_jobs("uv", {"script": "https://gist.githubusercontent.com/user/id/raw/train.py"})
```

**To use local scripts:** Upload to the HF Hub first:

```bash
huggingface-cli repo create my-training-scripts --type model
huggingface-cli upload my-training-scripts ./train.py train.py
```

### Approach 2: TRL Maintained Scripts (Official Examples)
TRL provides battle-tested scripts for all methods. They can be run from URLs:

```python
hf_jobs("uv", {
    "script": "https://github.com/huggingface/trl/blob/main/trl/scripts/sft.py",
    "script_args": [
        "--model_name_or_path", "Qwen/Qwen2.5-0.5B",
        "--dataset_name", "trl-lib/Capybara",
        "--output_dir", "my-model",
        "--push_to_hub",
        "--hub_model_id", "username/my-model"
    ],
    "flavor": "a10g-large",
    "timeout": "2h",
    "secrets": {"HF_TOKEN": "$HF_TOKEN"}
})
```

**Benefits:** No code to write, maintained by the TRL team, production-tested

**When to use:** Standard TRL training, quick experiments, no need for custom code

**Available:** Scripts are available from https://github.com/huggingface/trl/tree/main/examples/scripts

### Finding More UV Scripts on Hub
The `uv-scripts` organization provides ready-to-use UV scripts stored as datasets on the Hugging Face Hub:

```python
# Discover available UV script collections
dataset_search({"author": "uv-scripts", "sort": "downloads", "limit": 20})

# Explore a specific collection
hub_repo_details(["uv-scripts/classification"], repo_type="dataset", include_readme=True)
```

**Popular collections:** ocr, classification, synthetic-data, vllm, dataset-creation

### Approach 3: HF Jobs CLI (Direct Terminal Commands)
When the `hf_jobs()` MCP tool is unavailable, use the `hf jobs` CLI directly.

⚠️ **CRITICAL: CLI Syntax Rules**

```bash
# ✅ CORRECT syntax - flags BEFORE the script URL
hf jobs uv run --flavor a10g-large --timeout 2h --secrets HF_TOKEN "https://example.com/train.py"

# ❌ WRONG - "run uv" instead of "uv run"
hf jobs run uv "https://example.com/train.py" --flavor a10g-large

# ❌ WRONG - flags AFTER the script URL (they will be ignored!)
hf jobs uv run "https://example.com/train.py" --flavor a10g-large

# ❌ WRONG - "--secret" instead of "--secrets" (plural)
hf jobs uv run --secret HF_TOKEN "https://example.com/train.py"
```

**Key syntax rules:**
1. The command order is `hf jobs uv run` (NOT `hf jobs run uv`)
2. All flags (`--flavor`, `--timeout`, `--secrets`) must come BEFORE the script URL
3. Use `--secrets` (plural), not `--secret`
4. The script URL must be the last positional argument

**Complete CLI example:**

```bash
hf jobs uv run \
  --flavor a10g-large \
  --timeout 2h \
  --secrets HF_TOKEN \
  "https://huggingface.co/user/repo/resolve/main/train.py"
```

**Check job status via CLI:**

```bash
hf jobs ps                # List all jobs
hf jobs logs <job-id>     # View logs
hf jobs inspect <job-id>  # Job details
hf jobs cancel <job-id>   # Cancel a job
```

### Approach 4: TRL Jobs Package (Simplified Training)
The `trl-jobs` package provides optimized defaults and one-liner training.

```bash
# Install
pip install trl-jobs

# Train with SFT (simplest possible)
trl-jobs sft \
  --model_name Qwen/Qwen2.5-0.5B \
  --dataset_name trl-lib/Capybara
```

**Benefits:** Pre-configured settings, automatic Trackio integration, automatic Hub push, one-line commands

**When to use:** User working directly in a terminal (not a Claude Code context), quick local experimentation

**Repository:** https://github.com/huggingface/trl-jobs

⚠️ **In a Claude Code context, prefer the `hf_jobs()` MCP tool (Approach 1) when available.**

## Hardware Selection
| Model Size | Recommended Hardware | Cost (approx/hr) | Use Case |
|---|---|---|---|
| <1B params | | ~$0.75 | Demos, quick tests only without eval steps |
| 1-3B params | | ~$1.50-2.50 | Development |
| 3-7B params | | ~$3.50-5.00 | Production training |
| 7-13B params | | ~$5-10 | Large models (use LoRA) |
| 13B+ params | | ~$10-20 | Very large (use LoRA) |
**GPU Flavors:** cpu-basic/upgrade/performance/xl, t4-small/medium, l4x1/x4, a10g-small/large/largex2/largex4, a100-large, h100/h100x8

**Guidelines:**
- Use LoRA/PEFT for models >7B to reduce memory
- Multi-GPU automatically handled by TRL/Accelerate
- Start with smaller hardware for testing
**See:** `references/hardware_guide.md` for detailed specifications

## Critical: Saving Results to Hub
⚠️ **EPHEMERAL ENVIRONMENT—MUST PUSH TO HUB**

The Jobs environment is temporary. All files are deleted when the job ends. If the model isn't pushed to the Hub, ALL TRAINING IS LOST.
### Required Configuration
**In training script/config:**

```python
SFTConfig(
    push_to_hub=True,
    hub_model_id="username/model-name",  # MUST specify
    hub_strategy="every_save",           # Optional: push checkpoints
)
```

**In job submission:**

```python
{
    "secrets": {"HF_TOKEN": "$HF_TOKEN"}  # Enables authentication
}
```

### Verification Checklist
Before submitting:

- `push_to_hub=True` set in the config
- `hub_model_id` includes username/repo-name
- `secrets` parameter includes HF_TOKEN
- User has write access to the target repo

**See:** `references/hub_saving.md` for detailed troubleshooting

## Timeout Management
⚠️ DEFAULT: 30 MINUTES—TOO SHORT FOR TRAINING
### Setting Timeouts
```python
{
    "timeout": "2h"  # 2 hours (formats: "90m", "2h", "1.5h", or seconds as an integer)
}
```

### Timeout Guidelines
| Scenario | Recommended | Notes |
|---|---|---|
| Quick demo (50-100 examples) | 10-30 min | Verify setup |
| Development training | 1-2 hours | Small datasets |
| Production (3-7B model) | 4-6 hours | Full datasets |
| Large model with LoRA | 3-6 hours | Depends on dataset |
Always add 20-30% buffer for model/dataset loading, checkpoint saving, Hub push operations, and network delays.
On timeout: Job killed immediately, all unsaved progress lost, must restart from beginning
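The buffer rule above is simple arithmetic; a quick sketch (an illustrative helper, not part of the skill's scripts):

```python
import math

def recommended_timeout(est_minutes: float, buffer: float = 0.3) -> str:
    """Pad an estimated training time by a safety buffer (default 30%)
    and return it in the "<minutes>m" format accepted by the timeout field."""
    return f"{math.ceil(est_minutes * (1 + buffer))}m"

print(recommended_timeout(90))        # "117m" (90 min estimate + 30%)
print(recommended_timeout(240, 0.2))  # "288m" (4 h estimate + 20%)
```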
## Cost Estimation
**Offer to estimate cost when planning jobs with known parameters.** Use `scripts/estimate_cost.py`:

```bash
uv run scripts/estimate_cost.py \
  --model meta-llama/Llama-2-7b-hf \
  --dataset trl-lib/Capybara \
  --hardware a10g-large \
  --dataset-size 16000 \
  --epochs 3
```

The output includes estimated time, cost, recommended timeout (with buffer), and optimization suggestions.

**When to offer:** The user is planning a job, asks about cost/time, is choosing hardware, or the job will run >1 hour or cost >$5
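The estimate boils down to hourly rate times wall-clock time; a back-of-envelope sketch (the hourly rates are approximate assumptions, not quoted prices):

```python
def job_cost(rate_per_hour: float, hours: float) -> float:
    """Back-of-envelope job cost: GPU hourly rate * wall-clock hours."""
    return round(rate_per_hour * hours, 2)

# e.g. ~2 hours on a10g-large at an assumed ~$5/hr
print(job_cost(5.0, 2))        # 10.0
# the same job with a 30% timeout buffer budgeted
print(job_cost(5.0, 2 * 1.3))  # 13.0
```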
## Example Training Scripts
Production-ready templates with all best practices. Load these scripts to get the configuration right:

- `scripts/train_sft_example.py` - Complete SFT training with Trackio, LoRA, checkpoints
- `scripts/train_dpo_example.py` - DPO training for preference learning
- `scripts/train_grpo_example.py` - GRPO training for online RL

These scripts demonstrate proper Hub saving, Trackio integration, checkpoint management, and optimized parameters. Pass their content inline to `hf_jobs()` or use them as templates for custom scripts.

## Monitoring and Tracking
Trackio provides real-time metrics visualization. See `references/trackio_guide.md` for the complete setup guide.

**Key points:**
- Add `trackio` to the dependencies
- Configure the trainer with `report_to="trackio"` and `run_name="meaningful_name"`
### Trackio Configuration Defaults
**Use sensible defaults unless the user specifies otherwise.** When generating training scripts with Trackio:

**Default configuration:**
- Space ID: `{username}/trackio` (use "trackio" as the default space name)
- Run naming: Unless otherwise specified, name the run in a way the user will recognize (e.g., descriptive of the task, model, or purpose)
- Config: Keep it minimal - only include hyperparameters and model/dataset info
- Project name: Use a project name to associate runs with a particular project

**User overrides:** If the user requests a specific Trackio configuration (custom space, run naming, grouping, or additional config), apply their preferences instead of the defaults.

This is useful for managing multiple jobs with the same configuration or keeping training scripts portable.

See `references/trackio_guide.md` for complete documentation, including grouping runs for experiments.

### Check Job Status
```python
# List all jobs
hf_jobs("ps")

# Inspect a specific job
hf_jobs("inspect", {"job_id": "your-job-id"})

# View logs
hf_jobs("logs", {"job_id": "your-job-id"})
```

**Remember:** Wait for the user to request status checks. Avoid polling repeatedly.

## Dataset Validation
Validate dataset format BEFORE launching GPU training to prevent the #1 cause of training failures: format mismatches.
### Why Validate
- 50%+ of training failures are due to dataset format issues
- DPO is especially strict: it requires exact column names (`prompt`, `chosen`, `rejected`)
- A failed GPU job wastes $1-10 and 30-60 minutes
- Validation on a CPU costs ~$0.01 and takes <1 minute
### When to Validate
**ALWAYS validate for:**
- Unknown or custom datasets
- DPO training (CRITICAL - 90% of datasets need mapping)
- Any dataset not explicitly TRL-compatible

**Skip validation for known TRL datasets:**
- `trl-lib/Capybara`, `trl-lib/ultrachat_200k`, `HuggingFaceH4/ultrachat_200k`, etc.
### Usage
```python
hf_jobs("uv", {
    "script": "https://huggingface.co/datasets/mcp-tools/skills/raw/main/dataset_inspector.py",
    "script_args": ["--dataset", "username/dataset-name", "--split", "train"]
})
```

The script is fast and will usually complete synchronously.
### Reading Results
The output shows compatibility for each training method:

- `✓ READY` - Dataset is compatible; use it directly
- `✗ NEEDS MAPPING` - Compatible but needs preprocessing (mapping code provided)
- `✗ INCOMPATIBLE` - Cannot be used for this method

When mapping is needed, the output includes a **"MAPPING CODE"** section with copy-paste-ready Python code.
### Example Workflow
```python
# 1. Inspect the dataset (costs ~$0.01, <1 min on CPU)
hf_jobs("uv", {
    "script": "https://huggingface.co/datasets/mcp-tools/skills/raw/main/dataset_inspector.py",
    "script_args": ["--dataset", "argilla/distilabel-math-preference-dpo", "--split", "train"]
})

# 2. Check the output markers:
#    ✓ READY → proceed with training
#    ✗ NEEDS MAPPING → apply the mapping code below
#    ✗ INCOMPATIBLE → choose a different method/dataset

# 3. If mapping is needed, apply it before training:
def format_for_dpo(example):
    return {
        'prompt': example['instruction'],
        'chosen': example['chosen_response'],
        'rejected': example['rejected_response'],
    }

dataset = dataset.map(format_for_dpo, remove_columns=dataset.column_names)

# 4. Launch the training job with confidence
```

### Common Scenario: DPO Format Mismatch
Most DPO datasets use non-standard column names. Example:

```
Dataset has:  instruction, chosen_response, rejected_response
DPO expects:  prompt, chosen, rejected
```

The validator detects this and provides exact mapping code to fix it.
## Converting Models to GGUF
After training, convert models to GGUF format for use with llama.cpp, Ollama, LM Studio, and other local inference tools.

**What is GGUF:**
- Optimized for CPU/GPU inference with llama.cpp
- Supports quantization (4-bit, 5-bit, 8-bit) to reduce model size
- Compatible with Ollama, LM Studio, Jan, GPT4All, llama.cpp
- Typically 2-8GB for 7B models (vs 14GB unquantized)

**When to convert:**
- Running models locally with Ollama or LM Studio
- Reducing model size with quantization
- Deploying to edge devices
- Sharing models for local-first use

**See:** `references/gguf_conversion.md` for the complete conversion guide, including a production-ready conversion script, quantization options, hardware requirements, usage examples, and troubleshooting.

**Quick conversion:**

```python
hf_jobs("uv", {
    "script": "<see references/gguf_conversion.md for complete script>",
    "flavor": "a10g-large",
    "timeout": "45m",
    "secrets": {"HF_TOKEN": "$HF_TOKEN"},
    "env": {
        "ADAPTER_MODEL": "username/my-finetuned-model",
        "BASE_MODEL": "Qwen/Qwen2.5-0.5B",
        "OUTPUT_REPO": "username/my-model-gguf"
    }
})
```
## Common Training Patterns
See `references/training_patterns.md` for detailed examples, including:

- Quick demo (5-10 minutes)
- Production with checkpoints
- Multi-GPU training
- DPO training (preference learning)
- GRPO training (online RL)
## Common Failure Modes
### Out of Memory (OOM)
**Fix (try in order):**
- Reduce batch size: `per_device_train_batch_size=1`, increase `gradient_accumulation_steps=8`. Effective batch size is `gradient_accumulation_steps` × `per_device_train_batch_size`; for best performance, keep the effective batch size close to 128.
- Enable `gradient_checkpointing=True`
- Upgrade hardware: t4-small → l4x1, a10g-small → a10g-large, etc.
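The interplay between per-device batch size and gradient accumulation can be sketched as a small helper (our own illustration, not a TRL API):

```python
def accumulation_steps(target_effective_batch: int,
                       per_device_batch: int,
                       num_gpus: int = 1) -> int:
    """Gradient accumulation steps so that
    per_device_batch * num_gpus * steps approximates target_effective_batch."""
    return max(1, target_effective_batch // (per_device_batch * num_gpus))

# per_device_train_batch_size=1 on one GPU, targeting an effective batch of 128:
print(accumulation_steps(128, 1))  # → 128
```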
### Dataset Misformatted
**Fix:**
- Validate first with the dataset inspector:

```bash
uv run https://huggingface.co/datasets/mcp-tools/skills/raw/main/dataset_inspector.py \
  --dataset name --split train
```

- Check the output for compatibility markers (✓ READY, ✗ NEEDS MAPPING, ✗ INCOMPATIBLE)
- Apply the mapping code from the inspector output if needed
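As an illustration of the kind of shape check the inspector performs, a minimal sketch for TRL's conversational format (the helper itself is hypothetical):

```python
def looks_like_sft_record(example: dict) -> bool:
    """Rough check for TRL's conversational SFT format:
    a non-empty 'messages' list of {'role', 'content'} dicts."""
    msgs = example.get("messages")
    if not isinstance(msgs, list) or not msgs:
        return False
    return all(isinstance(m, dict) and {"role", "content"} <= m.keys() for m in msgs)

print(looks_like_sft_record({"messages": [{"role": "user", "content": "hi"}]}))  # → True
print(looks_like_sft_record({"text": "hi"}))  # → False
```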
### Job Timeout
**Fix:**
- Check logs for the actual runtime: `hf_jobs("logs", {"job_id": "..."})`
- Increase the timeout with a buffer: `"timeout": "3h"` (add 30% to the estimated time)
- Or reduce training: lower `num_train_epochs`, use a smaller dataset, set `max_steps`
- Save checkpoints: `save_strategy="steps"`, `save_steps=500`, `hub_strategy="every_save"`

**Note:** The default 30-minute timeout is insufficient for real training; allow at least 1-2 hours.
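The 30% buffer rule can be sketched as follows (integer arithmetic to avoid float rounding; the helper is our own, not part of hf_jobs):

```python
def timeout_with_buffer(estimated_minutes: int, buffer_pct: int = 30) -> str:
    """Pad an estimated runtime by buffer_pct percent (rounded up)
    and format it for the hf_jobs timeout field."""
    total = (estimated_minutes * (100 + buffer_pct) + 99) // 100  # ceiling division
    return f"{total}m"

# A run estimated at 90 minutes gets roughly two hours of wall-clock budget:
print(timeout_with_buffer(90))  # → 117m
```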
### Hub Push Failures
**Fix:**
- Add to the job: `secrets={"HF_TOKEN": "$HF_TOKEN"}`
- Add to the config: `push_to_hub=True`, `hub_model_id="username/model-name"`
- Verify auth: `mcp__huggingface__hf_whoami()`
- Check that the token has write permissions and the repo exists (or set `hub_private_repo=True`)
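Taken together, these fixes amount to two pieces: a secret on the job and push settings in the trainer config. A sketch, with placeholder repo names:

```python
# Job-level: the token must be passed as a secret or the push cannot authenticate.
job_payload = {
    "flavor": "a10g-large",
    "timeout": "2h",
    "secrets": {"HF_TOKEN": "$HF_TOKEN"},
}

# Trainer-config level (e.g. kwargs for SFTConfig): enable the push and name the repo.
config_kwargs = {
    "push_to_hub": True,
    "hub_model_id": "username/model-name",  # placeholder
    "hub_private_repo": True,  # if the target repo should stay private
}
```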
### Missing Dependencies
**Fix:**
Add the missing package to the PEP 723 header:

```python
# /// script
# dependencies = ["trl>=0.12.0", "peft>=0.7.0", "trackio", "missing-package"]
# ///
```

## Troubleshooting
**Common issues:**
- Job times out → Increase the timeout, reduce epochs/dataset, use a smaller model/LoRA
- Model not saved to Hub → Check `push_to_hub=True`, `hub_model_id`, `secrets=HF_TOKEN`
- Out of Memory (OOM) → Reduce batch size, increase gradient accumulation, enable LoRA, use a larger GPU
- Dataset format error → Validate with the dataset inspector (see Dataset Validation section)
- Import/module errors → Add a PEP 723 header with dependencies, verify the format
- Authentication errors → Check `mcp__huggingface__hf_whoami()`, token permissions, the secrets parameter
See `references/troubleshooting.md` for the complete troubleshooting guide.

## Resources

### References (In This Skill)
- `references/training_methods.md` - Overview of SFT, DPO, GRPO, KTO, PPO, Reward Modeling
- `references/training_patterns.md` - Common training patterns and examples
- `references/gguf_conversion.md` - Complete GGUF conversion guide
- `references/trackio_guide.md` - Trackio monitoring setup
- `references/hardware_guide.md` - Hardware specs and selection
- `references/hub_saving.md` - Hub authentication troubleshooting
- `references/troubleshooting.md` - Common issues and solutions
### Scripts (In This Skill)
- `scripts/train_sft_example.py` - Production SFT template
- `scripts/train_dpo_example.py` - Production DPO template
- `scripts/train_grpo_example.py` - Production GRPO template
- `scripts/estimate_cost.py` - Estimate time and cost (offer when appropriate)
- `scripts/convert_to_gguf.py` - Complete GGUF conversion script
### External Scripts
- Dataset Inspector - Validate dataset format before training (use via `uv run` or `hf_jobs`)
### External Links
## Key Takeaways
- **Submit scripts inline** - The `script` parameter accepts Python code directly; no file saving required unless the user requests it
- **Jobs are asynchronous** - Don't wait/poll; let the user check when ready
- **Always set timeout** - The default 30 min is insufficient; a minimum of 1-2 hours is recommended
- **Always enable Hub push** - The environment is ephemeral; without push, all results are lost
- **Include Trackio** - Use the example scripts as templates for real-time monitoring
- **Offer cost estimation** - When parameters are known, use `scripts/estimate_cost.py`
- **Use UV scripts (Approach 1)** - Default to `hf_jobs("uv", {...})` with inline scripts; use TRL maintained scripts for standard training; avoid bash `trl-jobs` commands in Claude Code
- **Use `hf_doc_fetch`/`hf_doc_search`** for the latest TRL documentation
- **Validate dataset format** before training with the dataset inspector (see Dataset Validation section)
- **Choose appropriate hardware** for model size; use LoRA for models >7B