hugging-face-vision-trainer
Vision Model Training on Hugging Face Jobs
Train object detection, image classification, and SAM/SAM2 segmentation models on managed cloud GPUs. No local GPU setup required—results are automatically saved to the Hugging Face Hub.
When to Use This Skill
Use this skill when users want to:
- Fine-tune object detection models (D-FINE, RT-DETR v2, DETR, YOLOS) on cloud GPUs or locally
- Fine-tune image classification models (timm: MobileNetV3, MobileViT, ResNet, ViT/DINOv3, or any Transformers classifier) on cloud GPUs or locally
- Fine-tune SAM or SAM2 models for segmentation / image matting using bbox or point prompts
- Train bounding-box detectors on custom datasets
- Train image classifiers on custom datasets
- Train segmentation models on custom mask datasets with prompts
- Run vision training jobs on Hugging Face Jobs infrastructure
- Ensure trained vision models are permanently saved to the Hub
Related Skills
- `hugging-face-jobs` — General HF Jobs infrastructure: token authentication, hardware flavors, timeout management, cost estimation, secrets, environment variables, scheduled jobs, and result persistence. Refer to the Jobs skill for any non-training-specific Jobs questions (e.g., "how do secrets work?", "what hardware is available?", "how do I pass tokens?").
- `hugging-face-model-trainer` — TRL-based language model training (SFT, DPO, GRPO). Use that skill for text/language model fine-tuning.
Local Script Execution
Helper scripts use PEP 723 inline dependencies. Run them with `uv run`:

```bash
uv run scripts/dataset_inspector.py --dataset username/dataset-name --split train
uv run scripts/estimate_cost.py --help
```

Prerequisites Checklist
Before starting any training job, verify:
Account & Authentication
- Hugging Face Account with Pro, Team, or Enterprise plan (Jobs require paid plan)
- Authenticated login: Check with `hf_whoami()` (tool) or `hf auth whoami` (terminal)
- Token has write permissions
- MUST pass token in job secrets — see directive #1 below for syntax (MCP tool vs Python API)
Dataset Requirements — Object Detection
- Dataset must exist on Hub
- Annotations must use the `objects` column with `bbox`, `category` (and optionally `area`) sub-fields
- Bboxes can be in xywh (COCO) or xyxy (Pascal VOC) format — auto-detected and converted
- Categories can be integers or strings — strings are auto-remapped to integer IDs
- `image_id` column is optional — generated automatically if missing
- ALWAYS validate unknown datasets before GPU training (see Dataset Validation section)
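For reference, the two bbox formats relate as follows. This is a minimal sketch, not the training script's actual code — the real detection logic is more involved, since the naive heuristic below cannot distinguish all xywh boxes from xyxy:

```python
def xyxy_to_xywh(box):
    """Convert [x0, y0, x1, y1] corners (Pascal VOC) to COCO [x, y, width, height]."""
    x0, y0, x1, y1 = box
    return [x0, y0, x1 - x0, y1 - y0]

def looks_like_xyxy(boxes):
    """Naive heuristic: in xyxy the 3rd/4th values are corners, so they must
    exceed the 1st/2nd. Ambiguous for many xywh boxes — illustration only."""
    return all(b[2] > b[0] and b[3] > b[1] for b in boxes)

boxes = [[10, 20, 110, 220], [5, 5, 55, 105]]   # xyxy corners
if looks_like_xyxy(boxes):
    boxes = [xyxy_to_xywh(b) for b in boxes]
# boxes is now [[10, 20, 100, 200], [5, 5, 50, 100]]
```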
Dataset Requirements — Image Classification
- Dataset must exist on Hub
- Must have an `image` column (PIL images) and a `label` column (integer class IDs or strings)
- The label column can be `ClassLabel` type (with names) or plain integers/strings — strings are auto-remapped
- Common column names auto-detected: `label`, `labels`, `class`, `fine_label`
- ALWAYS validate unknown datasets before GPU training (see Dataset Validation section)
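String-to-integer remapping works roughly like this (an illustrative sketch, not the script's exact code — sorting makes the mapping deterministic across runs):

```python
labels = ["cat", "dog", "cat", "bird"]   # string labels from the dataset

# Build a stable string -> integer mapping (sorted for determinism)
names = sorted(set(labels))
label2id = {name: i for i, name in enumerate(names)}
id2label = {i: name for name, i in label2id.items()}

int_labels = [label2id[l] for l in labels]
# names == ["bird", "cat", "dog"], int_labels == [1, 2, 1, 0]
```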
Dataset Requirements — SAM/SAM2 Segmentation
- Dataset must exist on Hub
- Must have an `image` column (PIL images) and a `mask` column (binary ground-truth segmentation mask)
- Must have a prompt — either:
  - A `prompt` column with JSON containing `{"bbox": [x0,y0,x1,y1]}` or `{"point": [x,y]}`
  - OR a dedicated `bbox` column with `[x0,y0,x1,y1]` values
  - OR a dedicated `point` column with `[x,y]` or `[[x,y],...]` values
- Bboxes should be in xyxy format (absolute pixel coordinates)
- Example dataset: `merve/MicroMat-mini` (image matting with bbox prompts)
- ALWAYS validate unknown datasets before GPU training (see Dataset Validation section)
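A `prompt` cell is just a JSON object; the sketch below shows the bbox form and how a consumer might decode it (illustrative only — the training script's parsing may differ):

```python
import json

# One dataset row's prompt cell, as stored (a JSON string)
cell = json.dumps({"bbox": [12.0, 30.0, 240.0, 310.0]})

# Decode and pick the prompt type
prompt = json.loads(cell)
if "bbox" in prompt:
    x0, y0, x1, y1 = prompt["bbox"]   # xyxy, absolute pixel coordinates
elif "point" in prompt:
    x, y = prompt["point"]
```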
Critical Settings
- Timeout must exceed expected training time — Default 30min is TOO SHORT. See directive #5 for recommended values.
- Hub push must be enabled — `push_to_hub=True`, `hub_model_id="username/model-name"`, token in `secrets`
Dataset Validation
Validate dataset format BEFORE launching GPU training to prevent the #1 cause of training failures: format mismatches.
ALWAYS validate for unknown/custom datasets or any dataset you haven't trained with before. Skip for `cppe-5` (the default in the training script).

Running the Inspector
Option 1: Via HF Jobs (recommended — avoids local SSL/dependency issues):
```python
hf_jobs("uv", {
    "script": "path/to/dataset_inspector.py",
    "script_args": ["--dataset", "username/dataset-name", "--split", "train"]
})
```

Option 2: Locally:

```bash
uv run scripts/dataset_inspector.py --dataset username/dataset-name --split train
```

Option 3: Via `HfApi().run_uv_job()` (if hf_jobs MCP unavailable):

```python
from huggingface_hub import HfApi
api = HfApi()
api.run_uv_job(
    script="scripts/dataset_inspector.py",
    script_args=["--dataset", "username/dataset-name", "--split", "train"],
    flavor="cpu-basic",
    timeout=300,
)
```

Reading Results
- `✓ READY` — Dataset is compatible, use directly
- `✗ NEEDS FORMATTING` — Needs preprocessing (mapping code provided in output)
Automatic Bbox Preprocessing
The object detection training script (`scripts/object_detection_training.py`) automatically handles bbox format detection (xyxy→xywh conversion), bbox sanitization, `image_id` generation, string category→integer remapping, and dataset truncation. No manual preprocessing needed — just ensure the dataset has `objects.bbox` and `objects.category` columns.

Training workflow
Copy this checklist and track progress:
Training Progress:
- [ ] Step 1: Verify prerequisites (account, token, dataset)
- [ ] Step 2: Validate dataset format (run dataset_inspector.py)
- [ ] Step 3: Ask user about dataset size and validation split
- [ ] Step 4: Prepare training script (OD: scripts/object_detection_training.py, IC: scripts/image_classification_training.py, SAM: scripts/sam_segmentation_training.py)
- [ ] Step 5: Save script locally, submit job, and report details

Step 1: Verify prerequisites
Follow the Prerequisites Checklist above.
Step 2: Validate dataset
Run the dataset inspector BEFORE spending GPU time. See "Dataset Validation" section above.
Step 3: Ask user preferences
ALWAYS use the AskUserQuestion tool with option-style format:
```python
AskUserQuestion({
    "questions": [
        {
            "question": "Do you want to run a quick test with a subset of the data first?",
            "header": "Dataset Size",
            "options": [
                {"label": "Quick test run (10% of data)", "description": "Faster, cheaper (~30-60 min, ~$2-5) to validate setup"},
                {"label": "Full dataset (Recommended)", "description": "Complete training for best model quality"}
            ],
            "multiSelect": false
        },
        {
            "question": "Do you want to create a validation split from the training data?",
            "header": "Split data",
            "options": [
                {"label": "Yes (Recommended)", "description": "Automatically split 15% of training data for validation"},
                {"label": "No", "description": "Use existing validation split from dataset"}
            ],
            "multiSelect": false
        },
        {
            "question": "Which GPU hardware do you want to use?",
            "header": "Hardware Flavor",
            "options": [
                {"label": "t4-small ($0.40/hr)", "description": "1x T4, 16 GB VRAM — sufficient for all OD models under 100M params"},
                {"label": "l4x1 ($0.80/hr)", "description": "1x L4, 24 GB VRAM — more headroom for large images or batch sizes"},
                {"label": "a10g-large ($1.50/hr)", "description": "1x A10G, 24 GB VRAM — faster training, more CPU/RAM"},
                {"label": "a100-large ($2.50/hr)", "description": "1x A100, 80 GB VRAM — fastest, for very large datasets or image sizes"}
            ],
            "multiSelect": false
        }
    ]
})
```

Step 4: Prepare training script
For object detection, use scripts/object_detection_training.py as the production-ready template. For image classification, use scripts/image_classification_training.py. For SAM/SAM2 segmentation, use scripts/sam_segmentation_training.py. All scripts use `HfArgumentParser` — all configuration is passed via CLI arguments in `script_args`, NOT by editing Python variables. For timm model details, see references/timm_trainer.md. For SAM2 training details, see references/finetune_sam2_trainer.md.

Step 5: Save script, submit job, and report
- Save the script locally to `submitted_jobs/` in the workspace root (create if needed) with a descriptive name like `training_<dataset>_<YYYYMMDD_HHMMSS>.py`. Tell the user the path.
- Submit using `hf_jobs` MCP tool (preferred) or `HfApi().run_uv_job()` — see directive #1 for both methods. Pass all config via `script_args`.
- Report the job ID (from `.id` attribute), monitoring URL, Trackio dashboard (`https://huggingface.co/spaces/{username}/trackio`), expected time, and estimated cost.
- Wait for user to request status checks — don't poll automatically. Training jobs run asynchronously and can take hours.
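The timestamped filename convention can be generated like this (a sketch of the naming scheme above; the sanitization step is an assumption for dataset IDs containing `/`):

```python
from datetime import datetime
from pathlib import Path

dataset = "cppe-5"
stamp = datetime.now().strftime("%Y%m%d_%H%M%S")
# Slashes in Hub dataset IDs would break the filename, so replace them
safe = dataset.replace("/", "_")
path = Path("submitted_jobs") / f"training_{safe}_{stamp}.py"
# e.g. submitted_jobs/training_cppe-5_20240101_120000.py
```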
Critical directives
These rules prevent common failures. Follow them exactly.
1. Job submission: `hf_jobs` MCP tool vs Python API

Preferred: the `hf_jobs()` MCP tool:

```python
hf_jobs("uv", {"script": training_script_content, "flavor": "a10g-large", "timeout": "4h", "secrets": {"HF_TOKEN": "$HF_TOKEN"}})
```

If the `hf_jobs` MCP tool is unavailable, use the `huggingface_hub` Python API directly:

```python
from huggingface_hub import HfApi, get_token
api = HfApi()
job_info = api.run_uv_job(
    script="path/to/training_script.py",  # file PATH, NOT content
    script_args=["--dataset_name", "cppe-5", ...],
    flavor="a10g-large",
    timeout=14400,  # seconds (4 hours)
    env={"PYTHONUNBUFFERED": "1"},
    secrets={"HF_TOKEN": get_token()},  # MUST use get_token(), NOT "$HF_TOKEN"
)
print(f"Job ID: {job_info.id}")
```

Critical differences between the two methods:
| | `hf_jobs` MCP tool | `HfApi().run_uv_job()` |
|---|---|---|
| `script` | Python code string or URL (NOT local paths) | File path to a local `.py` script |
| Token in secrets | `"$HF_TOKEN"` | `get_token()` |
| Timeout format | String (`"4h"`) | Seconds (`14400`) |
Rules for both methods:
- The training script MUST include PEP 723 inline metadata with dependencies
- Do NOT use `image` or `command` parameters (those belong to `run_job()`, not `run_uv_job()`)
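A PEP 723 header at the top of the training script looks like this. The dependency list below is illustrative only — match it to what your script actually imports:

```python
# /// script
# requires-python = ">=3.10"
# dependencies = [
#     "torch",
#     "transformers",
#     "datasets",
#     "trackio",
# ]
# ///
```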
2. Authentication via job secrets + explicit hub_token injection
Job config MUST include the token in secrets — syntax depends on submission method (see table above).
Training script requirement: The Transformers `Trainer` calls `create_repo(token=self.args.hub_token)` during `__init__()` when `push_to_hub=True`. The training script MUST inject `HF_TOKEN` into `training_args.hub_token` AFTER parsing args but BEFORE creating the `Trainer`. The `scripts/object_detection_training.py` template already includes this:

```python
hf_token = os.environ.get("HF_TOKEN")
if training_args.push_to_hub and not training_args.hub_token:
    if hf_token:
        training_args.hub_token = hf_token
```

If you write a custom script, you MUST include this token injection before the `Trainer(...)` call.
- Do NOT call `login()` in custom scripts unless replicating the full pattern from `scripts/object_detection_training.py`
- Do NOT rely on implicit token resolution (`hub_token=None`) — unreliable in Jobs
- See the `hugging-face-jobs` skill → Token Usage Guide for full details
3. JobInfo attribute
Access the job identifier using `.id` (NOT `.job_id` or `.name` — these don't exist):

```python
job_info = api.run_uv_job(...)  # or hf_jobs("uv", {...})
job_id = job_info.id  # Correct -- returns string like "687fb701029421ae5549d998"
```

4. Required training flags and HfArgumentParser boolean syntax
`scripts/object_detection_training.py` uses `HfArgumentParser`, so boolean flags in `script_args` follow its conventions:
- `bool` fields (e.g., `push_to_hub`, `do_train`): Use as bare flags (`--push_to_hub`) or negate with `--no_` prefix (`--no_remove_unused_columns`)
- `Optional[bool]` fields (e.g., `greater_is_better`): MUST pass explicit value (`--greater_is_better True`). Bare `--greater_is_better` causes `error: expected one argument`
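The distinction can be reproduced with plain `argparse` — an analogy only, since `HfArgumentParser` generates these flags from dataclass fields:

```python
import argparse

p = argparse.ArgumentParser()
# bool field analog: bare flag, plus an explicit --no_* negation flag
p.add_argument("--push_to_hub", action="store_true")
p.add_argument("--no_remove_unused_columns", action="store_true")
# Optional[bool] field analog: an explicit value is required
p.add_argument("--greater_is_better", type=lambda s: s == "True", default=None)

args = p.parse_args(["--push_to_hub", "--greater_is_better", "True"])
assert args.push_to_hub and args.greater_is_better is True
# A bare --greater_is_better (no value) would abort with
# "error: expected one argument", mirroring HfArgumentParser
```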
Required flags for object detection:

```
--no_remove_unused_columns      # MUST: preserves image column for pixel_values
--no_eval_do_concat_batches     # MUST: images have different numbers of target boxes
--push_to_hub                   # MUST: environment is ephemeral
--hub_model_id username/model-name
--metric_for_best_model eval_map
--greater_is_better True        # MUST pass "True" explicitly (Optional[bool])
--do_train
--do_eval
```

Required flags for image classification:

```
--no_remove_unused_columns      # MUST: preserves image column for pixel_values
--push_to_hub                   # MUST: environment is ephemeral
--hub_model_id username/model-name
--metric_for_best_model eval_accuracy
--greater_is_better True        # MUST pass "True" explicitly (Optional[bool])
--do_train
--do_eval
```

Required flags for SAM/SAM2 segmentation:

```
--remove_unused_columns False   # MUST: preserves input_boxes/input_points
--push_to_hub                   # MUST: environment is ephemeral
--hub_model_id username/model-name
--do_train
--prompt_type bbox              # or "point"
--dataloader_pin_memory False   # MUST: avoids pin_memory issues with custom collator
```
5. Timeout management
Default 30 min is TOO SHORT for object detection. Set minimum 2-4 hours. Add 30% buffer for model loading, preprocessing, and Hub push.
| Scenario | Timeout |
|---|---|
| Quick test (100-200 images, 5-10 epochs) | 1h |
| Development (500-1K images, 15-20 epochs) | 2-3h |
| Production (1K-5K images, 30 epochs) | 4-6h |
| Large dataset (5K+ images) | 6-12h |
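Turning an estimated training time into a timeout value with the 30% buffer is simple arithmetic (a sketch; the helper name is illustrative):

```python
def timeout_seconds(est_train_hours: float, buffer: float = 0.30) -> int:
    """Estimated training time plus a buffer for model loading,
    preprocessing, and the final Hub push."""
    return int(est_train_hours * (1 + buffer) * 3600)

timeout_seconds(3.0)   # 14040 -- pass as timeout= in run_uv_job()
```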
6. Trackio monitoring
Trackio is always enabled in the object detection training script — it calls `trackio.init()` and `trackio.finish()` automatically. No need to pass `--report_to trackio`. The project name is taken from `--output_dir` and the run name from `--run_name`. For image classification, pass `--report_to trackio` in `TrainingArguments`.

Dashboard at: `https://huggingface.co/spaces/{username}/trackio`

Model & hardware selection
Recommended object detection models
| Model | Params | Use case |
|---|---|---|
| `ustc-community/dfine-small-coco` | 10.4M | Best starting point — fast, cheap, SOTA quality |
| | 20.2M | Lightweight real-time detector |
| | 31.4M | Higher accuracy, still efficient |
| | 43M | Strong real-time baseline |
| | 63.5M | Best accuracy (pretrained on Objects365) |
| | 76M | Largest RT-DETR v2 variant |

Start with `ustc-community/dfine-small-coco` for fast iteration. Move to D-FINE Large or RT-DETR v2 R50 for better accuracy.

Recommended image classification models
All `timm/` models work out of the box via `AutoModelForImageClassification` (loaded as `TimmWrapperForImageClassification`). See references/timm_trainer.md for details.

| Model | Params | Use case |
|---|---|---|
| `timm/mobilenetv3_small_100.lamb_in1k` | 2.5M | Ultra-lightweight — mobile/edge, fastest training |
| | 5.6M | Mobile transformer (MobileViT) — good accuracy/speed trade-off |
| `timm/resnet50.a1_in1k` | 25.6M | Strong CNN baseline — reliable, well-studied |
| `timm/vit_base_patch16_dinov3.lvd1689m` | 86.6M | Best accuracy — DINOv3 self-supervised ViT |

Start with `timm/mobilenetv3_small_100.lamb_in1k` for fast iteration. Move to `timm/resnet50.a1_in1k` or `timm/vit_base_patch16_dinov3.lvd1689m` for better accuracy.

Recommended SAM/SAM2 segmentation models
| Model | Params | Use case |
|---|---|---|
| | 38.9M | Fastest SAM2 — good for quick experiments |
| `facebook/sam2.1-hiera-small` | 46.0M | Best starting point — good quality/speed balance |
| `facebook/sam2.1-hiera-base-plus` | 80.8M | Higher capacity for complex segmentation |
| `facebook/sam2.1-hiera-large` | 224.4M | Best SAM2 accuracy — requires more VRAM |
| `facebook/sam-vit-base` | 93.7M | Original SAM — ViT-B backbone |
| `facebook/sam-vit-large` | 312.3M | Original SAM — ViT-L backbone |
| `facebook/sam-vit-huge` | 641.1M | Original SAM — ViT-H, best SAM v1 accuracy |

Start with `facebook/sam2.1-hiera-small` for fast iteration. SAM2 models are generally more efficient than SAM v1 at similar quality. Only the mask decoder is trained by default (vision and prompt encoders are frozen).

Hardware recommendation
All recommended OD and IC models are under 100M params — `t4-small` (16 GB VRAM, $0.40/hr) is sufficient for all of them. Image classification models are generally smaller and faster than object detection models — `t4-small` handles even ViT-Base comfortably. For SAM2 models up to `hiera-base-plus`, `t4-small` is sufficient since only the mask decoder is trained. For `sam2.1-hiera-large` or SAM v1 models, use `l4x1` or `a10g-large`. Only upgrade if you hit OOM from large batch sizes — reduce batch size first before switching hardware. Common upgrade path: `t4-small` → `l4x1` ($0.80/hr, 24 GB) → `a10g-large` ($1.50/hr, 24 GB).

For full hardware flavor list: refer to the `hugging-face-jobs` skill. For cost estimation: run `scripts/estimate_cost.py`.
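A back-of-envelope cost check using the hourly rates listed above (a sketch; `scripts/estimate_cost.py` is the proper tool):

```python
# Hourly rates from the hardware options above
RATES = {"t4-small": 0.40, "l4x1": 0.80, "a10g-large": 1.50, "a100-large": 2.50}

def estimate_cost(flavor: str, hours: float) -> float:
    """Rough job cost in USD: hourly rate x wall-clock hours."""
    return round(RATES[flavor] * hours, 2)

estimate_cost("t4-small", 4.0)   # 1.6
```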
Quick start — Object Detection
The `script_args` below are the same for both submission methods. See directive #1 for the critical differences between them.

```python
OD_SCRIPT_ARGS = [
    "--model_name_or_path", "ustc-community/dfine-small-coco",
    "--dataset_name", "cppe-5",
    "--image_square_size", "640",
    "--output_dir", "dfine_finetuned",
    "--num_train_epochs", "30",
    "--per_device_train_batch_size", "8",
    "--learning_rate", "5e-5",
    "--eval_strategy", "epoch",
    "--save_strategy", "epoch",
    "--save_total_limit", "2",
    "--load_best_model_at_end",
    "--metric_for_best_model", "eval_map",
    "--greater_is_better", "True",
    "--no_remove_unused_columns",
    "--no_eval_do_concat_batches",
    "--push_to_hub",
    "--hub_model_id", "username/model-name",
    "--do_train",
    "--do_eval",
]
```

```python
from huggingface_hub import HfApi, get_token
api = HfApi()
job_info = api.run_uv_job(
    script="scripts/object_detection_training.py",
    script_args=OD_SCRIPT_ARGS,
    flavor="t4-small",
    timeout=14400,
    env={"PYTHONUNBUFFERED": "1"},
    secrets={"HF_TOKEN": get_token()},
)
print(f"Job ID: {job_info.id}")
```

Key OD script_args
- `--model_name_or_path` — recommended: `"ustc-community/dfine-small-coco"` (see model table above)
- `--dataset_name` — the Hub dataset ID
- `--image_square_size` — 480 (fast iteration) or 800 (better accuracy)
- `--hub_model_id` — `"username/model-name"` for Hub persistence
- `--num_train_epochs` — 30 typical for convergence
- `--train_val_split` — fraction to split for validation (default 0.15), set if dataset lacks a validation split
- `--max_train_samples` — truncate training set (useful for quick test runs, e.g. `"785"` for ~10% of a 7.8K dataset)
- `--max_eval_samples` — truncate evaluation set
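If you prefer building the flat `script_args` list from a dict, a small helper keeps it readable. The helper is illustrative only, not part of the skill's scripts:

```python
def to_script_args(options: dict, flags: list) -> list:
    """Flatten {'--key': value} options plus bare flags into the
    ['--key', 'value', ...] list that run_uv_job() expects."""
    args = []
    for key, value in options.items():
        args += [key, str(value)]
    return args + list(flags)

args = to_script_args(
    {"--dataset_name": "cppe-5", "--num_train_epochs": 30},
    ["--push_to_hub", "--do_train"],
)
# args == ["--dataset_name", "cppe-5", "--num_train_epochs", "30",
#          "--push_to_hub", "--do_train"]
```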
Quick start — Image Classification
```python
IC_SCRIPT_ARGS = [
    "--model_name_or_path", "timm/mobilenetv3_small_100.lamb_in1k",
    "--dataset_name", "ethz/food101",
    "--output_dir", "food101_classifier",
    "--num_train_epochs", "5",
    "--per_device_train_batch_size", "32",
    "--per_device_eval_batch_size", "32",
    "--learning_rate", "5e-5",
    "--eval_strategy", "epoch",
    "--save_strategy", "epoch",
    "--save_total_limit", "2",
    "--load_best_model_at_end",
    "--metric_for_best_model", "eval_accuracy",
    "--greater_is_better", "True",
    "--no_remove_unused_columns",
    "--push_to_hub",
    "--hub_model_id", "username/food101-classifier",
    "--do_train",
    "--do_eval",
]
```

```python
from huggingface_hub import HfApi, get_token
api = HfApi()
job_info = api.run_uv_job(
    script="scripts/image_classification_training.py",
    script_args=IC_SCRIPT_ARGS,
    flavor="t4-small",
    timeout=7200,
    env={"PYTHONUNBUFFERED": "1"},
    secrets={"HF_TOKEN": get_token()},
)
print(f"Job ID: {job_info.id}")
```

Key IC script_args
- `--model_name_or_path` — any `timm/` model or Transformers classification model (see model table above)
- `--dataset_name` — the Hub dataset ID
- `--image_column_name` — column containing PIL images (default: `"image"`)
- `--label_column_name` — column containing class labels (default: `"label"`)
- `--hub_model_id` — `"username/model-name"` for Hub persistence
- `--num_train_epochs` — 3-5 typical for classification (fewer than OD)
- `--per_device_train_batch_size` — 16-64 (classification models use less memory than OD)
- `--train_val_split` — fraction to split for validation (default 0.15), set if dataset lacks a validation split
- `--max_train_samples` / `--max_eval_samples` — truncate for quick tests
Quick start — SAM/SAM2 Segmentation
python
SAM_SCRIPT_ARGS = [
"--model_name_or_path", "facebook/sam2.1-hiera-small",
"--dataset_name", "merve/MicroMat-mini",
"--prompt_type", "bbox",
"--prompt_column_name", "prompt",
"--output_dir", "sam2-finetuned",
"--num_train_epochs", "30",
"--per_device_train_batch_size", "4",
"--learning_rate", "1e-5",
"--logging_steps", "1",
"--save_strategy", "epoch",
"--save_total_limit", "2",
"--remove_unused_columns", "False",
"--dataloader_pin_memory", "False",
"--push_to_hub",
"--hub_model_id", "username/sam2-finetuned",
"--do_train",
"--report_to", "trackio",
]python
```python
from huggingface_hub import HfApi, get_token

api = HfApi()
job_info = api.run_uv_job(
    script="scripts/sam_segmentation_training.py",
    script_args=SAM_SCRIPT_ARGS,
    flavor="t4-small",
    timeout=7200,
    env={"PYTHONUNBUFFERED": "1"},
    secrets={"HF_TOKEN": get_token()},
)
print(f"Job ID: {job_info.id}")
```
Key SAM script_args
- --model_name_or_path — SAM or SAM2 model (see model table above); auto-detects SAM vs SAM2
- --dataset_name — the Hub dataset ID (e.g., "merve/MicroMat-mini")
- --prompt_type — "bbox" or "point" — type of prompt in the dataset
- --prompt_column_name — column with JSON-encoded prompts (default: "prompt")
- --bbox_column_name — dedicated bbox column (alternative to JSON prompt column)
- --point_column_name — dedicated point column (alternative to JSON prompt column)
- --mask_column_name — column with ground-truth masks (default: "mask")
- --hub_model_id — "username/model-name" for Hub persistence
- --num_train_epochs — 20-30 typical for SAM fine-tuning
- --per_device_train_batch_size — 2-4 (SAM models use significant memory)
- --freeze_vision_encoder / --freeze_prompt_encoder — freeze encoder weights (default: both frozen, only the mask decoder trains)
- --train_val_split — fraction to split for validation (default 0.1)
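The prompt column holds JSON-encoded prompts, but the exact schema is defined by scripts/sam_segmentation_training.py; run scripts/dataset_inspector.py on a known-good dataset such as merve/MicroMat-mini to confirm it. As a hedged illustration only, one plausible encoding is a flat coordinate list in absolute pixels:

```python
import json

# Hypothetical prompt encodings; verify the real schema with
# scripts/dataset_inspector.py before building a dataset.
bbox_prompt = json.dumps([54, 22, 310, 287])  # [x_min, y_min, x_max, y_max], absolute pixels
point_prompt = json.dumps([182, 154])         # [x, y] foreground click

record = {
    "prompt": bbox_prompt,  # --prompt_column_name (default "prompt")
    # "mask": <PIL.Image>,  # --mask_column_name (default "mask"), ground truth
}

decoded = json.loads(record["prompt"])
print(decoded)  # → [54, 22, 310, 287]
```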
Checking job status
MCP tool (if available):
```
hf_jobs("ps")                                  # List all jobs
hf_jobs("logs", {"job_id": "your-job-id"})     # View logs
hf_jobs("inspect", {"job_id": "your-job-id"})  # Job details
```

Python API fallback:
```python
from huggingface_hub import HfApi

api = HfApi()
api.list_jobs()                         # List all jobs
api.get_job_logs(job_id="your-job-id")  # View logs
api.get_job(job_id="your-job-id")       # Job details
```
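For unattended runs it can help to poll until the job reaches a terminal state. This is a self-contained sketch, not part of the provided scripts: the terminal status strings are assumptions, and fetch_status is an injected callable (e.g. one wrapping api.get_job) so the helper itself needs no network access:

```python
import time

def wait_for_job(fetch_status, poll_seconds=30, timeout_seconds=7200):
    """Poll a status callable until the job reaches a terminal state.

    fetch_status is injected by the caller (for example, a lambda wrapping
    api.get_job); the terminal state names below are assumed, not taken
    from the Jobs API documentation.
    """
    deadline = time.monotonic() + timeout_seconds
    while time.monotonic() < deadline:
        status = fetch_status()
        if status in ("COMPLETED", "ERROR", "CANCELED"):
            return status
        time.sleep(poll_seconds)
    raise TimeoutError("job did not finish within the timeout")
```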
Common failure modes
OOM (CUDA out of memory)
Reduce per_device_train_batch_size (try 4, then 2), reduce IMAGE_SIZE, or upgrade hardware.
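The "try 4, then 2" advice amounts to a halving retry loop, sketched below. train_fn is a hypothetical stand-in for launching a full training run; in a real run the OOM surfaces as a CUDA out-of-memory RuntimeError:

```python
def train_with_backoff(train_fn, start_batch_size=8, min_batch_size=1):
    """Retry training with a halved batch size on CUDA out-of-memory errors.

    train_fn is a caller-supplied stand-in for a full training run; torch's
    CUDA OOM error is a RuntimeError whose message contains "out of memory".
    """
    batch_size = start_batch_size
    while batch_size >= min_batch_size:
        try:
            return train_fn(batch_size)
        except RuntimeError as err:
            if "out of memory" not in str(err).lower():
                raise  # unrelated failure: don't mask it
            batch_size //= 2  # e.g. 8 -> 4 -> 2 -> 1
    raise RuntimeError("out of memory even at the minimum batch size")
```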
Dataset format errors
Run scripts/dataset_inspector.py first. The training script auto-detects xyxy vs xywh, converts string categories to integer IDs, and adds image_id if missing. Ensure objects.bbox contains 4-value coordinate lists in absolute pixels and objects.category contains either integer IDs or string labels.
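For intuition, here is one simplified heuristic for telling xyxy from xywh boxes; the actual auto-detection logic in the training script may differ. A valid xyxy box must have x_max > x_min and y_max > y_min, so any violation suggests the last two values are width and height:

```python
def guess_bbox_format(bboxes):
    """Guess whether absolute-pixel boxes are xyxy or xywh.

    Simplified heuristic, not the training script's exact logic: if any box
    fails the x_max > x_min / y_max > y_min check, treat the data as xywh.
    """
    for x0, y0, a, b in bboxes:
        if a <= x0 or b <= y0:
            return "xywh"
    return "xyxy"

def to_xyxy(bbox, fmt):
    """Normalize one box to xyxy coordinates."""
    x, y, a, b = bbox
    return [x, y, a, b] if fmt == "xyxy" else [x, y, x + a, y + b]
```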
Hub push failures (401)
Verify: (1) job secrets include the token (see directive #2), (2) the script sets training_args.hub_token BEFORE creating the Trainer, (3) push_to_hub=True is set, (4) hub_model_id is correct, (5) the token has write permissions.
Job timeout
Increase the timeout (see directive #5 table), reduce epochs/dataset size, or use a checkpoint strategy with hub_strategy="every_save".
KeyError: 'test' (missing test split)
The object detection training script handles this gracefully — it falls back to the validation split. Ensure you're using the latest scripts/object_detection_training.py.
Single-class dataset: "iteration over a 0-d tensor"
With only one class, torchmetrics.MeanAveragePrecision returns scalar (0-d) tensors for the per-class metrics. The scripts/object_detection_training.py template handles this by calling .unsqueeze(0) on those tensors. Ensure you're using the latest template.
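The pattern can be illustrated with a small guard. ensure_1d is a hypothetical helper mirroring the template's fix, written against any tensor-like object exposing ndim and unsqueeze so it runs without torch installed:

```python
def ensure_1d(per_class_metric):
    """Make per-class metric values safely iterable.

    Hypothetical helper mirroring the template's fix: a 0-d (scalar) tensor
    raises "iteration over a 0-d tensor" when iterated, so unsqueeze it back
    to shape (1,) first. Objects without ndim are passed through unchanged.
    """
    if getattr(per_class_metric, "ndim", 1) == 0:
        return per_class_metric.unsqueeze(0)
    return per_class_metric
```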
Poor detection performance (mAP < 0.15)
Increase epochs (30-50), ensure 500+ images, check per-class mAP for imbalanced classes, try different learning rates (1e-5 to 1e-4), increase image size.
For comprehensive troubleshooting: see references/reliability_principles.md
Reference files
- scripts/object_detection_training.py — Production-ready object detection training script
- scripts/image_classification_training.py — Production-ready image classification training script (supports timm models)
- scripts/sam_segmentation_training.py — Production-ready SAM/SAM2 segmentation training script (bbox & point prompts)
- scripts/dataset_inspector.py — Validate dataset format for OD, classification, and SAM segmentation
- scripts/estimate_cost.py — Estimate training costs for any vision model (includes SAM/SAM2)
- references/object_detection_training_notebook.md — Object detection training workflow, augmentation strategies, and training patterns
- references/image_classification_training_notebook.md — Image classification training workflow with ViT, preprocessing, and evaluation
- references/finetune_sam2_trainer.md — SAM2 fine-tuning walkthrough with MicroMat dataset, DiceCE loss, and Trainer integration
- references/timm_trainer.md — Using timm models with HF Trainer (TimmWrapper, transforms, full example)
- references/hub_saving.md — Detailed Hub persistence guide and verification checklist
- references/reliability_principles.md — Failure prevention principles from production experience
External links
- Transformers Object Detection Guide
- Transformers Image Classification Guide
- DETR Model Documentation
- ViT Model Documentation
- HF Jobs Guide — Main Jobs documentation
- HF Jobs Configuration — Hardware, secrets, timeouts, namespaces
- HF Jobs CLI Reference — Command line interface
- Object Detection Models
- Image Classification Models
- SAM2 Model Documentation
- SAM Model Documentation
- Object Detection Datasets
- Image Classification Datasets