Vision Model Training on Hugging Face Jobs
Train object detection, image classification, and SAM/SAM2 segmentation models on managed cloud GPUs. No local GPU setup required—results are automatically saved to the Hugging Face Hub.
When to Use This Skill
Use this skill when users want to:
- Fine-tune object detection models (D-FINE, RT-DETR v2, DETR, YOLOS) on cloud GPUs or locally
- Fine-tune image classification models (timm: MobileNetV3, MobileViT, ResNet, ViT/DINOv3, or any Transformers classifier) on cloud GPUs or locally
- Fine-tune SAM or SAM2 models for segmentation / image matting using bbox or point prompts
- Train bounding-box detectors on custom datasets
- Train image classifiers on custom datasets
- Train segmentation models on custom mask datasets with prompts
- Run vision training jobs on Hugging Face Jobs infrastructure
- Ensure trained vision models are permanently saved to the Hub
Related Skills
- hugging-face-jobs — General HF Jobs infrastructure: token authentication, hardware flavors, timeout management, cost estimation, secrets, environment variables, scheduled jobs, and result persistence. Refer to that skill for any non-training-specific Jobs questions (e.g., "how do secrets work?", "what hardware is available?", "how do I pass tokens?").
- hugging-face-model-trainer — TRL-based language model training (SFT, DPO, GRPO). Use that skill for text/language model fine-tuning.
Local Script Execution
Helper scripts use PEP 723 inline dependencies. Run them with `uv`:

```bash
uv run scripts/dataset_inspector.py --dataset username/dataset-name --split train
uv run scripts/estimate_cost.py --help
```
Prerequisites Checklist
Before starting any training job, verify:
Account & Authentication
- Hugging Face Account with a Pro, Team, or Enterprise plan (Jobs require a paid plan)
- Authenticated login: check via the MCP whoami tool or `hf auth whoami` in the terminal
- Token has write permissions
- MUST pass the token in job secrets — see directives #1 and #2 below for the syntax (MCP tool vs Python API)
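A quick way to verify authentication before submitting anything — a minimal sketch using the public `huggingface_hub` API (it confirms the token resolves to your account, but does not report write scope):

```python
from huggingface_hub import HfApi, get_token

# Confirm a token is available locally
token = get_token()
assert token is not None, "No token found -- run `hf auth login` first"

# Confirm the token is valid by resolving it to an account
api = HfApi()
user = api.whoami()  # raises an HTTP error if the token is invalid
print(f"Logged in as: {user['name']}")
# Write permission is not reported here -- check the token's role in your Hub settings if unsure.
```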
Dataset Requirements — Object Detection
- Dataset must exist on Hub
- Annotations must use the `objects` column with `bbox` and `category` (and optionally `area`) sub-fields
- Bboxes can be in xywh (COCO) or xyxy (Pascal VOC) format — auto-detected and converted
- Categories can be integers or strings — strings are auto-remapped to integer IDs
- The `area` sub-field is optional — generated automatically if missing
- ALWAYS validate unknown datasets before GPU training (see Dataset Validation section)
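The inspector script automates the checks above; for a quick manual look first, a sketch like this (using `cppe-5`, the training script's default dataset) shows whether records match the expected layout:

```python
from datasets import load_dataset

ds = load_dataset("cppe-5", split="train")
print(ds.features)  # expect an "objects" feature with bbox/category sub-fields

sample = ds[0]
print(sample["objects"]["bbox"][:2])      # e.g. [[x, y, w, h], ...] -- COCO xywh here
print(sample["objects"]["category"][:2])  # integer IDs or strings, both accepted
```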
Dataset Requirements — Image Classification
- Dataset must exist on Hub
- Must have an `image` column (PIL images) and a `label` column (integer class IDs or strings)
- The label column can be a `ClassLabel` type (with names) or plain integers/strings — strings are auto-remapped
- Common image/label column names are auto-detected (e.g., `image`/`img`, `label`/`labels`)
- ALWAYS validate unknown datasets before GPU training (see Dataset Validation section)
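Again, the inspector automates this; a manual spot-check (using `ethz/food101` from the quick start below) looks like:

```python
from datasets import ClassLabel, load_dataset

ds = load_dataset("ethz/food101", split="train[:100]")
print(type(ds[0]["image"]))  # expect a PIL image

label_feature = ds.features["label"]
if isinstance(label_feature, ClassLabel):
    print(label_feature.int2str(ds[0]["label"]))  # ClassLabel: resolve the class name
else:
    print(ds[0]["label"])  # plain ints/strings are auto-remapped by the script
```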
Dataset Requirements — SAM/SAM2 Segmentation
- Dataset must exist on Hub
- Must have an `image` column (PIL images) and a mask column with binary ground-truth segmentation masks
- Must have a prompt — either:
  - a prompt column with JSON containing boxes or points,
  - OR a dedicated bbox column with per-image box values,
  - OR a dedicated point column with per-image point values
- Bboxes should be in xyxy format (absolute pixel coordinates)
- Example dataset: `merve/MicroMat-mini` (image matting with bbox prompts)
- ALWAYS validate unknown datasets before GPU training (see Dataset Validation section)
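As an illustration, a record in the JSON prompt-column variant might look like the following — a hypothetical layout for illustration only; run the inspector to see your dataset's actual schema:

```python
import json

# Hypothetical record for the JSON prompt-column variant:
record = {
    "image": "<PIL.Image (RGB input)>",
    "label": "<PIL.Image (binary ground-truth mask)>",
    "prompt": json.dumps({"boxes": [[12, 34, 200, 180]]}),  # xyxy, absolute pixels
}
# A point-prompt record would carry something like {"points": [[x, y], ...]} instead.
```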
Critical Settings
- Timeout must exceed expected training time — the default 30 min is TOO SHORT. See directive #5 for recommended values.
- Hub push must be enabled — `--push_to_hub`, `--hub_model_id username/model-name`, and the token in job secrets
Dataset Validation
Validate dataset format BEFORE launching GPU training to prevent the #1 cause of training failures: format mismatches.
ALWAYS validate for unknown/custom datasets or any dataset you haven't trained with before.
Skip it for `cppe-5` (the default dataset in the training script).
Running the Inspector
Option 1: Via HF Jobs (recommended — avoids local SSL/dependency issues):
```python
hf_jobs("uv", {
    "script": "path/to/dataset_inspector.py",
    "script_args": ["--dataset", "username/dataset-name", "--split", "train"]
})
```
Option 2: Locally:
```bash
uv run scripts/dataset_inspector.py --dataset username/dataset-name --split train
```
Option 3: Via the `huggingface_hub` Python API (if the hf_jobs MCP tool is unavailable):
```python
from huggingface_hub import HfApi

api = HfApi()
api.run_uv_job(
    script="scripts/dataset_inspector.py",
    script_args=["--dataset", "username/dataset-name", "--split", "train"],
    flavor="cpu-basic",
    timeout=300,
)
```
Reading Results
- A "ready for training" verdict — the dataset is compatible; use it directly
- A "needs preprocessing" verdict — fix the format first (mapping code is provided in the output)
Automatic Bbox Preprocessing
The object detection training script (scripts/object_detection_training.py) automatically handles bbox format detection (xyxy→xywh conversion), bbox sanitization, `area` generation, string category→integer remapping, and dataset truncation. No manual preprocessing is needed — just ensure the dataset has `image` and `objects` columns.
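For reference, the conversions the script applies are equivalent to this simplified sketch (an illustration of the idea, not the template's exact code):

```python
def xyxy_to_xywh(box):
    """Convert [x_min, y_min, x_max, y_max] to COCO [x, y, width, height]."""
    x_min, y_min, x_max, y_max = box
    return [x_min, y_min, x_max - x_min, y_max - y_min]

def add_area(objects):
    """Generate the optional `area` sub-field from xywh boxes when missing."""
    objects["area"] = [w * h for (_, _, w, h) in objects["bbox"]]
    return objects
```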
Training workflow
Copy this checklist and track progress:
Training Progress:
- [ ] Step 1: Verify prerequisites (account, token, dataset)
- [ ] Step 2: Validate dataset format (run dataset_inspector.py)
- [ ] Step 3: Ask user about dataset size and validation split
- [ ] Step 4: Prepare training script (OD: scripts/object_detection_training.py, IC: scripts/image_classification_training.py, SAM: scripts/sam_segmentation_training.py)
- [ ] Step 5: Save script locally, submit job, and report details
Step 1: Verify prerequisites
Follow the Prerequisites Checklist above.
Step 2: Validate dataset
Run the dataset inspector BEFORE spending GPU time. See "Dataset Validation" section above.
Step 3: Ask user preferences
ALWAYS use the AskUserQuestion tool with option-style format:
```python
AskUserQuestion({
    "questions": [
        {
            "question": "Do you want to run a quick test with a subset of the data first?",
            "header": "Dataset Size",
            "options": [
                {"label": "Quick test run (10% of data)", "description": "Faster, cheaper (~30-60 min, ~$2-5) to validate setup"},
                {"label": "Full dataset (Recommended)", "description": "Complete training for best model quality"}
            ],
            "multiSelect": false
        },
        {
            "question": "Do you want to create a validation split from the training data?",
            "header": "Split data",
            "options": [
                {"label": "Yes (Recommended)", "description": "Automatically split 15% of training data for validation"},
                {"label": "No", "description": "Use existing validation split from dataset"}
            ],
            "multiSelect": false
        },
        {
            "question": "Which GPU hardware do you want to use?",
            "header": "Hardware Flavor",
            "options": [
                {"label": "t4-small ($0.40/hr)", "description": "1x T4, 16 GB VRAM — sufficient for all OD models under 100M params"},
                {"label": "l4x1 ($0.80/hr)", "description": "1x L4, 24 GB VRAM — more headroom for large images or batch sizes"},
                {"label": "a10g-large ($1.50/hr)", "description": "1x A10G, 24 GB VRAM — faster training, more CPU/RAM"},
                {"label": "a100-large ($2.50/hr)", "description": "1x A100, 80 GB VRAM — fastest, for very large datasets or image sizes"}
            ],
            "multiSelect": false
        }
    ]
})
```
Step 4: Prepare training script
For object detection, use scripts/object_detection_training.py as the production-ready template. For image classification, use scripts/image_classification_training.py. For SAM/SAM2 segmentation, use scripts/sam_segmentation_training.py. All scripts use `HfArgumentParser` — all configuration is passed via CLI arguments in `script_args`, NOT by editing Python variables. For timm model details, see references/timm_trainer.md. For SAM2 training details, see references/finetune_sam2_trainer.md.
Step 5: Save script, submit job, and report
- Save the script locally in the workspace root (create a directory for it if needed) with a descriptive name like `training_<dataset>_<YYYYMMDD_HHMMSS>.py`. Tell the user the path.
- Submit using the `hf_jobs` MCP tool (preferred) or `HfApi.run_uv_job` — see directive #1 for both methods. Pass all config via `script_args`.
- Report the job ID (from the `.id` attribute), the monitoring URL, the Trackio dashboard (https://huggingface.co/spaces/{username}/trackio), the expected time, and the estimated cost.
- Wait for user to request status checks — don't poll automatically. Training jobs run asynchronously and can take hours.
Critical directives
These rules prevent common failures. Follow them exactly.
1. Job submission: MCP tool vs Python API
`hf_jobs` is an MCP tool, NOT a Python function. Do NOT try to import it from `huggingface_hub`. Call it as a tool:

```python
hf_jobs("uv", {"script": training_script_content, "flavor": "a10g-large", "timeout": "4h", "secrets": {"HF_TOKEN": "$HF_TOKEN"}})
```
If the MCP tool is unavailable, use the Python API directly:
```python
from huggingface_hub import HfApi, get_token

api = HfApi()
job_info = api.run_uv_job(
    script="path/to/training_script.py",  # file PATH, NOT content
    script_args=["--dataset_name", "cppe-5", ...],
    flavor="a10g-large",
    timeout=14400,  # seconds (4 hours)
    env={"PYTHONUNBUFFERED": "1"},
    secrets={"HF_TOKEN": get_token()},  # MUST use get_token(), NOT "$HF_TOKEN"
)
print(f"Job ID: {job_info.id}")
```
Critical differences between the two methods:

| | MCP tool (`hf_jobs`) | Python API (`run_uv_job`) |
|---|---|---|
| `script` param | Python code string or URL (NOT local paths) | Path to a script file (NOT content) |
| Token in secrets | `"$HF_TOKEN"` (auto-replaced) | `get_token()` (actual token value) |
| Timeout format | String (`"4h"`) | Seconds (`14400`) |
Rules for both methods:
- The training script MUST include PEP 723 inline metadata with dependencies
- Do NOT pass image or command parameters (those belong to the generic `run` command, not `uv`)
2. Authentication via job secrets + explicit hub_token injection
Job config MUST include the token in secrets — the syntax depends on the submission method (see the table above).
Training script requirement: the Transformers `Trainer` calls `create_repo(token=self.args.hub_token)` during initialization when `push_to_hub=True`. The training script MUST inject the `HF_TOKEN` environment variable into `training_args.hub_token` AFTER parsing args but BEFORE creating the `Trainer`. The template scripts/object_detection_training.py already includes this:
```python
hf_token = os.environ.get("HF_TOKEN")
if training_args.push_to_hub and not training_args.hub_token:
    if hf_token:
        training_args.hub_token = hf_token
```
If you write a custom script, you MUST include this token injection before the `Trainer(...)` call.
- Do NOT hand-roll Hub authentication calls in custom scripts unless replicating the full pattern from scripts/object_detection_training.py
- Do NOT rely on implicit token resolution (e.g., a cached login) — it is unreliable in Jobs
- See the hugging-face-jobs skill → Token Usage Guide for full details
3. JobInfo attribute
Access the job identifier using `job_info.id` (NOT `job_info.job_id` or similar guesses — those attributes don't exist):

```python
job_info = api.run_uv_job(...)  # or hf_jobs("uv", {...})
job_id = job_info.id  # Correct -- returns a string like "687fb701029421ae5549d998"
```
4. Required training flags and HfArgumentParser boolean syntax
scripts/object_detection_training.py uses `HfArgumentParser` — all config is passed via `script_args`. Boolean arguments have two syntaxes:
- `bool` fields (e.g., `push_to_hub`, `do_train`): use as bare flags (`--push_to_hub`) or negate with the `no_` prefix (`--no_remove_unused_columns`)
- `Optional[bool]` fields (e.g., `greater_is_better`): MUST pass an explicit value (`--greater_is_better True`). A bare flag causes `error: expected one argument`
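The distinction comes from how `HfArgumentParser` maps dataclass field types onto argparse arguments; a minimal standalone repro:

```python
from dataclasses import dataclass, field
from typing import Optional

from transformers import HfArgumentParser

@dataclass
class DemoArgs:
    push_to_hub: bool = field(default=False)                 # bare flag: --push_to_hub
    remove_unused_columns: bool = field(default=True)        # negated flag: --no_remove_unused_columns
    greater_is_better: Optional[bool] = field(default=None)  # needs a value: --greater_is_better True

parser = HfArgumentParser(DemoArgs)
(args,) = parser.parse_args_into_dataclasses(
    ["--push_to_hub", "--no_remove_unused_columns", "--greater_is_better", "True"]
)
print(args)  # DemoArgs(push_to_hub=True, remove_unused_columns=False, greater_is_better=True)
```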
Required flags for object detection:
```
--no_remove_unused_columns       # MUST: preserves image column for pixel_values
--no_eval_do_concat_batches      # MUST: images have different numbers of target boxes
--push_to_hub                    # MUST: environment is ephemeral
--hub_model_id username/model-name
--metric_for_best_model eval_map
--greater_is_better True         # MUST pass "True" explicitly (Optional[bool])
--do_train
--do_eval
```
Required flags for image classification:
```
--no_remove_unused_columns       # MUST: preserves image column for pixel_values
--push_to_hub                    # MUST: environment is ephemeral
--hub_model_id username/model-name
--metric_for_best_model eval_accuracy
--greater_is_better True         # MUST pass "True" explicitly (Optional[bool])
--do_train
--do_eval
```
Required flags for SAM/SAM2 segmentation:
```
--remove_unused_columns False    # MUST: preserves input_boxes/input_points
--push_to_hub                    # MUST: environment is ephemeral
--hub_model_id username/model-name
--do_train
--prompt_type bbox               # or "point"
--dataloader_pin_memory False    # MUST: avoids pin_memory issues with custom collator
```
5. Timeout management
The default 30 min is TOO SHORT for object detection. Set a minimum of 2-4 hours. Add a 30% buffer for model loading, preprocessing, and Hub push.

| Scenario | Timeout |
|---|---|
| Quick test (100-200 images, 5-10 epochs) | 1h |
| Development (500-1K images, 15-20 epochs) | 2-3h |
| Production (1K-5K images, 30 epochs) | 4-6h |
| Large dataset (5K+ images) | 6-12h |
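To turn an expected training time into a job timeout with the 30% buffer applied:

```python
def safe_timeout_seconds(expected_train_hours: float, buffer: float = 0.30) -> int:
    """Expected training time plus headroom for model loading, preprocessing, and Hub push."""
    return int(expected_train_hours * (1 + buffer) * 3600)

print(safe_timeout_seconds(3))  # 14040 -- pass as `timeout=` to run_uv_job
# For the MCP tool, round up to a string form instead, e.g. "4h".
```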
6. Trackio monitoring
Trackio is always enabled in the object detection training script — it initializes and closes the run automatically, so there is no need to pass `--report_to trackio`; the project and run names are derived from the training arguments. For image classification, pass `--report_to trackio` in `script_args`.
Dashboard at: https://huggingface.co/spaces/{username}/trackio
Model & hardware selection
Recommended object detection models
| Model | Params | Use case |
|---|---|---|
| `ustc-community/dfine-small-coco` | 10.4M | Best starting point — fast, cheap, SOTA quality |
| RT-DETR v2 R18 | 20.2M | Lightweight real-time detector |
| `ustc-community/dfine-large-coco` | 31.4M | Higher accuracy, still efficient |
| RT-DETR v2 R50 | 43M | Strong real-time baseline |
| `ustc-community/dfine-xlarge-obj365` | 63.5M | Best accuracy (pretrained on Objects365) |
| RT-DETR v2 R101 | 76M | Largest RT-DETR v2 variant |

Start with `ustc-community/dfine-small-coco` for fast iteration. Move to D-FINE Large or RT-DETR v2 R50 for better accuracy.
Recommended image classification models
All `timm` models work out of the box via `AutoModelForImageClassification` (loaded as `TimmWrapperForImageClassification`). See references/timm_trainer.md for details.

| Model | Params | Use case |
|---|---|---|
| `timm/mobilenetv3_small_100.lamb_in1k` | 2.5M | Ultra-lightweight — mobile/edge, fastest training |
| `timm/mobilevit_s.cvnets_in1k` | 5.6M | Mobile transformer — good accuracy/speed trade-off |
| ResNet-50 (timm) | 25.6M | Strong CNN baseline — reliable, well-studied |
| `timm/vit_base_patch16_dinov3.lvd1689m` | 86.6M | Best accuracy — DINOv3 self-supervised ViT |

Start with `timm/mobilenetv3_small_100.lamb_in1k` for fast iteration. Move to ResNet-50 or `timm/vit_base_patch16_dinov3.lvd1689m` for better accuracy.
Recommended SAM/SAM2 segmentation models
| Model | Params | Use case |
|---|---|---|
| `facebook/sam2.1-hiera-tiny` | 38.9M | Fastest SAM2 — good for quick experiments |
| `facebook/sam2.1-hiera-small` | 46.0M | Best starting point — good quality/speed balance |
| `facebook/sam2.1-hiera-base-plus` | 80.8M | Higher capacity for complex segmentation |
| `facebook/sam2.1-hiera-large` | 224.4M | Best SAM2 accuracy — requires more VRAM |
| `facebook/sam-vit-base` | 93.7M | Original SAM — ViT-B backbone |
| `facebook/sam-vit-large` | 312.3M | Original SAM — ViT-L backbone |
| `facebook/sam-vit-huge` | 641.1M | Original SAM — ViT-H, best SAM v1 accuracy |

Start with `facebook/sam2.1-hiera-small` for fast iteration. SAM2 models are generally more efficient than SAM v1 at similar quality. Only the mask decoder is trained by default (the vision and prompt encoders are frozen).
Hardware recommendation
All recommended OD and IC models are under 100M params — `t4-small` (16 GB VRAM, $0.40/hr) is sufficient for all of them. Image classification models are generally smaller and faster than object detection models — `t4-small` handles even ViT-Base comfortably. For SAM2 models up to `facebook/sam2.1-hiera-base-plus`, `t4-small` is sufficient since only the mask decoder is trained. For `facebook/sam2.1-hiera-large` or the SAM v1 models, use `l4x1` or `a10g-large`. Only upgrade if you hit OOM from large batch sizes — reduce the batch size first before switching hardware. Common upgrade path: `t4-small` → `l4x1` ($0.80/hr, 24 GB) → `a10g-large` ($1.50/hr, 24 GB).
For the full hardware flavor list, refer to the hugging-face-jobs skill. For cost estimation, run scripts/estimate_cost.py.
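For a rough pre-check before running scripts/estimate_cost.py, job cost is simply the flavor's hourly rate times wall-clock hours (rates taken from the hardware options above):

```python
RATES_PER_HOUR = {"t4-small": 0.40, "l4x1": 0.80, "a10g-large": 1.50, "a100-large": 2.50}

def rough_cost(flavor: str, hours: float) -> float:
    """Upper-bound estimate: assumes the job runs for the full duration."""
    return RATES_PER_HOUR[flavor] * hours

print(rough_cost("t4-small", 4))  # 1.6 -> a 4-hour OD run on t4-small costs about $1.60
```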
Quick start — Object Detection
The `script_args` below are the same for both submission methods. See directive #1 for the critical differences between them.
```python
OD_SCRIPT_ARGS = [
    "--model_name_or_path", "ustc-community/dfine-small-coco",
    "--dataset_name", "cppe-5",
    "--image_square_size", "640",
    "--output_dir", "dfine_finetuned",
    "--num_train_epochs", "30",
    "--per_device_train_batch_size", "8",
    "--learning_rate", "5e-5",
    "--eval_strategy", "epoch",
    "--save_strategy", "epoch",
    "--save_total_limit", "2",
    "--load_best_model_at_end",
    "--metric_for_best_model", "eval_map",
    "--greater_is_better", "True",
    "--no_remove_unused_columns",
    "--no_eval_do_concat_batches",
    "--push_to_hub",
    "--hub_model_id", "username/model-name",
    "--do_train",
    "--do_eval",
]
```
```python
from huggingface_hub import HfApi, get_token

api = HfApi()
job_info = api.run_uv_job(
    script="scripts/object_detection_training.py",
    script_args=OD_SCRIPT_ARGS,
    flavor="t4-small",
    timeout=14400,
    env={"PYTHONUNBUFFERED": "1"},
    secrets={"HF_TOKEN": get_token()},
)
print(f"Job ID: {job_info.id}")
```
Key OD `script_args`:
- `--model_name_or_path` — recommended: `ustc-community/dfine-small-coco` (see the model table above)
- `--dataset_name` — the Hub dataset ID
- `--image_square_size` — 480 (fast iteration) or 800 (better accuracy)
- `--push_to_hub` and `--hub_model_id` — for Hub persistence
- `--num_train_epochs` — 30 typical for convergence
- `--train_val_split` — fraction to split for validation (default 0.15); set if the dataset lacks a validation split
- `--max_train_samples` — truncate the training set (useful for quick test runs, e.g. ~10% of a 7.8K-image dataset)
- `--max_eval_samples` — truncate the evaluation set
Quick start — Image Classification
```python
IC_SCRIPT_ARGS = [
    "--model_name_or_path", "timm/mobilenetv3_small_100.lamb_in1k",
    "--dataset_name", "ethz/food101",
    "--output_dir", "food101_classifier",
    "--num_train_epochs", "5",
    "--per_device_train_batch_size", "32",
    "--per_device_eval_batch_size", "32",
    "--learning_rate", "5e-5",
    "--eval_strategy", "epoch",
    "--save_strategy", "epoch",
    "--save_total_limit", "2",
    "--load_best_model_at_end",
    "--metric_for_best_model", "eval_accuracy",
    "--greater_is_better", "True",
    "--no_remove_unused_columns",
    "--push_to_hub",
    "--hub_model_id", "username/food101-classifier",
    "--do_train",
    "--do_eval",
]
```
```python
from huggingface_hub import HfApi, get_token

api = HfApi()
job_info = api.run_uv_job(
    script="scripts/image_classification_training.py",
    script_args=IC_SCRIPT_ARGS,
    flavor="t4-small",
    timeout=7200,
    env={"PYTHONUNBUFFERED": "1"},
    secrets={"HF_TOKEN": get_token()},
)
print(f"Job ID: {job_info.id}")
```
Key IC `script_args`:
- `--model_name_or_path` — any timm model or Transformers classification model (see the model table above)
- `--dataset_name` — the Hub dataset ID
- `--image_column_name` — column containing PIL images (default: `image`)
- `--label_column_name` — column containing class labels (default: `label`)
- `--push_to_hub` and `--hub_model_id` — for Hub persistence
- `--num_train_epochs` — 3-5 typical for classification (fewer than OD)
- `--per_device_train_batch_size` — 16-64 (classification models use less memory than OD)
- `--train_val_split` — fraction to split for validation (default 0.15); set if the dataset lacks a validation split
- `--max_train_samples` / `--max_eval_samples` — truncate for quick tests
Quick start — SAM/SAM2 Segmentation
```python
SAM_SCRIPT_ARGS = [
    "--model_name_or_path", "facebook/sam2.1-hiera-small",
    "--dataset_name", "merve/MicroMat-mini",
    "--prompt_type", "bbox",
    "--prompt_column_name", "prompt",
    "--output_dir", "sam2-finetuned",
    "--num_train_epochs", "30",
    "--per_device_train_batch_size", "4",
    "--learning_rate", "1e-5",
    "--logging_steps", "1",
    "--save_strategy", "epoch",
    "--save_total_limit", "2",
    "--remove_unused_columns", "False",
    "--dataloader_pin_memory", "False",
    "--push_to_hub",
    "--hub_model_id", "username/sam2-finetuned",
    "--do_train",
    "--report_to", "trackio",
]
```
```python
from huggingface_hub import HfApi, get_token

api = HfApi()
job_info = api.run_uv_job(
    script="scripts/sam_segmentation_training.py",
    script_args=SAM_SCRIPT_ARGS,
    flavor="t4-small",
    timeout=7200,
    env={"PYTHONUNBUFFERED": "1"},
    secrets={"HF_TOKEN": get_token()},
)
print(f"Job ID: {job_info.id}")
```
Key SAM `script_args`:
- `--model_name_or_path` — SAM or SAM2 model (see the model table above); the script auto-detects SAM vs SAM2
- `--dataset_name` — the Hub dataset ID (e.g., `merve/MicroMat-mini`)
- `--prompt_type` — `bbox` or `point`, the type of prompt in the dataset
- `--prompt_column_name` — column with JSON-encoded prompts (default: `prompt`)
- a dedicated bbox-column argument — alternative to the JSON prompt column
- a dedicated point-column argument — alternative to the JSON prompt column
- a mask-column argument — column with the ground-truth masks
- `--push_to_hub` and `--hub_model_id` — for Hub persistence
- `--num_train_epochs` — 20-30 typical for SAM fine-tuning
- `--per_device_train_batch_size` — 2-4 (SAM models use significant memory)
- vision/prompt encoder freeze flags — default: both frozen; only the mask decoder trains
- validation split fraction — default 0.1
Checking job status
MCP tool (if available):
```python
hf_jobs("ps")                                  # List all jobs
hf_jobs("logs", {"job_id": "your-job-id"})     # View logs
hf_jobs("inspect", {"job_id": "your-job-id"})  # Job details
```
Python API fallback:
```python
from huggingface_hub import HfApi

api = HfApi()
api.list_jobs()                         # List all jobs
api.get_job_logs(job_id="your-job-id")  # View logs
api.get_job(job_id="your-job-id")       # Job details
```
Common failure modes
OOM (CUDA out of memory)
Reduce `per_device_train_batch_size` (try 4, then 2), reduce the image size, or upgrade hardware.
Dataset format errors
Run scripts/dataset_inspector.py first. The training script auto-detects xyxy vs xywh, converts string categories to integer IDs, and adds `area` if missing. Ensure `bbox` contains 4-value coordinate lists in absolute pixels and `category` contains either integer IDs or string labels.
Hub push failures (401)
Verify: (1) job secrets include the token (see directive #2), (2) the script sets `training_args.hub_token` BEFORE creating the `Trainer`, (3) `--push_to_hub` is set, (4) `--hub_model_id` is correct, (5) the token has write permissions.
Job timeout
Increase the timeout (see the directive #5 table), reduce epochs or dataset size, or use a checkpoint strategy with `hub_strategy="every_save"`.
KeyError: 'test' (missing test split)
The object detection training script handles this gracefully — it falls back to an available validation split. Ensure you're using the latest scripts/object_detection_training.py.
Single-class dataset: "iteration over a 0-d tensor"
torchmetrics' MeanAveragePrecision returns scalar (0-d) tensors for per-class metrics when there is only one class. The template scripts/object_detection_training.py handles this by promoting those tensors to 1-d before iterating. Ensure you're using the latest template.
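If you adapt the metrics code yourself, the guard amounts to promoting the tensor before iterating (a sketch of the idea, not the template's exact code):

```python
import torch

def per_class_list(t: torch.Tensor) -> list:
    """torchmetrics yields a 0-d tensor for single-class per-class metrics; promote before iterating."""
    return torch.atleast_1d(t).tolist()

print(per_class_list(torch.tensor(0.42)))        # single class: one-element list, no crash
print(per_class_list(torch.tensor([0.4, 0.7])))  # multi-class: unchanged behavior
```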
Poor detection performance (mAP < 0.15)
Increase epochs (30-50), ensure 500+ images, check per-class mAP for imbalanced classes, try different learning rates (1e-5 to 1e-4), increase image size.
For comprehensive troubleshooting: see references/reliability_principles.md
Reference files
- scripts/object_detection_training.py — Production-ready object detection training script
- scripts/image_classification_training.py — Production-ready image classification training script (supports timm models)
- scripts/sam_segmentation_training.py — Production-ready SAM/SAM2 segmentation training script (bbox & point prompts)
- scripts/dataset_inspector.py — Validate dataset format for OD, classification, and SAM segmentation
- scripts/estimate_cost.py — Estimate training costs for any vision model (includes SAM/SAM2)
- references/object_detection_training_notebook.md — Object detection training workflow, augmentation strategies, and training patterns
- references/image_classification_training_notebook.md — Image classification training workflow with ViT, preprocessing, and evaluation
- references/finetune_sam2_trainer.md — SAM2 fine-tuning walkthrough with MicroMat dataset, DiceCE loss, and Trainer integration
- references/timm_trainer.md — Using timm models with HF Trainer (TimmWrapper, transforms, full example)
- references/hub_saving.md — Detailed Hub persistence guide and verification checklist
- references/reliability_principles.md — Failure prevention principles from production experience
External links