onnx-webgpu-converter
Convert HuggingFace transformer models to ONNX format for browser inference with Transformers.js and WebGPU. Use when given a HuggingFace model link to convert to ONNX, when setting up optimum-cli for ONNX export, when quantizing models (fp16, q8, q4) for web deployment, when configuring Transformers.js with WebGPU acceleration, or when troubleshooting ONNX conversion errors. Triggers on mentions of ONNX conversion, Transformers.js, WebGPU inference, optimum export, model quantization for browser, or running ML models in the browser.
Source: jakerains/agentskills

Install with:

```bash
npx skill4agent add jakerains/agentskills onnx-webgpu-converter
```
ONNX WebGPU Model Converter
Convert any HuggingFace model to ONNX and run it in the browser with Transformers.js + WebGPU.
Workflow Overview
- Check if ONNX version already exists on HuggingFace
- Set up Python environment with optimum
- Export model to ONNX with optimum-cli
- Quantize for target deployment (WebGPU vs WASM)
- Upload to HuggingFace Hub (optional)
- Use in Transformers.js with WebGPU
Step 1: Check for Existing ONNX Models
Before converting, check if the model already has an ONNX version:
- Search `onnx-community/<model-name>` on HuggingFace Hub
- Check the model repo for an `onnx/` folder
- Browse https://huggingface.co/models?library=transformers.js (1200+ pre-converted)
If found, skip to Step 6.
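If you want to script this check, the huggingface_hub client can do it directly. A minimal sketch (the repo names below are placeholders):

```python
# Sketch: check for an existing ONNX conversion before exporting yourself.
from huggingface_hub import repo_exists, list_repo_files

original_repo = "org/model-name"          # placeholder original repo
onnx_repo = "onnx-community/model-name"   # placeholder pre-converted repo

# 1) Is there a pre-converted repo under onnx-community?
if repo_exists(onnx_repo):
    print(f"{onnx_repo} exists, skip to Step 6.")
# 2) Or does the original repo already ship an onnx/ folder?
elif any(f.startswith("onnx/") for f in list_repo_files(original_repo)):
    print(f"{original_repo} already ships ONNX weights.")
else:
    print("No ONNX version found, continue with Step 2.")
```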
Step 2: Environment Setup
```bash
# Create venv (recommended)
python -m venv onnx-env && source onnx-env/bin/activate

# Install optimum with ONNX support
pip install "optimum[onnx]" onnxruntime

# For GPU-accelerated export (optional)
pip install onnxruntime-gpu
```

Verify installation:

```bash
optimum-cli export onnx --help
```

Step 3: Export to ONNX
Basic Export (auto-detect task)
```bash
optimum-cli export onnx --model <model_id_or_path> ./output_dir/
```

With Explicit Task
```bash
optimum-cli export onnx \
  --model <model_id> \
  --task <task> \
  ./output_dir/
```

Common tasks: `text-generation`, `text-classification`, `feature-extraction`, `image-classification`, `automatic-speech-recognition`, `object-detection`, `image-segmentation`, `question-answering`, `token-classification`, `zero-shot-classification`

For decoder models, append `-with-past` for KV cache reuse (default behavior): `text-generation-with-past`, `text2text-generation-with-past`, `automatic-speech-recognition-with-past`
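The same export is also available from Python through optimum's ORTModel classes, which can be handy inside a conversion script. A minimal sketch (the model ID is only an example; `export=True` converts on load):

```python
# Sketch: export to ONNX via the optimum Python API instead of optimum-cli.
from optimum.onnxruntime import ORTModelForCausalLM
from transformers import AutoTokenizer

model_id = "Qwen/Qwen2.5-0.5B-Instruct"  # example model, swap in your own

# export=True runs the ONNX exporter on the fly
model = ORTModelForCausalLM.from_pretrained(model_id, export=True)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Save the ONNX graph, config, and tokenizer into one directory
model.save_pretrained("./output_dir/")
tokenizer.save_pretrained("./output_dir/")
```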
Full CLI Reference
| Flag | Description |
|---|---|
| `--model` | HuggingFace model ID or local path (required) |
| `--task` | Export task (auto-detected if on Hub) |
| `--opset` | ONNX opset version (default: auto) |
| `--device` | Export device, `cpu` (default) or `cuda` |
| `--optimize` | ONNX Runtime optimization level (O1-O4) |
| `--monolith` | Force single ONNX file (vs split encoder/decoder) |
| `--no-post-process` | Skip post-processing (e.g., decoder merging) |
| `--trust-remote-code` | Allow custom model code from Hub |
| `--pad_token_id` | Override pad token (needed for some models) |
| `--cache_dir` | Cache directory for downloaded models |
| `--batch_size` | Batch size for dummy inputs |
| `--sequence_length` | Sequence length for dummy inputs |
| `--framework` | Source framework (`pt` or `tf`) |
| `--atol` | Absolute tolerance for validation |
Optimization Levels
| Level | Description |
|---|---|
| O1 | Basic general optimizations |
| O2 | Basic + extended + transformer fusions |
| O3 | O2 + GELU approximation |
| O4 | O3 + mixed precision fp16 (GPU only, requires `--device cuda`) |
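Graph optimization can also be applied after export from Python. A minimal sketch using optimum's ORTOptimizer (directory names are placeholders; the O2 preset mirrors the table above):

```python
# Sketch: apply ONNX Runtime graph optimizations (O2) to an exported model.
from optimum.onnxruntime import ORTModelForSequenceClassification, ORTOptimizer
from optimum.onnxruntime.configuration import AutoOptimizationConfig

model = ORTModelForSequenceClassification.from_pretrained("./output_dir/")
optimizer = ORTOptimizer.from_pretrained(model)

# O2 = basic + extended + transformer fusions (see the table above)
config = AutoOptimizationConfig.O2()
optimizer.optimize(save_dir="./optimized_dir/", optimization_config=config)
```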
Step 4: Quantize for Web Deployment
Quantization Types for Transformers.js
| dtype | Precision | Best For | Size Reduction |
|---|---|---|---|
| `fp32` | Full 32-bit | Maximum accuracy | None (baseline) |
| `fp16` | Half 16-bit | WebGPU default quality | ~50% |
| `q8` | 8-bit | WASM default, good balance | ~75% |
| `q4` | 4-bit | Maximum compression | ~87% |
| `q4f16` | 4-bit weights, fp16 compute | WebGPU + small size | ~87% |
Using optimum-cli quantization
```bash
# Dynamic quantization (post-export)
optimum-cli onnxruntime quantize \
  --onnx_model ./output_dir/ \
  --avx512 \
  -o ./quantized_dir/
```

Using Python API for finer control
```python
from optimum.onnxruntime import ORTQuantizer, ORTModelForSequenceClassification
from optimum.onnxruntime.configuration import AutoQuantizationConfig

# Load the exported ONNX model and wrap it in a quantizer
model = ORTModelForSequenceClassification.from_pretrained("./output_dir/")
quantizer = ORTQuantizer.from_pretrained(model)

# Dynamic int8 quantization tuned for AVX512-VNNI CPUs
config = AutoQuantizationConfig.avx512_vnni(is_static=False, per_channel=False)
quantizer.quantize(save_dir="./quantized_dir/", quantization_config=config)
```
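To confirm quantization hasn't hurt accuracy too much, you can compare the fp32 and quantized graphs directly with onnxruntime. A sketch, assuming the default output file names (model.onnx and model_quantized.onnx):

```python
# Sketch: compare fp32 vs quantized logits on one input (assumes default file names).
import numpy as np
import onnxruntime as ort
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("./output_dir/")
enc = tok("hello world", return_tensors="np")

def run(path):
    sess = ort.InferenceSession(path)
    names = {i.name for i in sess.get_inputs()}
    # Only feed the inputs the graph actually declares
    return sess.run(None, {k: v for k, v in enc.items() if k in names})[0]

ref = run("./output_dir/model.onnx")
quant = run("./quantized_dir/model_quantized.onnx")
print("max abs diff:", np.abs(ref - quant).max())
```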
Producing Multiple dtype Variants for Transformers.js
To provide fp32, fp16, q8, and q4 variants (like `onnx-community` models), organize output as:

```
model_onnx/
├── onnx/
│   ├── model.onnx            # fp32
│   ├── model_fp16.onnx       # fp16
│   ├── model_quantized.onnx  # q8
│   └── model_q4.onnx         # q4
├── config.json
├── tokenizer.json
└── tokenizer_config.json
```
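One way to derive the q8 and fp16 files from the fp32 export is sketched below. This is a sketch, not the official onnx-community pipeline (the Transformers.js repo ships its own conversion script); it assumes the onnx, onnxruntime, and onnxconverter-common packages are installed:

```python
# Sketch: derive q8 and fp16 variants from the fp32 export
# (assumes onnx, onnxruntime, onnxconverter-common are installed).
import onnx
from onnxruntime.quantization import quantize_dynamic, QuantType
from onnxconverter_common import float16

# q8: dynamic int8 weight quantization
quantize_dynamic(
    "model_onnx/onnx/model.onnx",
    "model_onnx/onnx/model_quantized.onnx",
    weight_type=QuantType.QInt8,
)

# fp16: cast weights to half precision
fp32_model = onnx.load("model_onnx/onnx/model.onnx")
onnx.save(float16.convert_float_to_float16(fp32_model),
          "model_onnx/onnx/model_fp16.onnx")
```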
Step 5: Upload to HuggingFace Hub (Optional)

```bash
# Login
huggingface-cli login

# Upload
huggingface-cli upload <your-username>/<model-name>-onnx ./output_dir/

# Add transformers.js tag to model card for discoverability
```
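The same upload can be scripted with huggingface_hub (a sketch; the repo id is a placeholder):

```python
# Sketch: programmatic upload with huggingface_hub (repo id is a placeholder).
from huggingface_hub import HfApi

api = HfApi()  # reuses the token stored by `huggingface-cli login`
api.create_repo("your-username/model-name-onnx", exist_ok=True)
api.upload_folder(
    repo_id="your-username/model-name-onnx",
    folder_path="./output_dir/",
)
```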
Step 6: Use in Transformers.js with WebGPU
Install

```bash
npm install @huggingface/transformers
```

Basic Pipeline with WebGPU
```javascript
import { pipeline } from "@huggingface/transformers";

const pipe = await pipeline("task-name", "model-id-or-path", {
  device: "webgpu", // GPU acceleration
  dtype: "q4",      // Quantization level
});
const result = await pipe("input text");
```

Per-Module dtypes (encoder-decoder models)
Some models (Whisper, Florence-2) need different quantization per component:
```javascript
import { Florence2ForConditionalGeneration } from "@huggingface/transformers";

const model = await Florence2ForConditionalGeneration.from_pretrained(
  "onnx-community/Florence-2-base-ft",
  {
    dtype: {
      embed_tokens: "fp16",
      vision_encoder: "fp16",
      encoder_model: "q4",
      decoder_model_merged: "q4",
    },
    device: "webgpu",
  },
);
```

For detailed Transformers.js WebGPU usage patterns: See references/webgpu-usage.md
Troubleshooting
For conversion errors and common issues: See references/conversion-guide.md
Quick Fixes
- "Task not found": Use flag explicitly. For decoder models try
--tasktext-generation-with-past - "trust_remote_code": Add flag for custom model architectures
--trust-remote-code - Out of memory: Use and smaller
--device cpu--batch_size - Validation fails: Try or increase
--no-post-process--atol - Model not supported: Check supported architectures — 120+ architectures supported
- WebGPU fallback to WASM: Ensure browser supports WebGPU (Chrome 113+, Edge 113+)
Supported Task → Pipeline Mapping
| Task | Transformers.js Pipeline | Example Model |
|---|---|---|
| text-classification | `pipeline('text-classification')` | distilbert-base-uncased-finetuned-sst-2 |
| text-generation | `pipeline('text-generation')` | Qwen2.5-0.5B-Instruct |
| feature-extraction | `pipeline('feature-extraction')` | mxbai-embed-xsmall-v1 |
| automatic-speech-recognition | `pipeline('automatic-speech-recognition')` | whisper-tiny.en |
| image-classification | `pipeline('image-classification')` | mobilenetv4_conv_small |
| object-detection | `pipeline('object-detection')` | detr-resnet-50 |
| image-segmentation | `pipeline('image-segmentation')` | segformer-b0 |
| zero-shot-image-classification | `pipeline('zero-shot-image-classification')` | clip-vit-base-patch32 |
| depth-estimation | `pipeline('depth-estimation')` | depth-anything-small |
| translation | `pipeline('translation')` | nllb-200-distilled-600M |
| summarization | `pipeline('summarization')` | bart-large-cnn |