# huggingface-community-evals
Run evaluations for Hugging Face Hub models using inspect-ai and lighteval on local hardware. Use for backend selection, local GPU evals, and choosing between vLLM / Transformers / accelerate. Not for HF Jobs orchestration, model-card PRs, .eval_results publication, or community-evals automation.
Source: huggingface/skills

Install:

```bash
npx skill4agent add huggingface/skills huggingface-community-evals
```
## Overview
This skill is for running evaluations against models on the Hugging Face Hub on local hardware.
It covers:

- `inspect-ai` with local inference
- `lighteval` with local inference
- choosing between `vllm`, Hugging Face Transformers, and `accelerate`
- smoke tests, task selection, and backend fallback strategy
It does not cover:

- Hugging Face Jobs orchestration
- model-card or `model-index` edits
- README table extraction
- Artificial Analysis imports
- `.eval_results` generation or publishing
- PR creation or community-evals automation
If the user wants to run the same eval remotely on Hugging Face Jobs, hand off to the `hugging-face-jobs` skill and pass it one of the local scripts in this skill.

If the user wants to publish results into the community evals workflow, stop after generating the evaluation run and hand off that publishing step to `~/code/community-evals`.

All paths below are relative to the directory containing this `SKILL.md`.
## When To Use Which Script

| Use case | Script |
|---|---|
| Local eval via Inference Providers | `scripts/inspect_eval_uv.py` |
| Local GPU eval with `inspect-ai` | `scripts/inspect_vllm_uv.py` |
| Local GPU eval with `lighteval` | `scripts/lighteval_vllm_uv.py` |
| Extra command patterns | `examples/USAGE_EXAMPLES.md` |
## Prerequisites

- Prefer `uv run` for local execution.
- Set `HF_TOKEN` for gated/private models.
- For local GPU runs, verify GPU access before starting:

```bash
uv --version
printenv HF_TOKEN >/dev/null
nvidia-smi
```

If `nvidia-smi` is unavailable, either:

- use `scripts/inspect_eval_uv.py` for lighter provider-backed evaluation, or
- hand off to the `hugging-face-jobs` skill if the user wants remote compute.
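The fallback decision above can be sketched as a small preflight helper. This is an illustration, not part of the skill's scripts; the function name `pick_eval_script` is hypothetical, while the script paths are the ones this skill ships.

```shell
# Illustrative preflight: choose the local script based on GPU availability.
# pick_eval_script is a hypothetical helper name.
pick_eval_script() {
  if command -v nvidia-smi >/dev/null 2>&1 && nvidia-smi >/dev/null 2>&1; then
    echo "scripts/inspect_vllm_uv.py"   # GPU visible: local vLLM path
  else
    echo "scripts/inspect_eval_uv.py"   # no GPU: provider-backed path
  fi
}

pick_eval_script
```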
## Core Workflow

1. Choose the evaluation framework.
   - Use `inspect-ai` when you want explicit task control and inspect-native flows.
   - Use `lighteval` when the benchmark is naturally expressed as a lighteval task string, especially leaderboard-style tasks.
2. Choose the inference backend.
   - Prefer `vllm` for throughput on supported architectures.
   - Use Hugging Face Transformers (`--backend hf`) or `accelerate` as compatibility fallbacks.
3. Start with a smoke test.
   - `inspect-ai`: add `--limit 10` or similar.
   - `lighteval`: add `--max-samples 10`.
4. Scale up only after the smoke test passes.
5. If the user wants remote execution, hand off to `hugging-face-jobs` with the same script and arguments.
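The smoke-test step can be captured in a tiny helper that maps each framework to its sampling flag. A sketch only: `smoke_args` is a hypothetical name, and the flags are the ones documented above.

```shell
# Hypothetical helper: map a framework name to its smoke-test flag.
smoke_args() {
  case "$1" in
    inspect-ai) echo "--limit 10" ;;        # inspect-ai caps samples with --limit
    lighteval)  echo "--max-samples 10" ;;  # lighteval uses --max-samples
    *)          return 1 ;;                 # unknown framework
  esac
}

smoke_args inspect-ai   # prints: --limit 10
```

Usage, for example: `uv run scripts/inspect_vllm_uv.py --model "$MODEL" --task gsm8k $(smoke_args inspect-ai)`.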
## Quick Start

### Option A: inspect-ai with the local Inference Providers path

Best when the model is already supported by Hugging Face Inference Providers and you want the lowest local setup overhead.

```bash
uv run scripts/inspect_eval_uv.py \
  --model meta-llama/Llama-3.2-1B \
  --task mmlu \
  --limit 20
```

Use this path when:

- you want a quick local smoke test
- you do not need direct GPU control
- the task already exists in `inspect-evals`
### Option B: inspect-ai on Local GPU

Best when you need to load the Hub model directly, use `vllm`, or fall back to Transformers for unsupported architectures.

Local GPU:

```bash
uv run scripts/inspect_vllm_uv.py \
  --model meta-llama/Llama-3.2-1B \
  --task gsm8k \
  --limit 20
```

Transformers fallback:

```bash
uv run scripts/inspect_vllm_uv.py \
  --model microsoft/phi-2 \
  --task mmlu \
  --backend hf \
  --trust-remote-code \
  --limit 20
```

### Option C: lighteval on Local GPU
Best when the task is naturally expressed as a `lighteval` task string, especially Open LLM Leaderboard style benchmarks.

Local GPU:

```bash
uv run scripts/lighteval_vllm_uv.py \
  --model meta-llama/Llama-3.2-3B-Instruct \
  --tasks "leaderboard|mmlu|5,leaderboard|gsm8k|5" \
  --max-samples 20 \
  --use-chat-template
```

`accelerate` fallback:

```bash
uv run scripts/lighteval_vllm_uv.py \
  --model microsoft/phi-2 \
  --tasks "leaderboard|mmlu|5" \
  --backend accelerate \
  --trust-remote-code \
  --max-samples 20
```

## Remote Execution Boundary
This skill intentionally stops at local execution and backend selection.

If the user wants to:

- run these scripts on Hugging Face Jobs
- pick remote hardware
- pass secrets to remote jobs
- schedule recurring runs
- inspect / cancel / monitor jobs

then switch to the `hugging-face-jobs` skill and pass it one of these scripts plus the chosen arguments.

## Task Selection
`inspect-ai` tasks include `mmlu`, `gsm8k`, `hellaswag`, `arc_challenge`, `truthfulqa`, `winogrande`, and `humaneval`.

`lighteval` uses task strings of the form `suite|task|num_fewshot`, for example `leaderboard|mmlu|5`, `leaderboard|gsm8k|5`, `leaderboard|arc_challenge|25`, and `lighteval|hellaswag|0`.

Multiple tasks can be comma-separated in `--tasks`.
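Since `--tasks` takes a comma-separated list, a small join helper can assemble it from individual `suite|task|num_fewshot` strings. A sketch: `join_tasks` is a hypothetical name, not part of this skill.

```shell
# Hypothetical helper: join task strings with commas for lighteval's --tasks.
join_tasks() {
  local IFS=,
  echo "$*"   # "$*" joins positional args with the first character of IFS
}

join_tasks "leaderboard|mmlu|5" "leaderboard|gsm8k|5"
# prints: leaderboard|mmlu|5,leaderboard|gsm8k|5
```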
## Backend Selection

- Prefer `inspect_vllm_uv.py --backend vllm` for fast GPU inference on supported architectures.
- Use `inspect_vllm_uv.py --backend hf` when `vllm` does not support the model.
- Prefer `lighteval_vllm_uv.py --backend vllm` for throughput on supported models.
- Use `lighteval_vllm_uv.py --backend accelerate` as the compatibility fallback.
- Use `inspect_eval_uv.py` when Inference Providers already cover the model and you do not need direct GPU control.
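The fallback order can be made explicit. This sketch only prints the command lines in the order they would be attempted, rather than executing them; `backend_attempts` is an illustrative name.

```shell
# Illustrative: emit the backend attempts in fallback order, without executing.
backend_attempts() {
  model="$1"; task="$2"
  echo "uv run scripts/inspect_vllm_uv.py --model $model --task $task --backend vllm"
  echo "uv run scripts/inspect_vllm_uv.py --model $model --task $task --backend hf --trust-remote-code"
}

backend_attempts microsoft/phi-2 mmlu
```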
## Hardware Guidance

| Model size | Suggested local hardware |
|---|---|
| smaller | consumer GPU / Apple Silicon / small dev GPU |
| mid-size | stronger local GPU |
| larger | high-memory local GPU, or hand off to `hugging-face-jobs` |

For smoke tests, prefer cheaper local runs plus `--limit` or `--max-samples`.

## Troubleshooting
- CUDA or vLLM OOM:
  - reduce `--batch-size`
  - reduce `--gpu-memory-utilization`
  - switch to a smaller model for the smoke test
  - if necessary, hand off to `hugging-face-jobs`
- Model unsupported by `vllm`:
  - switch to `--backend hf` for `inspect-ai`
  - switch to `--backend accelerate` for `lighteval`
- Gated/private repo access fails:
  - verify `HF_TOKEN`
- Custom model code required:
  - add `--trust-remote-code`
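The OOM mitigations above form an escalation ladder, which can be sketched as follows. The helper name and the numeric flag values are illustrative assumptions, not defaults of these scripts.

```shell
# Illustrative OOM escalation ladder matching the troubleshooting order above.
# The numeric values are assumptions, not defaults of these scripts.
oom_retry_flags() {
  case "$1" in
    1) echo "--batch-size 1" ;;
    2) echo "--batch-size 1 --gpu-memory-utilization 0.7" ;;
    *) echo "" ;;   # step 3+: smaller model, or hand off to hugging-face-jobs
  esac
}

oom_retry_flags 2   # prints: --batch-size 1 --gpu-memory-utilization 0.7
```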
## Examples

See:

- `examples/USAGE_EXAMPLES.md` for local command patterns
- `scripts/inspect_eval_uv.py`
- `scripts/inspect_vllm_uv.py`
- `scripts/lighteval_vllm_uv.py`