Use to select models to run locally with llama.cpp and GGUF on CPU, Mac Metal, CUDA, or ROCm. Covers finding GGUFs, quant selection, running servers, exact GGUF file lookup, conversion, and OpenAI-compatible local serving.
## Install the skill

```bash
npx skill4agent add huggingface/skills huggingface-local-models
```

## Key commands and URLs

- `llama-cli` / `llama-server` — llama.cpp's command-line runner and OpenAI-compatible server.
- `apps=llama.cpp` — Hub search filter that limits results to llama.cpp-compatible models.
- `https://huggingface.co/<repo>?local-app=llama.cpp` — model page with the ready-made llama.cpp run snippet.
- `.gguf` — the single-file model format that llama.cpp loads.
- `https://huggingface.co/api/models/<repo>/tree/main?recursive=true` — list a repo's files to find the exact GGUF filename (see the lookup example below).
- `llama-cli -hf <repo>:<QUANT>` / `llama-server -hf <repo>:<QUANT>` — download and run a quant straight from the Hub.
- `--hf-repo` / `--hf-file` — pin an exact repo and GGUF file.

## Install llama.cpp

macOS (Homebrew):

```bash
brew install llama.cpp
```

Windows (winget):

```bash
winget install llama.cpp
```

Or build from source:

```bash
git clone https://github.com/ggml-org/llama.cpp
cd llama.cpp
make
```

Authenticate with the Hugging Face Hub:

```bash
hf auth login
```
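A quick way to confirm the binaries are on your PATH before pulling any weights is a version check; this assumes current llama.cpp builds, whose `--version` flag prints the build info:

```bash
# Sanity-check the install: both binaries should print a build number and exit.
llama-cli --version
llama-server --version
```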
## Find GGUF models

Browse trending llama.cpp-compatible models:

https://huggingface.co/models?apps=llama.cpp&sort=trending

Search within a model family:

https://huggingface.co/models?search=Qwen3.6&apps=llama.cpp&sort=trending

Filter by parameter count (here: up to 24B):

https://huggingface.co/models?search=<term>&apps=llama.cpp&num_parameters=min:0,max:24B&sort=trending
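To see exactly which GGUF files a repo ships — and therefore which names work with `--hf-file` — the tree API listed above can be queried directly. A minimal sketch, assuming `curl` and `jq` are installed and using the Unsloth repo from the examples:

```bash
# List every file in the repo and keep only the GGUF quants/shards.
# The tree endpoint returns one JSON entry per file with a "path" field.
curl -s "https://huggingface.co/api/models/unsloth/Qwen3.6-35B-A3B-GGUF/tree/main?recursive=true" \
  | jq -r '.[].path' \
  | grep -i '\.gguf$'
```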
## Run a model

Run a quant directly from the Hub with either binary:

```bash
llama-cli -hf unsloth/Qwen3.6-35B-A3B-GGUF:UD-Q4_K_M
llama-server -hf unsloth/Qwen3.6-35B-A3B-GGUF:UD-Q4_K_M
```

Or pin the exact repo and file and set the context size:

```bash
llama-server \
  --hf-repo unsloth/Qwen3.6-35B-A3B-GGUF \
  --hf-file Qwen3.6-35B-A3B-UD-Q4_K_M.gguf \
  -c 4096
```
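Once `llama-server` is up (it listens on port 8080 by default, as in the curl example further down), it helps to wait until the model has finished loading before sending requests. A minimal sketch, assuming the default host/port and the server's `/health` endpoint:

```bash
# Poll the server until the model is loaded and it reports healthy.
until curl -sf http://localhost:8080/health > /dev/null; do
  echo "waiting for llama-server..."
  sleep 2
done
echo "server is ready"
```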
## Convert a model to GGUF

If a repo has no GGUF files, download the original weights, convert them with llama.cpp's conversion script, then quantize:

```bash
hf download <repo-without-gguf> --local-dir ./model-src
python convert_hf_to_gguf.py ./model-src \
  --outfile model-f16.gguf \
  --outtype f16
llama-quantize model-f16.gguf model-q4_k_m.gguf Q4_K_M
```
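Before serving the new file, a short offline generation confirms the quantized GGUF loads and produces tokens. A minimal smoke test; the prompt and token count here are arbitrary:

```bash
# Load the freshly quantized model from disk and generate a few tokens.
llama-cli -m model-q4_k_m.gguf -p "Hello, world" -n 32
```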
## OpenAI-compatible serving

`llama-server` exposes an OpenAI-compatible API (by default at http://localhost:8080):

```bash
llama-server -hf unsloth/Qwen3.6-35B-A3B-GGUF:UD-Q4_K_M
```
-H "Content-Type: application/json" \
-H "Authorization: Bearer no-key" \
-d '{
"messages": [
{"role": "user", "content": "Write a limerick about exception handling"}
]
}'?local-app=llama.cppUD-Q4_K_MQ4_K_MQ5_K_MQ6_KQ3_K_MQ4_K_SIQUD-*mmproj-*.ggufimatrixhttps://github.com/ggml-org/llama.cpphttps://huggingface.co/docs/hub/gguf-llamacpphttps://huggingface.co/docs/hub/main/local-appshttps://huggingface.co/docs/hub/agents-localhttps://huggingface.co/spaces/ggml-org/gguf-my-repo