# localLLM

Local LLM operations with Ollama on Apple Silicon, including setup, model pulls, chat launchers, benchmarks, and diagnostics.
## Installation

```sh
npx skill4agent add bobmatnyc/claude-mpm-skills local-llm-ops
```

The installed skill is named `localLLM`; its first-time bootstrap script is `./setup_chatbot.sh`.
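The bootstrap performed by `setup_chatbot.sh` can be sketched roughly as below. This is a hedged illustration of the documented steps (Homebrew install, service start, API check, first model pull), not the script's actual contents; the `have` helper and the grouping into a function are assumptions.

```sh
# helper: is a command available on PATH? (assumption, not part of the skill)
have() { command -v "$1" >/dev/null 2>&1; }

setup_ollama() {
  have ollama || brew install ollama            # assumes Homebrew on macOS
  brew services start ollama                    # run Ollama as a background service
  curl -fsS http://localhost:11434/api/version  # sanity-check the local API
  ollama pull mistral                           # fetch the default model
}

# setup_ollama   # uncomment on a macOS machine with Homebrew installed
```

On success, the `curl` call prints a small JSON document with the Ollama version, which confirms the service is listening before the model pull starts.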
## Setup

Install and start Ollama, run the bootstrap script, then verify the local API:

```sh
brew install ollama
brew services start ollama
./setup_chatbot.sh
curl http://localhost:11434/api/version
```

Pull a first model:

```sh
ollama pull mistral
```

## Chat launchers

The chat launcher is `./chat` (see `./chat.py`). Running `./install_aliases.sh` adds the shell aliases `llm`, `llm-code`, and `llm-fast`. The launcher takes a template (`-t`) and a model (`-m`):

```sh
./chat -t coding -m codellama:70b
./chat -t creative -m llama3.1:70b
```
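A minimal sketch of what a launcher like `chat.py` might do for one turn, assuming it talks to Ollama's documented `/api/chat` endpoint. The `TEMPLATES` mapping and the function names here are hypothetical illustrations of the `-t`/`-m` flags, not the skill's real implementation.

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434"  # endpoint from the docs above

# Hypothetical system prompts for the -t templates (assumption)
TEMPLATES = {
    "coding": "You are a precise coding assistant.",
    "creative": "You are an imaginative writing partner.",
    "analytical": "You are a rigorous analyst.",
}

def build_payload(template: str, model: str, user_msg: str) -> dict:
    """Build an Ollama /api/chat request body for a template and model."""
    return {
        "model": model,
        "stream": False,
        "messages": [
            {"role": "system", "content": TEMPLATES[template]},
            {"role": "user", "content": user_msg},
        ],
    }

def chat_once(template: str, model: str, user_msg: str) -> str:
    """Send one chat turn; returns the assistant's reply text."""
    req = urllib.request.Request(
        f"{OLLAMA_URL}/api/chat",
        data=json.dumps(build_payload(template, model, user_msg)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["message"]["content"]

# chat_once("coding", "codellama:70b", "hi")  # requires a running Ollama server
```

The `stream: False` field asks Ollama for a single JSON response instead of a token stream, which keeps a one-shot sketch simple; a real interactive launcher would likely stream.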
An analytical template is also available:

```sh
./chat -t analytical
```

## Benchmarks

Run the benchmark suite with `./scripts/run_benchmarks.sh`, which drives `bench_ollama.py`. The prompts and the models under test are configured in `benchmarks/prompts.yaml` and `benchmarks/models.yaml`.

## Diagnostics

`./diagnose.sh` checks the local installation. If it reports problems, re-run `./setup_chatbot.sh`, confirm `ollama` is installed, and make sure at least one model is present (`ollama pull mistral`).

Key paths: `.venv` (Python virtualenv), `~/.localllm/`, the Ollama API endpoint `http://localhost:11434`, and `toolchains/universal/infrastructure/docker`.
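The core loop of a runner like `bench_ollama.py` can be sketched as follows. This is an assumption about its shape, not its actual code: the generate call is injected as a parameter so the timing logic can run without a live server (a real runner would POST each prompt to `http://localhost:11434`), and words-per-second stands in for whatever throughput metric the suite really reports.

```python
import time
from typing import Callable, Iterable

def bench(models: Iterable[str], prompts: Iterable[str],
          generate: Callable[[str, str], str]) -> list[dict]:
    """Time each (model, prompt) pair and return per-run stats.

    `generate(model, prompt) -> reply` is supplied by the caller, so the
    loop itself stays testable offline.
    """
    rows = []
    for model in models:
        for prompt in prompts:
            start = time.perf_counter()
            reply = generate(model, prompt)
            elapsed = time.perf_counter() - start
            words = len(reply.split())          # crude proxy for tokens
            rows.append({
                "model": model,
                "prompt": prompt,
                "seconds": elapsed,
                "words": words,
                "words_per_sec": words / elapsed if elapsed > 0 else 0.0,
            })
    return rows
```

In the real suite, the model and prompt lists would come from `benchmarks/models.yaml` and `benchmarks/prompts.yaml` rather than being passed in directly.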