# Local LLM Ops (Ollama)

## Overview
Your repo provides a full local LLM toolchain on Apple Silicon: setup scripts, a rich CLI chat launcher, benchmarks, and diagnostics. The operational path is: install Ollama, ensure the service is running, initialize the venv, pull models, then launch chat or benchmarks.
## Quick Start
```bash
./setup_chatbot.sh
./chatllm
```

If no models are present:

```bash
ollama pull mistral
```

## Setup Checklist
- Install Ollama: `brew install ollama`
- Start the service: `brew services start ollama`
- Run setup: `./setup_chatbot.sh`
- Verify the service: `curl http://localhost:11434/api/version`
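The last two checklist steps can race: `brew services start` returns before the API is actually listening. A small readiness poll closes that gap (a sketch; `wait_for_api`, its URL argument, and the retry count are illustrative, not part of the repo's scripts):

```shell
# Poll an HTTP endpoint until it answers or the retries run out.
# wait_for_api URL [TRIES] -- returns 0 as soon as the API responds.
wait_for_api() {
    local url="$1" tries="${2:-10}" i
    for i in $(seq 1 "$tries"); do
        curl -sf "$url" >/dev/null && return 0
        sleep 1
    done
    return 1
}
```

For example, `wait_for_api http://localhost:11434/api/version && ./chatllm` only launches chat once the service answers.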
## Chat Launchers

- `./chatllm` (primary launcher)
- `./chat` or `./chat.py` (alternate launchers)
- Aliases: run `./install_aliases.sh`, then use `llm`, `llm-code`, `llm-fast`

Task modes:

```bash
./chat -t coding -m codellama:70b
./chat -t creative -m llama3.1:70b
./chat -t analytical
```

## Benchmark Workflow
Benchmarks are scripted in `scripts/run_benchmarks.sh`:

```bash
./scripts/run_benchmarks.sh
```

This runs `bench_ollama.py` with:

- `benchmarks/prompts.yaml`
- `benchmarks/models.yaml`
- Multiple runs and max token limits
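Because each model is exercised over multiple runs, per-model numbers are means, not single samples. A minimal sketch of that aggregation step in awk (the `avg` helper is hypothetical; `bench_ollama.py` does its own reporting):

```shell
# avg N1 N2 ... -- arithmetic mean of the given per-run timings.
avg() {
    printf '%s\n' "$@" | awk '{ s += $1; n += 1 } END { if (n) printf "%.2f\n", s / n }'
}

avg 12.1 11.8 12.4   # → 12.10 (mean wall-clock seconds over three runs)
```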
## Diagnostics
Run the built-in diagnostic script when setup fails:

```bash
./diagnose.sh
```

Common fixes:

- Re-run `./setup_chatbot.sh`
- Ensure `ollama` is in PATH
- Pull at least one model: `ollama pull mistral`
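The three fixes above map to three concrete checks. A sketch of what a diagnostic pass might look like (an assumption about what `./diagnose.sh` covers; the script itself is authoritative):

```shell
# check_ollama [API_URL] -- verify binary, service, and models in turn.
check_ollama() {
    local api="${1:-http://localhost:11434}"
    command -v ollama >/dev/null 2>&1 \
        || { echo "ollama not in PATH: brew install ollama"; return 1; }
    curl -sf "$api/api/version" >/dev/null \
        || { echo "API down at $api: brew services start ollama"; return 1; }
    ollama list 2>/dev/null | tail -n +2 | grep -q . \
        || { echo "no models pulled: ollama pull mistral"; return 1; }
    echo "all checks passed"
}
```

Each failure message names the matching fix, so the first line of output is the next command to run.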
## Operational Notes
- Virtualenv lives in `.venv`
- Chat configs and sessions live under `~/.localllm/`
- Ollama API runs at `http://localhost:11434`
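These locations can be spot-checked from a shell. In this sketch, `ops_status` is a made-up helper and `OLLAMA_URL` an illustrative override; the paths and port come straight from the list above:

```shell
# ops_status -- report whether each operational location exists/answers.
ops_status() {
    local p
    for p in .venv "$HOME/.localllm"; do
        if [ -e "$p" ]; then echo "present: $p"; else echo "missing: $p"; fi
    done
    if curl -sf "${OLLAMA_URL:-http://localhost:11434}/api/version" >/dev/null; then
        echo "API: up"
    else
        echo "API: down"
    fi
}
```

Running `ops_status` from the repo root reports which of the three pieces are in place.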
## Related Skills

- `toolchains/universal/infrastructure/docker`