Found 3 Skills
Operate LM Studio's `lms` CLI and local/remote LM Studio servers: model discovery, server status checks, model loading, endpoint smoke tests, and downstream OpenAI-compatible wiring. Use when the user mentions LM Studio, `lms`, a local model server, `/v1/models`, or a remote LM Studio host, wants to connect another tool to LM Studio, or simply asks to test a local OpenAI-compatible endpoint or to choose the correct loaded-model identifier. Triggers on: lmstudio, lm studio, lms, local model server, LM Studio API, LM Studio endpoint, /v1/models, connect Strix to LM Studio, load model in LM Studio.
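Picking the correct loaded-model identifier from a `/v1/models` response can be sketched as follows. This is a minimal illustration, assuming the standard OpenAI-compatible response shape that LM Studio's local server returns; the model ids in the sample payload and the substring-based embedding filter are assumptions for demonstration, not LM Studio behavior.

```python
import json

# Illustrative /v1/models response (OpenAI-compatible list shape);
# the model ids here are examples, not real server output.
sample = json.loads("""
{"object": "list", "data": [
  {"id": "qwen2.5-7b-instruct", "object": "model"},
  {"id": "text-embedding-nomic-embed-text-v1.5", "object": "model"}
]}
""")

def chat_model_ids(models_response):
    """Return ids of loaded models, skipping embedding models
    (heuristic: filter ids containing the word 'embedding')."""
    return [m["id"] for m in models_response["data"]
            if "embedding" not in m["id"]]

print(chat_model_ids(sample))  # -> ['qwen2.5-7b-instruct']
```

The returned id is what downstream OpenAI-compatible clients pass as the `model` parameter.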
Quickly install and deploy vLLM, start serving a small LLM, and test the OpenAI-compatible API.
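The quick-start above can be sketched as the following shell commands. This is a deployment sketch, not a definitive setup: the model name `Qwen/Qwen2.5-0.5B-Instruct` is an example (any Hugging Face chat model id works), and 8000 is vLLM's default port.

```shell
# Install vLLM and serve a small instruct model.
pip install vllm
vllm serve Qwen/Qwen2.5-0.5B-Instruct --port 8000

# Smoke-test the OpenAI-compatible endpoints from another terminal.
curl -s http://localhost:8000/v1/models
curl -s http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "Qwen/Qwen2.5-0.5B-Instruct",
       "messages": [{"role": "user", "content": "Say hello"}]}'
```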
Configure a Mac mini as a reliable local LLM server with remote access, observability, and power-safe operation.
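Power-safe operation on an always-on Mac mini can be sketched with `pmset`. The specific values below are assumptions to illustrate the idea, not the skill's prescribed configuration; `-c` applies settings on AC power, `-a` on all power sources.

```shell
# Keep the machine awake while on AC power (display may still sleep).
sudo pmset -c sleep 0 displaysleep 10 disksleep 0

# Restart automatically after a power failure.
sudo pmset -a autorestart 1

# Wake on network access, so remote SSH sessions can reach it.
sudo pmset -a womp 1
```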