Access and interact with Large Language Models from the command line using Simon Willison's llm CLI tool. Supports OpenAI, Anthropic, Gemini, Llama, and dozens of other models via plugins. Features include chat sessions, embeddings, structured data extraction with schemas, prompt templates, conversation logging, and tool use. This skill is triggered when the user says things like "run a prompt with llm", "use the llm command", "call an LLM from the command line", "set up llm API keys", "install llm plugins", "create embeddings", or "extract structured data from text".
npx skill4agent add seckatie/katies-ai-skills llm

# Run a prompt
llm "Your prompt here"
# Use a specific model
llm -m claude-4-opus "Your prompt"
# Chat mode
llm chat -m gpt-4.1
# With attachments (images, audio, video)
llm "describe this" -a image.jpg
# Pipe content
cat file.py | llm -s "Explain this code"

# Set API keys
llm keys set openai
llm keys set anthropic
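Instead of storing keys with `llm keys set`, llm will also pick up the standard provider environment variables; the exact variable name is provider-specific (check each plugin's docs), and `OPENAI_API_KEY` shown here is the OpenAI one:

```shell
# Alternative: supply a key via an environment variable instead of `llm keys set`
# (variable names are provider-specific; OPENAI_API_KEY is OpenAI's)
export OPENAI_API_KEY="your-key-here"
```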
llm keys set gemini

# Install plugins for more models
llm install llm-anthropic
llm install llm-gemini
llm install llm-ollama
llm plugins

# Start chat with context
llm chat -m gpt-4.1 -s "You are a helpful coding assistant"
# Continue a previous conversation
llm -c "Follow up question"

# Analyze code
cat script.py | llm "Review this code for bugs"
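When piping several files at once, it can help to prepend an explicit instruction so the model knows what the concatenated text is. A minimal sketch (the directory and file names here are hypothetical):

```shell
# Build a single prompt payload from multiple files (hypothetical file names)
mkdir -p notes
printf '# One\nFirst note.\n' > notes/a.md
printf '# Two\nSecond note.\n' > notes/b.md
{ printf 'Summarize these documents:\n\n'; cat notes/*.md; } > prompt.txt
```

Then send it with `cat prompt.txt | llm -m gpt-4.1`.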
# Process multiple files
cat *.md | llm "Summarize these documents"

# Extract data with schema
llm -m gpt-4.1 "Extract person info" -a photo.jpg --schema name,age,occupation

# List templates
llm templates
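Templates are YAML files. A sketch of what a `summarize` template might contain, using llm's `system`/`prompt` keys and the `$input` placeholder for piped content (this particular file is an illustration, not one shipped with the skill):

```yaml
# Hypothetical summarize.yaml: system prompt plus a prompt with the $input placeholder
system: You are a concise technical summarizer.
prompt: |
  Summarize the following in three bullet points:
  $input
```

You can also save a template directly from a prompt, e.g. `llm -s 'You are a concise technical summarizer.' --save summarize`.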
# Use a template
llm -t summarize < article.txt

# Use with a transcript
cat transcript.txt | llm -t templates/audio-to-article.yaml
# Or convert an SRT subtitle file to plain text first
python3 ../parakeet/srt_to_text.py audio.srt | llm -t templates/audio-to-article.yaml
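`srt_to_text.py` belongs to the sibling parakeet skill; if it is not available, stripping an SRT file down to plain text is simple enough to improvise. A rough stand-in (not the real script) that drops cue numbers, timestamps, and blank lines:

```shell
# Hypothetical stand-in for srt_to_text.py: keep only the subtitle text
cat > sample.srt <<'EOF'
1
00:00:00,000 --> 00:00:02,000
Hello world.

2
00:00:02,000 --> 00:00:04,000
Second line.
EOF
grep -Ev '^[0-9]+$|-->|^$' sample.srt > transcript.txt
```

The resulting transcript can then be piped into the template just like `transcript.txt` above.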