# LLM CLI Skill (llm-cli)

## Purpose

This skill enables seamless interaction with multiple LLM providers (OpenAI, Anthropic, Google Gemini, Ollama) through the `llm` CLI tool. It processes textual and multimedia input, with support for both one-off execution and an interactive conversation mode.

## When to Use This Skill
Trigger this skill when:
- User wants to process text/files with an LLM
- User needs to choose between multiple available LLMs
- User wants interactive conversation with an LLM
- User needs to pipe content through an LLM for processing
- User wants to use specific model aliases (e.g., "claude-opus", "gpt-4o")
Example user requests:
- "Process this file with Claude"
- "Analyze this text with the fastest available model"
- "Start an interactive chat with OpenAI"
- "Use Gemini to summarize this document"
- "Chat mode with my local Ollama instance"
## Supported Providers & Models

### OpenAI
- Latest Models (2025):
  - `gpt-5` - Most advanced model
  - `gpt-4-1` / `gpt-4.1` - Latest high-performance
  - `gpt-4-1-mini` / `gpt-4.1-mini` - Smaller, faster version
  - `gpt-4o` - Multimodal omni model
  - `gpt-4o-mini` - Lightweight multimodal
  - `o3` - Advanced reasoning
  - `o3-mini` / `o3-mini-high` - Reasoning variants
- Aliases: `openai`, `gpt`

### Anthropic
- Latest Models (2025):
  - `claude-sonnet-4.5` - Latest flagship model
  - `claude-opus-4.1` - Complex task specialist
  - `claude-opus-4` - Coding specialist
  - `claude-sonnet-4` - Balanced performance
  - `claude-3.5-sonnet` - Previous generation
  - `claude-3.5-haiku` - Fast & efficient
- Aliases: `anthropic`, `claude`

### Google Gemini
- Latest Models (2025):
  - `gemini-2.5-pro` - Most advanced
  - `gemini-2.5-flash` - Default fast model
  - `gemini-2.5-flash-lite` - Speed optimized
  - `gemini-2.0-flash` - Previous generation
  - `gemini-2.5-computer-use` - UI interaction
- Aliases: `google`, `gemini`

### Ollama (Local)
- Popular Models:
  - `llama3.1` - Meta's latest (8b, 70b, 405b)
  - `llama3.2` - Compact versions (1b, 3b)
  - `mistral-large-2` - Mistral flagship
  - `deepseek-coder` - Code specialist
  - `starcoder2` - Code models
- Aliases: `ollama`, `local`

## Workflow Overview
```
User Input (with optional model)
        ↓
Check Available Providers (env vars)
        ↓
Determine Model to Use:
  - If specified: Use provided model
  - If ambiguous: Show selection menu
  - Otherwise: Use last remembered choice
        ↓
Load/Create Config (~/.claude/llm-skill-config.json)
        ↓
Detect Input Type:
  - stdin/piped
  - file path
  - inline text
        ↓
Execute llm CLI:
  - Non-interactive: Process & return
  - Interactive: Keep conversation loop
        ↓
Save Model Choice to Config
```

## Features
### 1. Provider Detection

- Checks environment variables for API keys
- Suggests available LLM providers on first run
- Detects: `OPENAI_API_KEY`, `ANTHROPIC_API_KEY`, `GOOGLE_API_KEY`, `OLLAMA_BASE_URL`
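The detection step above can be sketched as follows. Only the environment variable names come from this document; the function name `detect_providers` matches the Key Functions section below, but the table and return shape are illustrative assumptions, not the skill's actual code.

```python
import os

# Provider -> environment variable that enables it (variables per this document).
ENV_VARS = {
    "openai": "OPENAI_API_KEY",
    "anthropic": "ANTHROPIC_API_KEY",
    "google": "GOOGLE_API_KEY",
    "ollama": "OLLAMA_BASE_URL",
}

def detect_providers(env=None):
    """Return a dict of available providers, keyed by provider name."""
    env = os.environ if env is None else env
    return {name: var for name, var in ENV_VARS.items() if env.get(var)}
```

On first run the keys of this dict would drive the "suggest available providers" prompt.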
### 2. Model Selection

- Accepts model aliases (`gpt-4o`, `claude-opus`, `gemini-2.5-pro`)
- Accepts provider aliases (`openai`, `anthropic`, `google`, `ollama`)
- Interactive menu when selection is ambiguous
- Remembers last used model in `~/.claude/llm-skill-config.json`
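A minimal sketch of alias resolution, assuming alias and default tables like the ones below (the entries mirror this document's model lists, but the exact mappings in the real `models.py` may differ):

```python
# Hypothetical alias tables; the real models.py may map differently.
MODEL_ALIASES = {
    "claude-opus": "claude-opus-4.1",
    "claude-sonnet": "claude-sonnet-4.5",
    "gemini": "gemini-2.5-flash",
}
PROVIDER_DEFAULTS = {
    "openai": "gpt-5",
    "anthropic": "claude-sonnet-4.5",
    "google": "gemini-2.5-flash",
    "ollama": "llama3.1",
}

def resolve_model(name, last_model=None):
    """Map a model/provider alias to a concrete model id."""
    if name in MODEL_ALIASES:
        return MODEL_ALIASES[name]
    if name in PROVIDER_DEFAULTS:       # provider alias -> its default model
        return PROVIDER_DEFAULTS[name]
    if name:                            # already a concrete model id
        return name
    return last_model                   # nothing specified: reuse remembered choice
```

When `resolve_model` returns `None` (no name, no remembered choice), the skill would fall through to the interactive selection menu.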
### 3. Input Processing

- Accepts stdin/piped input
- Processes file paths (detects: .txt, .md, .json, .pdf, images)
- Handles inline text prompts
- Supports multimedia files with appropriate encoding
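The input-type detection could look roughly like this sketch, which handles only the text-file case (a real `input_handler.py` would also branch on .pdf and image extensions and apply the appropriate encoding; the function name follows the Key Functions section below):

```python
import sys
from pathlib import Path

TEXT_EXTS = {".txt", ".md", ".json"}

def load_input(source=None):
    """Resolve the skill's input: piped stdin, a readable file path, or inline text."""
    if source is None and not sys.stdin.isatty():
        return sys.stdin.read()                 # piped input
    path = Path(source) if source else None
    if path and path.is_file() and path.suffix.lower() in TEXT_EXTS:
        return path.read_text()                 # readable text file
    return source or ""                         # fall back to inline text
```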
### 4. Execution Modes

#### Non-Interactive (Default)

```bash
llm "Your prompt here"
llm --model gpt-4o "Process this text"
llm < file.txt
cat document.md | llm "Summarize"
```

#### Interactive Mode

```bash
llm --interactive
llm -i
llm --model claude-opus --interactive
```

### 5. Configuration
Persistent config location: `~/.claude/llm-skill-config.json`

```json
{
  "last_model": "claude-sonnet-4.5",
  "default_provider": "anthropic",
  "available_providers": ["openai", "anthropic", "google", "ollama"]
}
```
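The "load/create config" and "save model choice" steps of the workflow might be implemented along these lines; the config path and the `last_model` key come from this document, while the helper names are assumptions:

```python
import json
from pathlib import Path

# Path per this document's config section.
CONFIG_PATH = Path.home() / ".claude" / "llm-skill-config.json"

def load_config(path=CONFIG_PATH):
    """Load the persistent config; an empty dict stands in on first run."""
    if path.exists():
        return json.loads(path.read_text())
    return {}

def save_model_choice(model, path=CONFIG_PATH):
    """Remember the last used model for the next invocation."""
    cfg = load_config(path)
    cfg["last_model"] = model
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps(cfg, indent=2))
```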
## Implementation Details

### Core Files
- `llm_skill.py` - Main skill orchestration
- `providers.py` - Provider detection & config
- `models.py` - Model definitions & aliases
- `executor.py` - Execution logic (interactive/non-interactive)
- `input_handler.py` - Input type detection
### Key Functions

#### `detect_providers()`

- Scans environment for provider API keys
- Returns dict of available providers

#### `get_model_selector(input_text, provider=None)`

- Returns selected model, showing menu if needed
- Respects `last_model` config preference

#### `load_input(input_source)`

- Handles stdin, file paths, or inline text
- Returns content string

#### `execute_llm(content, model, interactive=False)`

- Calls the `llm` CLI with appropriate parameters
- Manages stdin/stdout for interactive mode
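`execute_llm` could be sketched as a thin `subprocess` wrapper. The `--model` and `--interactive` flags follow this document's own examples; the helper split and error handling are assumptions, not the actual `executor.py`:

```python
import subprocess

def build_command(model, interactive=False):
    """Assemble the llm invocation (flags per this document's examples)."""
    cmd = ["llm", "--model", model]
    if interactive:
        cmd.append("--interactive")
    return cmd

def execute_llm(content, model, interactive=False):
    cmd = build_command(model, interactive)
    if interactive:
        # Inherit stdin/stdout so the user converses with the CLI directly.
        return subprocess.run(cmd).returncode
    result = subprocess.run(cmd, input=content, capture_output=True, text=True)
    if result.returncode != 0:
        raise RuntimeError(result.stderr.strip() or "llm CLI failed")
    return result.stdout
```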
## Usage in Claude Code

When the user invokes this skill, Claude should:

- Parse the input for a model specification (e.g., `--model gpt-4o`)
- Call the skill with the content and an optional model parameter
- Wait for provider/model selection if needed
- Execute and return the results
- For interactive mode, maintain the conversation loop
Error Handling
错误处理
- If no providers available: Suggest installing API keys
- If model not found: Show available models for chosen provider
- If llm CLI not installed: Suggest installation via
pip install llm - If file not readable: Fall back to treating as inline text
- 若无可用服务商:建议安装API密钥
- 若模型未找到:显示所选服务商的可用模型
- 若未安装llm CLI:建议通过进行安装
pip install llm - 若文件无法读取:退回到将其视为内嵌文本
## Configuration

Users can pre-configure preferences:

```json
{
  "last_model": "claude-sonnet-4.5",
  "default_provider": "anthropic",
  "interactive_mode": false,
  "available_providers": ["openai", "anthropic"]
}
```

## Slash Command Integration
Supported command: `/llm`

```
/llm process this text
/llm --interactive
/llm --model gpt-4o analyze this
```
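Parsing a `/llm` invocation into its parts might look like this sketch (a hypothetical parser for the flags shown above; the actual integration may delegate parsing elsewhere):

```python
import shlex

def parse_slash_command(line):
    """Split a /llm line into (model, interactive, prompt). Illustrative only."""
    tokens = shlex.split(line)
    if not tokens or tokens[0] != "/llm":
        raise ValueError("not a /llm command")
    model, interactive, prompt = None, False, []
    it = iter(tokens[1:])
    for tok in it:
        if tok == "--model":
            model = next(it, None)       # flag value consumes the next token
        elif tok in ("--interactive", "-i"):
            interactive = True
        else:
            prompt.append(tok)           # everything else is the prompt text
    return model, interactive, " ".join(prompt)
```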