claudish-usage
Claudish Usage Skill
Version: 2.0.0
Purpose: Guide AI agents on how to use Claudish CLI to run Claude Code with any AI model
Status: Production Ready
⚠️ CRITICAL RULES - READ FIRST
🚫 NEVER Run Claudish from Main Context
Claudish MUST ONLY be run through sub-agents unless the user explicitly requests direct execution.
Why:
- Running Claudish directly pollutes main context with 10K+ tokens (full conversation + reasoning)
- Destroys context window efficiency
- Makes main conversation unmanageable
When you can run Claudish directly:
- ✅ User explicitly says "run claudish directly" or "don't use a sub-agent"
- ✅ User is debugging and wants to see full output
- ✅ User specifically requests main context execution
When you MUST use sub-agent:
- ✅ User says "use Grok to implement X" (delegate to sub-agent)
- ✅ User says "ask GPT-5.3 to review X" (delegate to sub-agent)
- ✅ User mentions any model name without "directly" (delegate to sub-agent)
- ✅ Any production task (always delegate)
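The delegation rules above can be sketched as a simple routing check. This is a minimal illustration, not part of Claudish itself; the keyword lists are assumptions standing in for "mentions a model name":

```shell
# Decide where a user request should run, per the rules above.
# Explicit "directly" overrides; any model/Claudish mention delegates.
route_request() {
  case "$1" in
    *directly*|*"in main context"*) echo "main-context" ;;          # explicit user override
    *laudish*|*OpenRouter*|*Grok*|*GPT*|*Gemini*) echo "sub-agent" ;; # model mention -> delegate
    *) echo "skip-skill" ;;                                          # skill does not apply
  esac
}

route_request "use Grok to implement auth"   # → sub-agent
route_request "run claudish directly"        # → main-context
route_request "fix this typo"                # → skip-skill
```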
📋 Workflow Decision Tree
User Request
↓
Does it mention Claudish/OpenRouter/model name? → NO → Don't use this skill
↓ YES
↓
Does user say "directly" or "in main context"? → YES → Run in main context (rare)
↓ NO
↓
Find appropriate agent or create one → Delegate to sub-agent (default)
🤖 Agent Selection Guide
Step 1: Find the Right Agent
When user requests Claudish task, follow this process:
- Check for existing agents that support proxy mode or external model delegation
- If no suitable agent exists:
  - Suggest creating a new proxy-mode agent for this task type
  - Offer to proceed with the generic general-purpose agent if the user declines
- If user declines agent creation:
  - Warn about context pollution
  - Ask if they want to proceed anyway
Step 2: Agent Type Selection Matrix
| Task Type | Recommended Agent | Fallback | Notes |
|---|---|---|---|
| Code implementation | Create coding agent with proxy mode | general-purpose | Best: custom agent for project-specific patterns |
| Code review | Use existing code review agent + proxy | general-purpose | Check if plugin has review agent first |
| Architecture planning | Use existing architect agent + proxy | general-purpose | Look for frontend-architect or api-architect |
| Testing | Use existing test agent + proxy | general-purpose | Look for test-architect |
| Refactoring | Create refactoring agent with proxy | general-purpose | Complex refactors benefit from specialized agent |
| Documentation | general-purpose | - | Simple task, generic agent OK |
| Analysis | Use existing analysis agent + proxy | general-purpose | Check for codebase-detective |
| Other | general-purpose | - | Default for unknown task types |
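The matrix above can be condensed into a lookup. This is a sketch only; the task-type keys are hypothetical labels, and the agent names come from the plugin lists later in this guide:

```shell
# Pick a recommended agent for a task type, per the selection matrix above.
# Unknown task types fall back to general-purpose.
select_agent() {
  case "$1" in
    review)       echo "senior-code-reviewer" ;;
    architecture) echo "api-architect" ;;       # or frontend-architect for UI work
    testing)      echo "test-architect" ;;
    analysis)     echo "codebase-detective" ;;
    *)            echo "general-purpose" ;;
  esac
}

select_agent review   # → senior-code-reviewer
select_agent docs     # → general-purpose
```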
Step 3: Agent Creation Offer (When No Agent Exists)
Template response:
I notice you want to use [Model Name] for [task type].
RECOMMENDATION: Create a specialized [task type] agent with proxy mode support.
This would:
✅ Provide better task-specific guidance
✅ Reusable for future [task type] tasks
✅ Optimized prompting for [Model Name]
Options:
1. Create specialized agent (recommended) - takes 2-3 minutes
2. Use generic general-purpose agent - works but less optimized
3. Run directly in main context (NOT recommended - pollutes context)
Which would you prefer?
Step 4: Common Agents by Plugin
Frontend Plugin:
- typescript-frontend-dev - Use for UI implementation with external models
- frontend-architect - Use for architecture planning with external models
- senior-code-reviewer - Use for code review (can delegate to external models)
- test-architect - Use for test planning/implementation
Bun Backend Plugin:
- backend-developer - Use for API implementation with external models
- api-architect - Use for API design with external models
Code Analysis Plugin:
- codebase-detective - Use for investigation tasks with external models
No Plugin:
- general-purpose - Default fallback for any task
Step 5: Example Agent Selection
Example 1: User says "use Grok to implement authentication"
Task: Code implementation (authentication)
Plugin: Bun Backend (if backend) or Frontend (if UI)
Decision:
1. Check for backend-developer or typescript-frontend-dev agent
2. Found backend-developer? → Use it with Grok proxy
3. Not found? → Offer to create custom auth agent
4. User declines? → Use general-purpose with file-based pattern
Example 2: User says "ask GPT-5.3 to review my API design"
Task: Code review (API design)
Plugin: Bun Backend
Decision:
1. Check for api-architect or senior-code-reviewer agent
2. Found? → Use it with GPT-5.3 proxy
3. Not found? → Use general-purpose with review instructions
4. Never run directly in main context
Example 3: User says "use Gemini to refactor this component"
Task: Refactoring (component)
Plugin: Frontend
Decision:
1. No specialized refactoring agent exists
2. Offer to create component-refactoring agent
3. User declines? → Use typescript-frontend-dev with proxy
4. Still no agent? → Use general-purpose with file-based pattern
Overview
Claudish is a CLI tool that allows running Claude Code with any AI model via prefix-based routing. Supports OpenRouter (100+ models), direct Google Gemini API, direct OpenAI API, and local models (Ollama, LM Studio, vLLM, MLX).
Key Principle: ALWAYS use Claudish through sub-agents with file-based instructions to avoid context window pollution.
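That principle can be sketched in a few lines of shell: write the instruction to a timestamped temp file, then feed it to claudish over stdin. The claudish invocation itself is shown commented out, since it needs an installed CLI and an API key:

```shell
# Build a timestamped instruction file for the file-based pattern.
ts=$(date +%s)
task_file="/tmp/claudish-task-$ts.md"
result_file="/tmp/claudish-result-$ts.md"

# The instruction file tells the model where to write its output
cat > "$task_file" <<EOF
# Task
Implement user authentication with JWT tokens

# Deliverables
Write implementation to: $result_file
EOF

# Then (not executed here):
# claudish --model x-ai/grok-code-fast-1 --stdin < "$task_file" > "$result_file"
grep -c "Deliverables" "$task_file"   # → 1
```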
What is Claudish?
Claudish (Claude-ish) is a proxy tool that:
- ✅ Runs Claude Code with any AI model via prefix-based routing
- ✅ Supports OpenRouter, Gemini, OpenAI, and local models
- ✅ Uses local API-compatible proxy server
- ✅ Supports 100% of Claude Code features
- ✅ Provides cost tracking and model selection
- ✅ Enables multi-model workflows
Model Routing
| Prefix | Backend | Example |
|---|---|---|
| (none) | OpenRouter | x-ai/grok-code-fast-1 |
| g/ | Google Gemini | |
| oai/ | OpenAI | |
| | Ollama | |
| | LM Studio | |
| | Custom | |
Use Cases:
- Run tasks with different AI models (Grok for speed, GPT-5.3 for reasoning, Gemini for large context)
- Use direct APIs for lower latency (Gemini, OpenAI)
- Use local models for free, private inference (Ollama, LM Studio)
- Compare model performance on same task
- Reduce costs with cheaper models for simple tasks
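The prefix routing above can be sketched as a lookup. Only the `g/` and `oai/` prefixes are confirmed by this guide's environment-variable notes; unprefixed model IDs go to OpenRouter, and the local-model prefixes are left out here rather than guessed:

```shell
# Resolve which backend a model ID routes to, by prefix.
backend_for() {
  case "$1" in
    g/*)   echo "gemini" ;;      # direct Google Gemini API
    oai/*) echo "openai" ;;      # direct OpenAI API
    *)     echo "openrouter" ;;  # default: OpenRouter
  esac
}

backend_for "g/gemini-2.5-flash"      # → gemini
backend_for "x-ai/grok-code-fast-1"   # → openrouter
```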
Requirements
System Requirements
- Claudish CLI - Install with: npm install -g claudish or bun install -g claudish
- Claude Code - Must be installed
- At least one API key (see below)
Environment Variables
```bash
# API Keys (at least one required)
export OPENROUTER_API_KEY='sk-or-v1-...' # OpenRouter (100+ models)
export GEMINI_API_KEY='...'              # Direct Gemini API (g/ prefix)
export OPENAI_API_KEY='sk-...'           # Direct OpenAI API (oai/ prefix)

# Placeholder (required to prevent Claude Code dialog)
export ANTHROPIC_API_KEY='sk-ant-api03-placeholder'

# Custom endpoints (optional)
export GEMINI_BASE_URL='https://...'   # Custom Gemini endpoint
export OPENAI_BASE_URL='https://...'   # Custom OpenAI/Azure endpoint
export OLLAMA_BASE_URL='http://...'    # Custom Ollama server
export LMSTUDIO_BASE_URL='http://...'  # Custom LM Studio server

# Default model (optional)
export CLAUDISH_MODEL='openai/gpt-5.3' # Default model
```
**Get API Keys:**
- OpenRouter: https://openrouter.ai/keys (free tier available)
- Gemini: https://aistudio.google.com/apikey
- OpenAI: https://platform.openai.com/api-keys
- Local models: No API key needed
Quick Start Guide
Step 1: Install Claudish
```bash
# With npm (works everywhere)
npm install -g claudish

# With Bun (faster)
bun install -g claudish

# Verify installation
claudish --version
```
Step 2: Get Available Models
```bash
# List ALL OpenRouter models grouped by provider
claudish --models

# Fuzzy search models by name, ID, or description
claudish --models gemini
claudish --models "grok code"

# Show top recommended programming models (curated list)
claudish --top-models

# JSON output for parsing
claudish --models --json
claudish --top-models --json

# Force update from OpenRouter API
claudish --models --force-update
```
Step 3: Run Claudish
Interactive Mode (default):
```bash
# Shows model selector, persistent session
claudish
```
**Single-shot Mode:**
```bash
# One task and exit (requires --model)
claudish --model x-ai/grok-code-fast-1 "implement user authentication"
```
**With stdin for large prompts:**
```bash
# Read prompt from stdin (useful for git diffs, code review)
git diff | claudish --stdin --model openai/gpt-5-codex "Review these changes"
```
Recommended Models
Top Models for Development (v3.1.1):
| Model | Provider | Best For |
|---|---|---|
| | OpenAI | Default - Most advanced reasoning |
| | MiniMax | Budget-friendly, fast |
| | Z.AI | Balanced performance |
| | | 1M context window |
| | MoonShot | Extended thinking |
| | DeepSeek | Code specialist |
| | Alibaba | Vision + reasoning |
Direct API Options (lower latency):
| Model | Backend | Best For |
|---|---|---|
| | Gemini | Fast tasks, large context |
| | OpenAI | General purpose |
| | Local | Free, private |
Get Latest Models:
```bash
# List all models (auto-updates every 2 days)
claudish --models

# Search for specific models
claudish --models grok
claudish --models "gemini flash"

# Show curated top models
claudish --top-models

# Force immediate update
claudish --models --force-update
```
NEW: Direct Agent Selection (v2.1.0)
Use the `--agent` flag to invoke agents directly without the file-based pattern:
```bash
# Use specific agent (prepends @agent- automatically)
claudish --model x-ai/grok-code-fast-1 --agent frontend:developer "implement React component"
# Claude receives: "Use the @agent-frontend:developer agent to: implement React component"

# List available agents in project
claudish --list-agents
```
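Per the note above, `--agent` simply prepends `@agent-` to the given name when building the prompt. The wrapping can be reproduced as:

```shell
# Reconstruct the prompt claudish sends for --agent (per the note above).
build_agent_prompt() {
  printf 'Use the @agent-%s agent to: %s\n' "$1" "$2"
}

build_agent_prompt frontend:developer "implement React component"
# → Use the @agent-frontend:developer agent to: implement React component
```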
**When to use `--agent` vs file-based pattern:**
**Use `--agent` when:**
- Single, simple task that needs agent specialization
- Direct conversation with one agent
- Testing agent behavior
- CLI convenience
**Use file-based pattern when:**
- Complex multi-step workflows
- Multiple agents needed
- Large codebases
- Production tasks requiring review
- Need isolation from main conversation
**Example comparisons:**
**Simple task (use `--agent`):**
```bash
claudish --model x-ai/grok-code-fast-1 --agent frontend:developer "create button component"
```
**Complex task (use file-based):**
```markdown
<!-- multi-phase-workflow.md -->
Phase 1: Use api-architect to design API
Phase 2: Use backend-developer to implement
Phase 3: Use test-architect to add tests
Phase 4: Use senior-code-reviewer to review
```
then:
```bash
claudish --model x-ai/grok-code-fast-1 --stdin < multi-phase-workflow.md
```
Best Practice: File-Based Sub-Agent Pattern
⚠️ CRITICAL: Don't Run Claudish Directly from Main Conversation
Why: Running Claudish directly in main conversation pollutes context window with:
- Entire conversation transcript
- All tool outputs
- Model reasoning (can be 10K+ tokens)
Solution: Use file-based sub-agent pattern
File-Based Pattern (Recommended)
Step 1: Create instruction file
```markdown
<!-- /tmp/claudish-task-{timestamp}.md -->
# Task
Implement user authentication with JWT tokens

# Requirements
- Use bcrypt for password hashing
- Generate JWT with 24h expiration
- Add middleware for protected routes

# Deliverables
Write implementation to: /tmp/claudish-result-{timestamp}.md

# Output Format
## Implementation
[code here]

## Files Created/Modified
- path/to/file1.ts
- path/to/file2.ts

## Tests
[test code if applicable]

## Notes
[any important notes]
```
Step 2: Run Claudish with file instruction
```bash
# Read instruction from file, write result to file
claudish --model x-ai/grok-code-fast-1 --stdin < /tmp/claudish-task-{timestamp}.md > /tmp/claudish-result-{timestamp}.md
```
**Step 3: Read result file and provide summary**
```typescript
// In your agent/command:
const result = await Read({ file_path: "/tmp/claudish-result-{timestamp}.md" });
// Parse result
const filesModified = extractFilesModified(result);
const summary = extractSummary(result);
// Provide short feedback to main agent
return `✅ Task completed. Modified ${filesModified.length} files. ${summary}`;
```
Complete Example: Using Claudish in Sub-Agent
```typescript
/**
 * Example: Run code review with Grok via Claudish sub-agent
 */
async function runCodeReviewWithGrok(files: string[]) {
  const timestamp = Date.now();
  const instructionFile = `/tmp/claudish-review-instruction-${timestamp}.md`;
  const resultFile = `/tmp/claudish-review-result-${timestamp}.md`;

  // Step 1: Create instruction file
  const instruction = `# Code Review Task

## Files to Review
${files.map(f => `- ${f}`).join('\n')}

## Review Criteria
- Code quality and maintainability
- Potential bugs or issues
- Performance considerations
- Security vulnerabilities

## Output Format
Write your review to: ${resultFile}
Use this format:

## Summary
[Brief overview]

## Issues Found
### Critical
- [issue 1]
### Medium
- [issue 2]
### Low
- [issue 3]

## Recommendations
- [recommendation 1]

## Files Reviewed
- [file 1]: [status]
`;

  await Write({ file_path: instructionFile, content: instruction });

  // Step 2: Run Claudish with stdin
  await Bash(`claudish --model x-ai/grok-code-fast-1 --stdin < ${instructionFile}`);

  // Step 3: Read result
  const result = await Read({ file_path: resultFile });

  // Step 4: Parse and return summary
  const summary = extractSummary(result);
  const issueCount = extractIssueCount(result);

  // Step 5: Clean up temp files
  await Bash(`rm ${instructionFile} ${resultFile}`);

  // Step 6: Return concise feedback
  return {
    success: true,
    summary,
    issueCount,
    fullReview: result // Available if needed, but not in main context
  };
}

function extractSummary(review: string): string {
  const match = review.match(/## Summary\s*\n(.*?)(?=\n##|$)/s);
  return match ? match[1].trim() : "Review completed";
}

function extractIssueCount(review: string): { critical: number; medium: number; low: number } {
  const critical = (review.match(/### Critical\s*\n(.*?)(?=\n###|$)/s)?.[1].match(/^-/gm) || []).length;
  const medium = (review.match(/### Medium\s*\n(.*?)(?=\n###|$)/s)?.[1].match(/^-/gm) || []).length;
  const low = (review.match(/### Low\s*\n(.*?)(?=\n###|$)/s)?.[1].match(/^-/gm) || []).length;
  return { critical, medium, low };
}
```
Sub-Agent Delegation Pattern
When running Claudish from an agent, use the Task tool to create a sub-agent:
Pattern 1: Simple Task Delegation
```typescript
/**
 * Example: Delegate implementation to Grok via Claudish
 */
async function implementFeatureWithGrok(featureDescription: string) {
  // Use Task tool to create sub-agent
  const result = await Task({
    subagent_type: "general-purpose",
    description: "Implement feature with Grok",
    prompt: `
Use Claudish CLI to implement this feature with Grok model:
${featureDescription}
INSTRUCTIONS:
1. Search for available models:
   claudish --models grok
2. Run implementation with Grok:
   claudish --model x-ai/grok-code-fast-1 "${featureDescription}"
3. Return ONLY:
   - List of files created/modified
   - Brief summary (2-3 sentences)
   - Any errors encountered
DO NOT return the full conversation transcript or implementation details.
Keep your response under 500 tokens.
`
  });
  return result;
}
```
Pattern 2: File-Based Task Delegation
```typescript
/**
 * Example: Use file-based instruction pattern in sub-agent
 */
async function analyzeCodeWithGemini(codebasePath: string) {
  const timestamp = Date.now();
  const instructionFile = `/tmp/claudish-analyze-${timestamp}.md`;
  const resultFile = `/tmp/claudish-analyze-result-${timestamp}.md`;

  // Create instruction file
  const instruction = `# Codebase Analysis Task

## Codebase Path
${codebasePath}

## Analysis Required
- Architecture overview
- Key patterns used
- Potential improvements
- Security considerations

## Output
Write analysis to: ${resultFile}
Keep analysis concise (under 1000 words).
`;
  await Write({ file_path: instructionFile, content: instruction });

  // Delegate to sub-agent
  const result = await Task({
    subagent_type: "general-purpose",
    description: "Analyze codebase with Gemini",
    prompt: `
Use Claudish to analyze codebase with Gemini model.
Instruction file: ${instructionFile}
Result file: ${resultFile}
STEPS:
- Read instruction file: ${instructionFile}
- Run: claudish --model google/gemini-2.5-flash --stdin < ${instructionFile}
- Wait for completion
- Read result file: ${resultFile}
- Return ONLY a 2-3 sentence summary
DO NOT include the full analysis in your response.
The full analysis is in ${resultFile} if needed.
`
  });

  // Read full result if needed
  const fullAnalysis = await Read({ file_path: resultFile });

  // Clean up
  await Bash(`rm ${instructionFile} ${resultFile}`);

  return {
    summary: result,
    fullAnalysis
  };
}
```
Pattern 3: Multi-Model Comparison
```typescript
/**
 * Example: Run same task with multiple models and compare
 */
async function compareModels(task: string, models: string[]) {
  const results = [];
  for (const model of models) {
    const timestamp = Date.now();
    const resultFile = `/tmp/claudish-${model.replace('/', '-')}-${timestamp}.md`;
    // Run task with each model
    await Task({
      subagent_type: "general-purpose",
      description: `Run task with ${model}`,
      prompt: `
Use Claudish to run this task with ${model}:
${task}
STEPS:
1. Run: claudish --model ${model} --json "${task}"
2. Parse JSON output
3. Return ONLY:
   - Cost (from total_cost_usd)
   - Duration (from duration_ms)
   - Token usage (from usage.input_tokens and usage.output_tokens)
   - Brief quality assessment (1-2 sentences)
DO NOT return full output.
`
    });
    results.push({
      model,
      resultFile
    });
  }
  return results;
}
```
Common Workflows
Workflow 1: Quick Code Generation with Grok
```bash
# Fast, agentic coding with visible reasoning
claudish --model x-ai/grok-code-fast-1 "add error handling to api routes"
```
Workflow 2: Complex Refactoring with GPT-5.3
```bash
# Advanced reasoning for complex tasks
claudish --model openai/gpt-5 "refactor authentication system to use OAuth2"
```
Workflow 3: UI Implementation with Qwen (Vision)
```bash
# Vision-language model for UI tasks
claudish --model qwen/qwen3-vl-235b-a22b-instruct "implement dashboard from figma design"
```
Workflow 4: Code Review with Gemini
```bash
# State-of-the-art reasoning for thorough review
git diff | claudish --stdin --model google/gemini-2.5-flash "Review these changes for bugs and improvements"
```
Workflow 5: Multi-Model Consensus
```bash
# Run same task with multiple models
for model in "x-ai/grok-code-fast-1" "google/gemini-2.5-flash" "openai/gpt-5"; do
  echo "=== Testing with $model ==="
  claudish --model "$model" "find security vulnerabilities in auth.ts"
done
```
Claudish CLI Flags Reference
Essential Flags
| Flag | Description | Example |
|---|---|---|
| --model | OpenRouter model to use | claudish --model x-ai/grok-code-fast-1 "task" |
| --stdin | Read prompt from stdin | git diff \| claudish --stdin --model openai/gpt-5-codex "Review these changes" |
| --models | List all models or search | claudish --models grok |
| --top-models | Show top recommended models | claudish --top-models |
| --json | JSON output (implies --quiet) | claudish --json "task" |
| | Print AI agent usage guide | |
Advanced Flags
| Flag | Description | Default |
|---|---|---|
| | Interactive mode | Auto (no prompt = interactive) |
| --quiet | Suppress log messages | Quiet in single-shot |
| --verbose | Show log messages | Verbose in interactive |
| | Enable debug logging to file | Disabled |
| --port | Proxy server port | Random (3000-9000) |
| | Require permission prompts | Auto-approve enabled |
| | Disable sandbox | Disabled |
| | Proxy to real Anthropic API (debug) | Disabled |
| --force-update | Force refresh model cache | Auto (>2 days) |
Output Modes
- Quiet Mode (default in single-shot):
```bash
claudish --model grok "task"  # Clean output, no [claudish] logs
```
- Verbose Mode:
```bash
claudish --verbose "task"  # Shows all [claudish] logs for debugging
```
- JSON Mode:
```bash
claudish --json "task"  # Structured output: {result, cost, usage, duration}
```
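When jq is unavailable, the cost field can be pulled from the JSON output with sed. The payload below is a hypothetical sample built from the field names this guide mentions (total_cost_usd, duration_ms, usage), not captured claudish output:

```shell
# Hypothetical single-shot JSON result (field names as documented in this guide)
json='{"result":"done","total_cost_usd":0.068,"duration_ms":4200,"usage":{"input_tokens":1200,"output_tokens":800}}'

# Extract the numeric cost without jq
cost=$(printf '%s' "$json" | sed -n 's/.*"total_cost_usd":\([0-9.]*\).*/\1/p')
echo "$cost"   # → 0.068
```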
Cost Tracking
Claudish automatically tracks costs in the status line:
```
directory • model-id • $cost • ctx%
```
Example:
```
my-project • x-ai/grok-code-fast-1 • $0.12 • 67%
```
Shows:
- 💰 Cost: $0.12 USD spent in current session
- 📊 Context: 67% of context window remaining
JSON Output Cost:
```bash
claudish --json "task" | jq '.total_cost_usd'
# Output: 0.068
```
Error Handling
Error 1: OPENROUTER_API_KEY Not Set
错误1:未设置OPENROUTER_API_KEY
Error:
Error: OPENROUTER_API_KEY environment variable is requiredFix:
bash
export OPENROUTER_API_KEY='sk-or-v1-...'错误信息:
Error: OPENROUTER_API_KEY environment variable is required修复方法:
bash
export OPENROUTER_API_KEY='sk-or-v1-...'Or add to ~/.zshrc or ~/.bashrc
或添加到~/.zshrc或~/.bashrc中
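A script can pre-flight this check before invoking Claudish. The sketch below is generic Node.js, not a Claudish feature, and the `sk-or-` prefix is assumed from the key example above:

```typescript
// Classify the key without printing it (avoid leaking secrets into logs).
function checkOpenRouterKey(key: string | undefined): "missing" | "unexpected-format" | "ok" {
  if (!key) return "missing";
  if (!key.startsWith("sk-or-")) return "unexpected-format";
  return "ok";
}

console.log(checkOpenRouterKey(process.env.OPENROUTER_API_KEY));
```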
Error 2: Claudish Not Installed

Error:
```
command not found: claudish
```
Fix:
```bash
npm install -g claudish
```
Or: `bun install -g claudish`
Error 3: Model Not Found

Error:
```
Model 'invalid/model' not found
```
Fix:
```bash
# List available models
claudish --models

# Use a valid model ID
claudish --model x-ai/grok-code-fast-1 "task"
```
Error 4: OpenRouter API Error

Error:
```
OpenRouter API error: 401 Unauthorized
```
Fix:
- Check that the API key is correct
- Verify the API key at https://openrouter.ai/keys
- Check that the API key has credits (free tier or paid)
Error 5: Port Already in Use

Error:
```
Error: Port 3000 already in use
```
Fix:
```bash
# Let Claudish pick a random port (default)
claudish --model grok "task"

# Or specify a different port
claudish --port 8080 --model grok "task"
```
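The random default can also be reproduced in a wrapper. The sketch below mirrors the documented 3000-9000 range; it is an illustration of the behavior, not Claudish's actual implementation:

```typescript
// Pick a random port in the same range Claudish uses by default.
function randomPort(min = 3000, max = 9000): number {
  return min + Math.floor(Math.random() * (max - min + 1));
}

const port = randomPort();
console.log(port >= 3000 && port <= 9000); // → true
```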
Best Practices

1. ✅ Use File-Based Instructions

Why: Avoids context window pollution
How:
```bash
# Write instruction to file
echo "Implement feature X" > /tmp/task.md

# Run with stdin
claudish --stdin --model grok < /tmp/task.md > /tmp/result.md

# Read result
cat /tmp/result.md
```
2. ✅ Choose the Right Model for the Task

- Fast Coding: `x-ai/grok-code-fast-1`
- Complex Reasoning: `google/gemini-2.5-flash` or `openai/gpt-5`
- Vision/UI: `qwen/qwen3-vl-235b-a22b-instruct`
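The mapping above can be encoded in a small helper so agents do not re-derive it each time. `pickModel` and its task categories are illustrative names, not part of Claudish; only the model IDs come from the list above:

```typescript
type TaskKind = "fast-coding" | "complex-reasoning" | "vision-ui";

// Map a task category to one of the model IDs recommended above.
function pickModel(kind: TaskKind): string {
  switch (kind) {
    case "fast-coding":       return "x-ai/grok-code-fast-1";
    case "complex-reasoning": return "google/gemini-2.5-flash"; // or "openai/gpt-5"
    case "vision-ui":         return "qwen/qwen3-vl-235b-a22b-instruct";
  }
}

console.log(pickModel("fast-coding")); // → x-ai/grok-code-fast-1
```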
3. ✅ Use --json for Automation

Why: Structured output, easier parsing
How:
```bash
RESULT=$(claudish --json "task" | jq -r '.result')
COST=$(claudish --json "task" | jq -r '.total_cost_usd')
```

4. ✅ Delegate to Sub-Agents
Why: Keeps the main conversation context clean
How:
```typescript
await Task({
  subagent_type: "general-purpose",
  description: "Task with Claudish",
  prompt: "Use claudish --model grok '...' and return summary only"
});
```

5. ✅ Update Models Regularly
Why: Get the latest model recommendations
How:
```bash
# Auto-updates every 2 days
claudish --models

# Search for specific models
claudish --models deepseek

# Force update now
claudish --models --force-update
```

6. ✅ Use --stdin for Large Prompts
Why: Avoids command-line length limits
How:
```bash
git diff | claudish --stdin --model grok "Review changes"
```

Anti-Patterns (Avoid These)

❌❌❌ NEVER Run Claudish Directly in Main Conversation (CRITICAL)
This is the #1 mistake. Never do this unless the user explicitly requests it.

WRONG - Destroys the context window:
```typescript
// ❌ NEVER DO THIS - Pollutes main context with 10K+ tokens
await Bash("claudish --model grok 'implement feature'");

// ❌ NEVER DO THIS - Full conversation in main context
await Bash("claudish --model gemini 'review code'");

// ❌ NEVER DO THIS - Even with --json, the output is huge
const result = await Bash("claudish --json --model gpt-5 'refactor'");
```

RIGHT - Always use sub-agents:
```typescript
// ✅ ALWAYS DO THIS - Delegate to a sub-agent
const result = await Task({
  subagent_type: "general-purpose", // or specific agent
  description: "Implement feature with Grok",
  prompt: `
    Use Claudish to implement the feature with the Grok model.

    CRITICAL INSTRUCTIONS:
    1. Create instruction file: /tmp/claudish-task-${Date.now()}.md
    2. Write detailed task requirements to the file
    3. Run: claudish --model x-ai/grok-code-fast-1 --stdin < /tmp/claudish-task-*.md
    4. Read the result file and return ONLY a 2-3 sentence summary

    DO NOT return the full implementation or conversation.
    Keep the response under 300 tokens.
  `
});

// ✅ Even better - Use a specialized agent if available
const result2 = await Task({
  subagent_type: "backend-developer", // or frontend-dev, etc.
  description: "Implement with external model",
  prompt: `
    Use Claudish with the x-ai/grok-code-fast-1 model to implement authentication.
    Follow the file-based instruction pattern.
    Return summary only.
  `
});
```

When you CAN run directly (rare exceptions):
```typescript
// ✅ Only when the user explicitly requests it
// User: "Run claudish directly in main context for debugging"
if (userExplicitlyRequestedDirect) {
  await Bash("claudish --model grok 'task'");
}
```
❌ Don't Ignore Model Selection
Wrong:
```bash
# Always using the default model
claudish "any task"
```
Right:
```bash
# Choose an appropriate model
claudish --model x-ai/grok-code-fast-1 "quick fix"
claudish --model google/gemini-2.5-flash "complex analysis"
```
❌ Don't Parse Text Output

Wrong:
```bash
OUTPUT=$(claudish --model grok "task")
COST=$(echo "$OUTPUT" | grep cost | awk '{print $2}')
```
Right:
```bash
# Use JSON output
COST=$(claudish --json --model grok "task" | jq -r '.total_cost_usd')
```
❌ Don't Hardcode Model Lists

Wrong:
```typescript
const MODELS = ["x-ai/grok-code-fast-1", "openai/gpt-5"];
```
Right:
```typescript
// Query dynamically
const { stdout } = await Bash("claudish --models --json");
const models = JSON.parse(stdout).models.map(m => m.id);
```

✅ Do Accept Custom Models From Users
Problem: The user provides a custom model ID that is not in --top-models.

Wrong (rejecting custom models):
```typescript
const availableModels = ["x-ai/grok-code-fast-1", "openai/gpt-5"];
const userModel = "custom/provider/model-123";
if (!availableModels.includes(userModel)) {
  throw new Error("Model not in my shortlist"); // ❌ DON'T DO THIS
}
```

Right (accept any valid model ID):
```typescript
// Claudish accepts ANY valid OpenRouter model ID, even if not in --top-models
const userModel = "custom/provider/model-123";

// Validate it's a non-empty string in provider/model format
if (!userModel.includes("/")) {
  console.warn("Model should be in format: provider/model-name");
}

// Use it directly - Claudish will validate with OpenRouter
await Bash(`claudish --model ${userModel} "task"`);
```

Why: Users may have access to:
- Beta/experimental models
- Private/custom fine-tuned models
- Newly released models not yet in rankings
- Regional/enterprise models
- Cost-saving alternatives

Always accept user-provided model IDs unless they are clearly invalid (empty, wrong format).

✅ Do Handle User-Preferred Models
Scenario: The user says "use my custom model X" and expects it to be remembered.

Solution 1: Environment Variable (Recommended)
```typescript
// Set for the session
process.env.CLAUDISH_MODEL = userPreferredModel;

// Or set permanently in the user's shell profile
await Bash(`echo 'export CLAUDISH_MODEL="${userPreferredModel}"' >> ~/.zshrc`);
```

Solution 2: Session Cache
```typescript
// Store in a temporary session file
const sessionFile = "/tmp/claudish-user-preferences.json";
const prefs = {
  preferredModel: userPreferredModel,
  lastUsed: new Date().toISOString()
};
await Write({ file_path: sessionFile, content: JSON.stringify(prefs, null, 2) });

// Load in subsequent commands
const { stdout } = await Read({ file_path: sessionFile });
const savedPrefs = JSON.parse(stdout);
const model = savedPrefs.preferredModel || defaultModel;
```

Solution 3: Prompt Once, Remember for the Session
```typescript
// In a multi-step workflow, ask once
if (!process.env.CLAUDISH_MODEL) {
  const { stdout } = await Bash("claudish --models --json");
  const models = JSON.parse(stdout).models;
  const response = await AskUserQuestion({
    question: "Select model (or enter custom model ID):",
    options: models.map(m => ({ label: m.name, value: m.id })).concat([
      { label: "Enter custom model...", value: "custom" }
    ])
  });
  if (response === "custom") {
    const customModel = await AskUserQuestion({
      question: "Enter OpenRouter model ID (format: provider/model):"
    });
    process.env.CLAUDISH_MODEL = customModel;
  } else {
    process.env.CLAUDISH_MODEL = response;
  }
}

// Use the selected model for all subsequent calls
const model = process.env.CLAUDISH_MODEL;
await Bash(`claudish --model ${model} "task 1"`);
await Bash(`claudish --model ${model} "task 2"`);
```

Guidance for Agents:
- ✅ Accept any model ID the user provides (unless obviously malformed)
- ✅ Don't filter based on your "shortlist" - let Claudish handle validation
- ✅ Offer to set the CLAUDISH_MODEL environment variable for session persistence
- ✅ Explain that --top-models shows curated recommendations; --models shows all models
- ✅ Validate the format (should contain "/") but don't restrict to known models
- ❌ Never reject a user's custom model with "not in my shortlist"
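The guidance above reduces to a permissive validator: reject only empty or slash-less IDs and let OpenRouter reject the rest. A sketch, with a hypothetical helper name:

```typescript
// Permissive check: accept anything shaped like "provider/model".
function acceptModelId(id: string): { ok: boolean; reason?: string } {
  const trimmed = id.trim();
  if (!trimmed) return { ok: false, reason: "empty model ID" };
  if (!trimmed.includes("/")) return { ok: false, reason: "expected provider/model format" };
  return { ok: true }; // real validation happens in Claudish/OpenRouter
}

console.log(acceptModelId("custom/provider/model-123").ok); // → true
```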
❌ Don't Skip Error Handling

Wrong:
```typescript
const result = await Bash("claudish --model grok 'task'");
```
Right:
```typescript
try {
  const result = await Bash("claudish --model grok 'task'");
} catch (error) {
  console.error("Claudish failed:", error.message);
  // Fallback to embedded Claude or handle the error
}
```
Agent Integration Examples

Example 1: Code Review Agent
```typescript
/**
 * Agent: code-reviewer (using Claudish with multiple models)
 */
async function reviewCodeWithMultipleModels(files: string[]) {
  const models = [
    "x-ai/grok-code-fast-1",   // Fast initial scan
    "google/gemini-2.5-flash", // Deep analysis
    "openai/gpt-5"             // Final validation
  ];

  const reviews = [];
  for (const model of models) {
    const timestamp = Date.now();
    const instructionFile = `/tmp/review-${model.replace('/', '-')}-${timestamp}.md`;
    const resultFile = `/tmp/review-result-${model.replace('/', '-')}-${timestamp}.md`;

    // Create instruction
    const instruction = createReviewInstruction(files, resultFile);
    await Write({ file_path: instructionFile, content: instruction });

    // Run review with this model
    await Bash(`claudish --model ${model} --stdin < ${instructionFile}`);

    // Read result
    const result = await Read({ file_path: resultFile });

    // Extract summary
    reviews.push({
      model,
      summary: extractSummary(result),
      issueCount: extractIssueCount(result)
    });

    // Clean up
    await Bash(`rm ${instructionFile} ${resultFile}`);
  }
  return reviews;
}
```

Example 2: Feature Implementation Command
```typescript
/**
 * Command: /implement-with-model
 * Usage: /implement-with-model "feature description"
 */
async function implementWithModel(featureDescription: string) {
  // Step 1: Get available models
  const { stdout } = await Bash("claudish --models --json");
  const models = JSON.parse(stdout).models;

  // Step 2: Let the user select a model
  const selectedModel = await promptUserForModel(models);

  // Step 3: Create instruction file
  const timestamp = Date.now();
  const instructionFile = `/tmp/implement-${timestamp}.md`;
  const resultFile = `/tmp/implement-result-${timestamp}.md`;

  const instruction = `# Feature Implementation

## Description
${featureDescription}

## Requirements
- Write clean, maintainable code
- Add comprehensive tests
- Include error handling
- Follow project conventions

## Output
Write implementation details to: ${resultFile}
Include:
- Files created/modified
- Code snippets
- Test coverage
- Documentation updates
`;
  await Write({ file_path: instructionFile, content: instruction });

  // Step 4: Run implementation
  await Bash(`claudish --model ${selectedModel} --stdin < ${instructionFile}`);

  // Step 5: Read and present results
  const result = await Read({ file_path: resultFile });

  // Step 6: Clean up
  await Bash(`rm ${instructionFile} ${resultFile}`);

  return result;
}
```
Troubleshooting

Issue: Slow Performance

Symptoms: Claudish takes a long time to respond
Solutions:
- Use a faster model: `x-ai/grok-code-fast-1` or `minimax/minimax-m2`
- Reduce prompt size (use --stdin with concise instructions)
- Check the internet connection to OpenRouter

Issue: High Costs

Symptoms: Unexpected API costs
Solutions:
- Use budget-friendly models (check pricing with `--models` or `--top-models`)
- Enable cost tracking: `--cost-tracker`
- Use --json to monitor costs: `claudish --json "task" | jq '.total_cost_usd'`

Issue: Context Window Exceeded

Symptoms: Errors about token limits
Solutions:
- Use a model with a larger context window (Gemini: 1000K, Grok: 256K)
- Break the task into smaller subtasks
- Use the file-based pattern to avoid conversation history

Issue: Model Not Available

Symptoms: "Model not found" error
Solutions:
- Update the model cache: `claudish --models --force-update`
- Check the OpenRouter website for model availability
- Use an alternative model from the same category

Additional Resources
Documentation:
- Full README: `mcp/claudish/README.md` (in repository root)
- AI Agent Guide: print with `claudish --help-ai`
- Model Integration: `skills/claudish-integration/SKILL.md` (in repository root)

External Links:
- Claudish GitHub: https://github.com/MadAppGang/claude-code
- OpenRouter: https://openrouter.ai
- OpenRouter Models: https://openrouter.ai/models
- OpenRouter API Docs: https://openrouter.ai/docs

Version Information:
```bash
claudish --version
```

Get Help:
```bash
claudish --help     # CLI usage
claudish --help-ai  # AI agent usage guide
```

Maintained by: MadAppGang
Last Updated: January 5, 2026
Skill Version: 2.0.0