# OpenClaude Multi-LLM Skill

Skill by ara.so — Daily 2026 Skills collection.

OpenClaude is a fork of Claude Code that routes all LLM calls through an OpenAI-compatible shim (`openaiShim.ts`), letting you use any model that speaks the OpenAI Chat Completions API — GPT-4o, DeepSeek, Gemini via OpenRouter, Ollama, Groq, Mistral, Azure, and more — while keeping every Claude Code tool intact (Bash, FileRead, FileWrite, FileEdit, Glob, Grep, WebFetch, Agent, MCP, Tasks, LSP, NotebookEdit).
## Installation

### npm (recommended)

```bash
npm install -g @gitlawb/openclaude
```

CLI command installed: `openclaude`
### From source (requires Bun)

```bash
git clone https://node.gitlawb.com/z6MkqDnb7Siv3Cwj7pGJq4T5EsUisECqR8KpnDLwcaZq5TPr/openclaude.git
cd openclaude
bun install
bun run build

# optionally link globally
npm link
```
### Run without build

```bash
bun run dev   # run directly with Bun, no build step
```

## Activation — Required Environment Variables

You must set `CLAUDE_CODE_USE_OPENAI=1` to enable the shim. Without it, the tool falls back to the Anthropic SDK.
| Variable | Required | Purpose |
|---|---|---|
| `CLAUDE_CODE_USE_OPENAI` | Yes | Set to `1` |
| `OPENAI_API_KEY` | Yes* | API key (*omit for local Ollama/LM Studio) |
| `OPENAI_MODEL` | Yes | Model identifier |
| `OPENAI_BASE_URL` | No | Custom endpoint (default: `https://api.openai.com/v1`) |
| `CODEX_API_KEY` | Codex only | ChatGPT/Codex access token |
| `CODEX_AUTH_JSON_PATH` | Codex only | Path to Codex CLI auth file |

If both `OPENAI_MODEL` and `ANTHROPIC_MODEL` are set, `OPENAI_MODEL` takes precedence.
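The precedence rule above can be sketched as plain logic. This is an illustration, not OpenClaude's actual code; `resolveModel` is a hypothetical name.

```typescript
// Hypothetical sketch of the model-resolution precedence described above.
// resolveModel is an illustrative name, not part of OpenClaude's API.
function resolveModel(env: Record<string, string | undefined>): string | undefined {
  if (env.CLAUDE_CODE_USE_OPENAI === '1') {
    // With the shim active, OPENAI_MODEL wins over ANTHROPIC_MODEL.
    return env.OPENAI_MODEL ?? env.ANTHROPIC_MODEL;
  }
  return env.ANTHROPIC_MODEL;
}

console.log(resolveModel({
  CLAUDE_CODE_USE_OPENAI: '1',
  OPENAI_MODEL: 'gpt-4o',
  ANTHROPIC_MODEL: 'claude-sonnet',
})); // → gpt-4o
```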
## Provider Configuration Examples

### OpenAI
```bash
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_API_KEY=$OPENAI_API_KEY
export OPENAI_MODEL=gpt-4o
openclaude
```

### DeepSeek
```bash
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_API_KEY=$DEEPSEEK_API_KEY
export OPENAI_BASE_URL=https://api.deepseek.com/v1
export OPENAI_MODEL=deepseek-chat
openclaude
```

### Google Gemini (via OpenRouter)
```bash
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_API_KEY=$OPENROUTER_API_KEY
export OPENAI_BASE_URL=https://openrouter.ai/api/v1
export OPENAI_MODEL=google/gemini-2.0-flash
openclaude
```

### Ollama (local, no API key needed)
```bash
ollama pull llama3.3:70b
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_BASE_URL=http://localhost:11434/v1
export OPENAI_MODEL=llama3.3:70b
openclaude
```

### Groq
```bash
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_API_KEY=$GROQ_API_KEY
export OPENAI_BASE_URL=https://api.groq.com/openai/v1
export OPENAI_MODEL=llama-3.3-70b-versatile
openclaude
```

### Mistral
```bash
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_API_KEY=$MISTRAL_API_KEY
export OPENAI_BASE_URL=https://api.mistral.ai/v1
export OPENAI_MODEL=mistral-large-latest
openclaude
```

### Azure OpenAI
```bash
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_API_KEY=$AZURE_OPENAI_KEY
export OPENAI_BASE_URL=https://your-resource.openai.azure.com/openai/deployments/your-deployment/v1
export OPENAI_MODEL=gpt-4o
openclaude
```

### Codex (ChatGPT backend)
```bash
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_MODEL=codexplan   # or codexspark for faster loops

# reads ~/.codex/auth.json automatically if present
# or set: export CODEX_API_KEY=$CODEX_TOKEN
openclaude
```

### LM Studio (local)
```bash
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_BASE_URL=http://localhost:1234/v1
export OPENAI_MODEL=your-model-name
openclaude
```

### Together AI
```bash
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_API_KEY=$TOGETHER_API_KEY
export OPENAI_BASE_URL=https://api.together.xyz/v1
export OPENAI_MODEL=meta-llama/Llama-3.3-70B-Instruct-Turbo
openclaude
```

## Architecture — How the Shim Works

The shim file is `src/services/api/openaiShim.ts` (724 lines). It duck-types the Anthropic SDK interface so the rest of Claude Code is unaware it's talking to a different provider.

```
Claude Code Tool System
        │
        ▼
Anthropic SDK interface (duck-typed)
        │
        ▼
openaiShim.ts  ← format translation layer
        │
        ▼
OpenAI Chat Completions API
        │
        ▼
Any compatible model
```
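"Duck typing" here just means structural compatibility. A toy sketch of the idea, with simplified shapes and illustrative names (`MessageClient`, `createMessage`), not the real SDK types:

```typescript
// Toy illustration of duck typing: callers depend only on this structural
// interface, so the real Anthropic client and the shim are interchangeable.
// MessageClient and createMessage are illustrative names, not the SDK's.
interface MessageClient {
  createMessage(req: { model: string; prompt: string }): { role: string; text: string };
}

// A stand-in "shim" satisfies the shape without subclassing anything.
const shimLike: MessageClient = {
  createMessage: (req) => ({ role: 'assistant', text: `[${req.model}] ok` }),
};

console.log(shimLike.createMessage({ model: 'gpt-4o', prompt: 'hi' }).text); // → [gpt-4o] ok
```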
### What the shim translates

- Anthropic message content blocks → OpenAI `messages` array
- Anthropic `tool_use`/`tool_result` blocks → OpenAI `function_calls`/`tool` messages
- OpenAI SSE streaming chunks → Anthropic stream events
- Anthropic system prompt arrays → OpenAI `system` role messages
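The first two bullets can be illustrated with a simplified mapping. Shapes are reduced for clarity and the function name is hypothetical; this is not the actual `openaiShim.ts` code.

```typescript
// Illustrative sketch of the kind of Anthropic → OpenAI mapping the shim
// performs (simplified; not the real openaiShim.ts internals).
type TextBlock = { type: 'text'; text: string };
type ToolUseBlock = { type: 'tool_use'; id: string; name: string; input: unknown };
type AnthropicBlock = TextBlock | ToolUseBlock;

function toOpenAIAssistantMessage(blocks: AnthropicBlock[]) {
  // Anthropic tool_use blocks become OpenAI tool_calls entries.
  const toolCalls = blocks
    .filter((b): b is ToolUseBlock => b.type === 'tool_use')
    .map((b) => ({
      id: b.id,
      type: 'function' as const,
      function: { name: b.name, arguments: JSON.stringify(b.input) },
    }));
  // Text blocks collapse into the flat content string OpenAI expects.
  const text = blocks
    .filter((b): b is TextBlock => b.type === 'text')
    .map((b) => b.text)
    .join('');
  return { role: 'assistant' as const, content: text || null, tool_calls: toolCalls };
}

const msg = toOpenAIAssistantMessage([
  { type: 'text', text: 'Listing files.' },
  { type: 'tool_use', id: 'tu_1', name: 'Bash', input: { command: 'ls' } },
]);
console.log(msg.tool_calls[0].function.arguments); // → {"command":"ls"}
```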
### Files changed from upstream

```
src/services/api/openaiShim.ts   ← NEW: the shim (724 lines)
src/services/api/client.ts       ← routes to shim when CLAUDE_CODE_USE_OPENAI=1
src/utils/model/providers.ts     ← added 'openai' provider type
src/utils/model/configs.ts       ← added openai model mappings
src/utils/model/model.ts         ← respects OPENAI_MODEL for defaults
src/utils/auth.ts                ← recognizes OpenAI as valid 3rd-party provider
```

## Developer Workflow — Key Commands
```bash
# Run in dev mode (no build)
bun run dev

# Build distribution
bun run build

# Launch with persisted profile (.openclaude-profile.json)
bun run dev:profile

# Launch with OpenAI profile (requires OPENAI_API_KEY in shell)
bun run dev:openai

# Launch with Ollama profile (localhost:11434, llama3.1:8b default)
bun run dev:ollama

# Launch with Codex profile
bun run dev:codex

# Quick startup sanity check
bun run smoke

# Validate provider env + reachability
bun run doctor:runtime

# Machine-readable runtime diagnostics
bun run doctor:runtime:json

# Persist diagnostics report to reports/doctor-runtime.json
bun run doctor:report

# Full local hardening check (typecheck + smoke + runtime doctor)
bun run hardening:check

# Strict hardening (includes project-wide typecheck)
bun run hardening:strict
```

---

## Profile Bootstrap — One-Time Setup

Profiles save provider config to `.openclaude-profile.json` so you don't repeat env exports.
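For orientation, the profile file presumably stores the same fields the `profile:init` flags set. The shape below is an assumption inferred from those flags (`--provider`, `--model`, `--api-key`), not OpenClaude's documented schema:

```typescript
// Assumed shape of .openclaude-profile.json, inferred from the profile:init
// flags (--provider, --model, --api-key); may not match the actual schema.
interface OpenClaudeProfile {
  provider: 'openai' | 'ollama' | 'codex';
  model: string;
  apiKey?: string;  // omitted for local providers like Ollama
  baseUrl?: string; // e.g. http://localhost:11434/v1 for Ollama
}

const example: OpenClaudeProfile = {
  provider: 'ollama',
  model: 'llama3.1:8b',
  baseUrl: 'http://localhost:11434/v1',
};

console.log(JSON.stringify(example));
```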
undefinedAuto-detect provider (ollama if running, otherwise openai)
自动检测提供商(如果Ollama正在运行则用Ollama,否则用OpenAI)
bun run profile:init
bun run profile:init
Bootstrap for OpenAI
为OpenAI生成配置
bun run profile:init -- --provider openai --api-key $OPENAI_API_KEY
bun run profile:init -- --provider openai --api-key $OPENAI_API_KEY
Bootstrap for Ollama with custom model
为使用自定义模型的Ollama生成配置
bun run profile:init -- --provider ollama --model llama3.1:8b
bun run profile:init -- --provider ollama --model llama3.1:8b
Bootstrap for Codex
为Codex生成配置
bun run profile:init -- --provider codex --model codexspark
bun run profile:codex
After bootstrapping, run the app via the persisted profile:
```bash
bun run dev:profilebun run profile:init -- --provider codex --model codexspark
bun run profile:codex
引导完成后,即可通过持久化配置文件运行应用:
```bash
bun run dev:profileTypeScript Integration — Using the Shim Directly
TypeScript集成 — 直接使用垫片
If you want to use the shim in your own TypeScript code:

```typescript
// src/services/api/client.ts pattern — routing to the shim
import Anthropic from '@anthropic-ai/sdk';
import { openaiShim } from './openaiShim.js';

const useOpenAI = process.env.CLAUDE_CODE_USE_OPENAI === '1';

const client = useOpenAI
  ? openaiShim({
      apiKey: process.env.OPENAI_API_KEY,
      baseURL: process.env.OPENAI_BASE_URL ?? 'https://api.openai.com/v1',
      model: process.env.OPENAI_MODEL ?? 'gpt-4o',
    })
  : new Anthropic({ apiKey: process.env.ANTHROPIC_API_KEY });
```

```typescript
// Streaming usage pattern (mirrors Anthropic SDK interface)
const stream = await client.messages.stream({
  model: process.env.OPENAI_MODEL!,
  max_tokens: 32000,
  system: 'You are a helpful coding assistant.',
  messages: [
    { role: 'user', content: 'Refactor this function for readability.' }
  ],
  tools: myTools, // Anthropic-format tool definitions — shim translates them
});

for await (const event of stream) {
  // events arrive in Anthropic format regardless of underlying provider
  if (event.type === 'content_block_delta') {
    process.stdout.write(event.delta.text ?? '');
  }
}
```

## Model Quality Reference
| Model | Tool Calling | Code Quality | Speed |
|---|---|---|---|
| GPT-4o | Excellent | Excellent | Fast |
| DeepSeek-V3 | Great | Great | Fast |
| Gemini 2.0 Flash | Great | Good | Very Fast |
| Llama 3.3 70B | Good | Good | Medium |
| Mistral Large | Good | Good | Fast |
| GPT-4o-mini | Good | Good | Very Fast |
| Qwen 2.5 72B | Good | Good | Medium |
| Models < 7B | Limited | Limited | Very Fast |
For agentic multi-step tool use, prefer models with strong native function/tool calling (GPT-4o, DeepSeek-V3, Gemini 2.0 Flash).
## What Works vs. What Doesn't

### Fully supported
- All tools: Bash, FileRead, FileWrite, FileEdit, Glob, Grep, WebFetch, WebSearch, Agent, MCP, LSP, NotebookEdit, Tasks
- Streaming (real-time token output)
- Multi-step tool chains
- Vision/images (base64 and URL) for models that support them
- Slash commands: `/commit`, `/review`, `/compact`, `/diff`, `/doctor`
- Sub-agents (AgentTool spawns sub-agents using the same provider)
- Persistent memory
### Not supported (Anthropic-specific features)

- Extended thinking / reasoning mode
- Prompt caching (Anthropic cache headers skipped)
- Anthropic beta feature headers
- Token output defaults to 32K max (gracefully capped if the model's limit is lower)
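The last point (capping rather than erroring) can be sketched as follows. `capMaxTokens` is an illustrative helper, not the shim's actual function:

```typescript
// Hypothetical sketch of the graceful max_tokens capping described above.
// capMaxTokens is an illustrative name, not OpenClaude's actual code.
const DEFAULT_MAX_TOKENS = 32000;

function capMaxTokens(requested: number, modelLimit?: number): number {
  // If the model advertises a lower ceiling, clamp to it instead of failing.
  return modelLimit !== undefined ? Math.min(requested, modelLimit) : requested;
}

console.log(capMaxTokens(DEFAULT_MAX_TOKENS, 8192)); // → 8192
```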
## Troubleshooting

### `doctor:runtime` fails with placeholder key error

```
Error: OPENAI_API_KEY looks like a placeholder (SUA_CHAVE)
```

Set a real key:

```bash
export OPENAI_API_KEY=$YOUR_ACTUAL_KEY
```

### Ollama connection refused
Ensure Ollama is running before launching:

```bash
ollama serve &
ollama pull llama3.3:70b
bun run dev:ollama
```

### Tool calls not working / model ignores tools
Switch to a model with strong tool calling support (GPT-4o, DeepSeek-V3). Models under 7B parameters often fail at multi-step agentic tool use.
### Azure endpoint format

The `OPENAI_BASE_URL` for Azure must include the deployment path:

```
https://<resource>.openai.azure.com/openai/deployments/<deployment>/v1
```

### Codex auth not found
If `~/.codex/auth.json` doesn't exist, set the token directly:

```bash
export CODEX_API_KEY=$YOUR_CODEX_TOKEN
```

Or point to a custom auth file:

```bash
export CODEX_AUTH_JSON_PATH=/path/to/auth.json
```

### Run diagnostics for any issue
```bash
bun run doctor:runtime        # human-readable
bun run doctor:runtime:json   # machine-readable JSON
bun run doctor:report         # saves to reports/doctor-runtime.json
```