# OpenClaude Multi-LLM Skill

Skill by ara.so — Daily 2026 Skills collection.

OpenClaude is a fork of Claude Code that routes all LLM calls through an OpenAI-compatible shim (`openaiShim.ts`), letting you use any model that speaks the OpenAI Chat Completions API — GPT-4o, DeepSeek, Gemini via OpenRouter, Ollama, Groq, Mistral, Azure, and more — while keeping every Claude Code tool intact (Bash, FileRead, FileWrite, FileEdit, Glob, Grep, WebFetch, Agent, MCP, Tasks, LSP, NotebookEdit).

## Installation

### npm (recommended)

```bash
npm install -g @gitlawb/openclaude
```

CLI command installed: `openclaude`

### From source (requires Bun)

```bash
git clone https://node.gitlawb.com/z6MkqDnb7Siv3Cwj7pGJq4T5EsUisECqR8KpnDLwcaZq5TPr/openclaude.git
cd openclaude
bun install
bun run build
```

Optionally, link globally:

```bash
npm link
```

### Run without build

```bash
bun run dev       # run directly with Bun, no build step
```

## Activation — Required Environment Variables

You must set `CLAUDE_CODE_USE_OPENAI=1` to enable the shim. Without it, the tool falls back to the Anthropic SDK.

| Variable | Required | Purpose |
|---|---|---|
| `CLAUDE_CODE_USE_OPENAI` | Yes | Set to `1` to activate the OpenAI provider |
| `OPENAI_API_KEY` | Yes* | API key (*omit for local Ollama/LM Studio) |
| `OPENAI_MODEL` | Yes | Model identifier |
| `OPENAI_BASE_URL` | No | Custom endpoint (default: `https://api.openai.com/v1`) |
| `CODEX_API_KEY` | Codex only | ChatGPT/Codex access token |
| `CODEX_AUTH_JSON_PATH` | Codex only | Path to Codex CLI `auth.json` |

`OPENAI_MODEL` takes priority over `ANTHROPIC_MODEL` if both are set.
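A minimal sketch of that priority rule (an illustration only; the real logic lives in `src/utils/model/model.ts`, and the fallback behavior when `OPENAI_MODEL` is unset is an assumption):

```typescript
// Sketch of the model-priority rule: when the OpenAI shim is active,
// OPENAI_MODEL wins over ANTHROPIC_MODEL; otherwise ANTHROPIC_MODEL is used.
// The fallback here is an assumption, not the verified implementation.
function resolveModel(env: Record<string, string | undefined>): string | undefined {
  if (env.CLAUDE_CODE_USE_OPENAI === '1') {
    return env.OPENAI_MODEL ?? env.ANTHROPIC_MODEL;
  }
  return env.ANTHROPIC_MODEL;
}
```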

## Provider Configuration Examples

### OpenAI

```bash
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_API_KEY=$OPENAI_API_KEY
export OPENAI_MODEL=gpt-4o
openclaude
```

### DeepSeek

```bash
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_API_KEY=$DEEPSEEK_API_KEY
export OPENAI_BASE_URL=https://api.deepseek.com/v1
export OPENAI_MODEL=deepseek-chat
openclaude
```

### Google Gemini (via OpenRouter)

```bash
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_API_KEY=$OPENROUTER_API_KEY
export OPENAI_BASE_URL=https://openrouter.ai/api/v1
export OPENAI_MODEL=google/gemini-2.0-flash
openclaude
```

### Ollama (local, no API key needed)

```bash
ollama pull llama3.3:70b

export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_BASE_URL=http://localhost:11434/v1
export OPENAI_MODEL=llama3.3:70b
openclaude
```

### Groq

```bash
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_API_KEY=$GROQ_API_KEY
export OPENAI_BASE_URL=https://api.groq.com/openai/v1
export OPENAI_MODEL=llama-3.3-70b-versatile
openclaude
```

### Mistral

```bash
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_API_KEY=$MISTRAL_API_KEY
export OPENAI_BASE_URL=https://api.mistral.ai/v1
export OPENAI_MODEL=mistral-large-latest
openclaude
```

### Azure OpenAI

```bash
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_API_KEY=$AZURE_OPENAI_KEY
export OPENAI_BASE_URL=https://your-resource.openai.azure.com/openai/deployments/your-deployment/v1
export OPENAI_MODEL=gpt-4o
openclaude
```

### Codex (ChatGPT backend)

```bash
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_MODEL=codexplan   # or codexspark for faster loops

# reads ~/.codex/auth.json automatically if present
# or set: export CODEX_API_KEY=$CODEX_TOKEN
openclaude
```

### LM Studio (local)

```bash
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_BASE_URL=http://localhost:1234/v1
export OPENAI_MODEL=your-model-name
openclaude
```

### Together AI

```bash
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_API_KEY=$TOGETHER_API_KEY
export OPENAI_BASE_URL=https://api.together.xyz/v1
export OPENAI_MODEL=meta-llama/Llama-3.3-70B-Instruct-Turbo
openclaude
```

## Architecture — How the Shim Works

The shim file is `src/services/api/openaiShim.ts` (724 lines). It duck-types the Anthropic SDK interface so the rest of Claude Code is unaware it's talking to a different provider.

```
Claude Code Tool System
  ↓
Anthropic SDK interface (duck-typed)
  ↓
openaiShim.ts  ← format translation layer
  ↓
OpenAI Chat Completions API
  ↓
Any compatible model
```

### What the shim translates

  • Anthropic message content blocks → OpenAI `messages` array
  • Anthropic `tool_use` / `tool_result` blocks → OpenAI `function_calls` / `tool` messages
  • OpenAI SSE streaming chunks → Anthropic stream events
  • Anthropic system prompt arrays → OpenAI `system` role messages
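To make the first bullet concrete, here is a deliberately minimal sketch of flattening Anthropic-style text blocks into an OpenAI `messages` array. The types and the `toOpenAIMessages` function are hypothetical; the real `openaiShim.ts` also handles `tool_use`/`tool_result` blocks, images, and streaming:

```typescript
// Minimal sketch: flatten Anthropic-style text blocks into OpenAI chat messages.
// Hypothetical types; the real shim covers tool calls, images, streaming, etc.
type AnthropicMessage = {
  role: 'user' | 'assistant';
  content: string | { type: 'text'; text: string }[];
};
type OpenAIMessage = { role: 'system' | 'user' | 'assistant'; content: string };

function toOpenAIMessages(
  system: string | undefined,
  msgs: AnthropicMessage[],
): OpenAIMessage[] {
  const out: OpenAIMessage[] = [];
  // Anthropic passes the system prompt separately; OpenAI expects a system message.
  if (system) out.push({ role: 'system', content: system });
  for (const m of msgs) {
    const text = typeof m.content === 'string'
      ? m.content
      : m.content.filter(b => b.type === 'text').map(b => b.text).join('\n');
    out.push({ role: m.role, content: text });
  }
  return out;
}
```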

### Files changed from upstream

```
src/services/api/openaiShim.ts   ← NEW: the shim (724 lines)
src/services/api/client.ts       ← routes to shim when CLAUDE_CODE_USE_OPENAI=1
src/utils/model/providers.ts     ← added 'openai' provider type
src/utils/model/configs.ts       ← added openai model mappings
src/utils/model/model.ts         ← respects OPENAI_MODEL for defaults
src/utils/auth.ts                ← recognizes OpenAI as valid 3rd-party provider
```

## Developer Workflow — Key Commands

```bash
bun run dev                  # run in dev mode (no build)
bun run build                # build distribution
bun run dev:profile          # launch with persisted profile (.openclaude-profile.json)
bun run dev:openai           # launch with OpenAI profile (requires OPENAI_API_KEY in shell)
bun run dev:ollama           # launch with Ollama profile (localhost:11434, llama3.1:8b default)
bun run dev:codex            # launch with Codex profile
bun run smoke                # quick startup sanity check
bun run doctor:runtime       # validate provider env + reachability
bun run doctor:runtime:json  # machine-readable runtime diagnostics
bun run doctor:report        # persist diagnostics report to reports/doctor-runtime.json
bun run hardening:check      # full local hardening check (typecheck + smoke + runtime doctor)
bun run hardening:strict     # strict hardening (includes project-wide typecheck)
```

---

## Profile Bootstrap — One-Time Setup

Profiles save provider config to `.openclaude-profile.json` so you don't repeat env exports.

```bash
# Auto-detect provider (ollama if running, otherwise openai)
bun run profile:init

# Bootstrap for OpenAI
bun run profile:init -- --provider openai --api-key $OPENAI_API_KEY

# Bootstrap for Ollama with custom model
bun run profile:init -- --provider ollama --model llama3.1:8b

# Bootstrap for Codex
bun run profile:init -- --provider codex --model codexspark
bun run profile:codex
```

After bootstrapping, run the app via the persisted profile:

```bash
bun run dev:profile
```
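For orientation, a profile bootstrapped for Ollama might look roughly like the following. The field names here are guesses, not the documented schema; inspect your own generated `.openclaude-profile.json` for the real shape:

```json
{
  "provider": "ollama",
  "baseUrl": "http://localhost:11434/v1",
  "model": "llama3.1:8b"
}
```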

## TypeScript Integration — Using the Shim Directly

If you want to use the shim in your own TypeScript code:

```typescript
// src/services/api/client.ts pattern — routing to the shim
import { openaiShim } from './openaiShim.js';

const useOpenAI = process.env.CLAUDE_CODE_USE_OPENAI === '1';

const client = useOpenAI
  ? openaiShim({
      apiKey: process.env.OPENAI_API_KEY,
      baseURL: process.env.OPENAI_BASE_URL ?? 'https://api.openai.com/v1',
      model: process.env.OPENAI_MODEL ?? 'gpt-4o',
    })
  : new Anthropic({ apiKey: process.env.ANTHROPIC_API_KEY });
```

```typescript
// Streaming usage pattern (mirrors Anthropic SDK interface)
const stream = await client.messages.stream({
  model: process.env.OPENAI_MODEL!,
  max_tokens: 32000,
  system: 'You are a helpful coding assistant.',
  messages: [
    { role: 'user', content: 'Refactor this function for readability.' }
  ],
  tools: myTools, // Anthropic-format tool definitions — shim translates them
});

for await (const event of stream) {
  // events arrive in Anthropic format regardless of underlying provider
  if (event.type === 'content_block_delta') {
    process.stdout.write(event.delta.text ?? '');
  }
}
```

## Model Quality Reference

| Model | Tool Calling | Code Quality | Speed |
|---|---|---|---|
| GPT-4o | Excellent | Excellent | Fast |
| DeepSeek-V3 | Great | Great | Fast |
| Gemini 2.0 Flash | Great | Good | Very Fast |
| Llama 3.3 70B | Good | Good | Medium |
| Mistral Large | Good | Good | Fast |
| GPT-4o-mini | Good | Good | Very Fast |
| Qwen 2.5 72B | Good | Good | Medium |
| Models < 7B | Limited | Limited | Very Fast |

For agentic multi-step tool use, prefer models with strong native function/tool calling (GPT-4o, DeepSeek-V3, Gemini 2.0 Flash).

## What Works vs. What Doesn't

### Fully supported

  • All tools: Bash, FileRead, FileWrite, FileEdit, Glob, Grep, WebFetch, WebSearch, Agent, MCP, LSP, NotebookEdit, Tasks
  • Streaming (real-time token output)
  • Multi-step tool chains
  • Vision/images (base64 and URL) for models that support them
  • Slash commands: `/commit`, `/review`, `/compact`, `/diff`, `/doctor`
  • Sub-agents (AgentTool spawns sub-agents using the same provider)
  • Persistent memory

### Not supported (Anthropic-specific features)

  • Extended thinking / reasoning mode
  • Prompt caching (Anthropic cache headers skipped)
  • Anthropic beta feature headers
  • Token output defaults to 32K max (gracefully capped if the model's limit is lower)
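The capping rule in the last bullet can be pictured as a one-liner (a simplified sketch, not the shim's actual code; `clampMaxTokens` is a hypothetical name):

```typescript
// Sketch: default to 32K output tokens, but never exceed the model's own limit.
const DEFAULT_MAX_OUTPUT_TOKENS = 32000;

function clampMaxTokens(modelLimit: number, requested?: number): number {
  return Math.min(requested ?? DEFAULT_MAX_OUTPUT_TOKENS, modelLimit);
}
```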

## Troubleshooting

### `doctor:runtime` fails with placeholder key error

```
Error: OPENAI_API_KEY looks like a placeholder (SUA_CHAVE)
```

Set a real key:

```bash
export OPENAI_API_KEY=$YOUR_ACTUAL_KEY
```
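The doctor's actual placeholder heuristics are not shown here; a hypothetical version of such a check might look like this (`looksLikePlaceholder` and its token list are illustrative guesses, not the real implementation):

```typescript
// Hypothetical placeholder detector, NOT the doctor's real heuristics.
// "SUA_CHAVE" is Portuguese for "your key", a common tutorial placeholder.
function looksLikePlaceholder(key: string | undefined): boolean {
  if (!key) return true;
  const placeholders = /^(sk-)?(xxx+|your[_-]?key|sua[_-]?chave|changeme|placeholder)$/i;
  return placeholders.test(key.trim());
}
```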

### Ollama connection refused

Ensure Ollama is running before launching:

```bash
ollama serve &
ollama pull llama3.3:70b
bun run dev:ollama
```

Tool calls not working / model ignores tools

工具调用不工作/模型忽略工具

Switch to a model with strong tool calling support (GPT-4o, DeepSeek-V3). Models under 7B parameters often fail at multi-step agentic tool use.
切换到工具调用能力强的模型(GPT-4o、DeepSeek-V3),7B参数以下的模型通常无法胜任多步骤Agent工具调用任务。

### Azure endpoint format

The `OPENAI_BASE_URL` for Azure must include the deployment path:

```
https://<resource>.openai.azure.com/openai/deployments/<deployment>/v1
```

### Codex auth not found

If `~/.codex/auth.json` doesn't exist, set the token directly:

```bash
export CODEX_API_KEY=$YOUR_CODEX_TOKEN
```

Or point to a custom auth file:

```bash
export CODEX_AUTH_JSON_PATH=/path/to/auth.json
```

### Run diagnostics for any issue

```bash
bun run doctor:runtime       # human-readable
bun run doctor:runtime:json  # machine-readable JSON
bun run doctor:report        # saves to reports/doctor-runtime.json
```