Use Claude Code's full tool system with any OpenAI-compatible LLM: GPT-4o, DeepSeek, Gemini, Ollama, and 200+ other models, all configured through environment variables.
Install the skill:

```bash
npx skill4agent add aradotso/trending-skills openclaude-multi-llm
```

Skill by ara.so — Daily 2026 Skills collection.
## Installation

```bash
npm install -g @gitlawb/openclaude
# CLI command installed: openclaude
```

Or build from source:

```bash
git clone https://node.gitlawb.com/z6MkqDnb7Siv3Cwj7pGJq4T5EsUisECqR8KpnDLwcaZq5TPr/openclaude.git
cd openclaude
bun install
bun run build
# optionally link globally
npm link
```

```bash
bun run dev # run directly with Bun, no build step
```

## Configuration

Setting `CLAUDE_CODE_USE_OPENAI=1` activates the shim; everything else is configured through the variables below.

| Variable | Required | Purpose |
|---|---|---|
| `CLAUDE_CODE_USE_OPENAI` | Yes | Set to `1` to enable the OpenAI shim |
| `OPENAI_API_KEY` | Yes* | API key (*omit for local Ollama/LM Studio) |
| `OPENAI_MODEL` | Yes | Model identifier |
| `OPENAI_BASE_URL` | No | Custom endpoint (default: `https://api.openai.com/v1`) |
| `CODEX_API_KEY` | Codex only | ChatGPT/Codex access token |
| `CODEX_AUTH_JSON_PATH` | Codex only | Path to Codex CLI `auth.json` |
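To make the table concrete, here is a minimal TypeScript sketch of how such env-driven configuration could be resolved. The `resolveConfig` helper and its error message are illustrative assumptions, not openclaude's actual code; the variable names and the `https://api.openai.com/v1` default match the table above.

```typescript
// Hypothetical sketch of env-driven provider config resolution,
// mirroring the variables in the table above (not the actual openclaude code).
interface ShimConfig {
  apiKey?: string;
  baseURL: string;
  model: string;
}

function resolveConfig(env: Record<string, string | undefined>): ShimConfig | null {
  // The shim only activates when CLAUDE_CODE_USE_OPENAI=1
  if (env.CLAUDE_CODE_USE_OPENAI !== '1') return null;
  const baseURL = env.OPENAI_BASE_URL ?? 'https://api.openai.com/v1';
  // Local servers (Ollama / LM Studio) don't need a real key
  const isLocal = baseURL.includes('localhost') || baseURL.includes('127.0.0.1');
  if (!env.OPENAI_API_KEY && !isLocal) {
    throw new Error('OPENAI_API_KEY is required for remote providers');
  }
  return {
    apiKey: env.OPENAI_API_KEY,
    baseURL,
    model: env.OPENAI_MODEL ?? 'gpt-4o',
  };
}
```

The key design point this illustrates: local endpoints are detected from the base URL, which is why the key is starred as optional for Ollama and LM Studio.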
## Provider Quick Starts

### OpenAI

```bash
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_API_KEY=$OPENAI_API_KEY
export OPENAI_MODEL=gpt-4o
openclaude
```

### DeepSeek

```bash
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_API_KEY=$DEEPSEEK_API_KEY
export OPENAI_BASE_URL=https://api.deepseek.com/v1
export OPENAI_MODEL=deepseek-chat
openclaude
```

### Gemini (via OpenRouter)

```bash
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_API_KEY=$OPENROUTER_API_KEY
export OPENAI_BASE_URL=https://openrouter.ai/api/v1
export OPENAI_MODEL=google/gemini-2.0-flash
openclaude
```

### Ollama (local)

```bash
ollama pull llama3.3:70b
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_BASE_URL=http://localhost:11434/v1
export OPENAI_MODEL=llama3.3:70b
openclaude
```

### Groq

```bash
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_API_KEY=$GROQ_API_KEY
export OPENAI_BASE_URL=https://api.groq.com/openai/v1
export OPENAI_MODEL=llama-3.3-70b-versatile
openclaude
```

### Mistral

```bash
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_API_KEY=$MISTRAL_API_KEY
export OPENAI_BASE_URL=https://api.mistral.ai/v1
export OPENAI_MODEL=mistral-large-latest
openclaude
```

### Azure OpenAI

```bash
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_API_KEY=$AZURE_OPENAI_KEY
export OPENAI_BASE_URL=https://your-resource.openai.azure.com/openai/deployments/your-deployment/v1
export OPENAI_MODEL=gpt-4o
openclaude
```

### Codex

```bash
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_MODEL=codexplan # or codexspark for faster loops
# reads ~/.codex/auth.json automatically if present
# or set: export CODEX_API_KEY=$CODEX_TOKEN
openclaude
```

### LM Studio (local)

```bash
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_BASE_URL=http://localhost:1234/v1
export OPENAI_MODEL=your-model-name
openclaude
```

### Together AI

```bash
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_API_KEY=$TOGETHER_API_KEY
export OPENAI_BASE_URL=https://api.together.xyz/v1
export OPENAI_MODEL=meta-llama/Llama-3.3-70B-Instruct-Turbo
openclaude
```

## Architecture

The shim (`src/services/api/openaiShim.ts`) sits between Claude Code and the provider:

Claude Code Tool System
│
▼
Anthropic SDK interface (duck-typed)
│
▼
openaiShim.ts ← format translation layer
│
▼
OpenAI Chat Completions API
│
▼
Any compatible model

The shim translates between the two protocols: Anthropic `messages` with `tool_use` and `tool_result` blocks become OpenAI `function_call` and `tool` messages, Anthropic `tools` definitions become OpenAI function schemas, and the `system` prompt is carried over as an OpenAI system message.

### Files Changed

src/services/api/openaiShim.ts ← NEW: the shim (724 lines)
src/services/api/client.ts ← routes to shim when CLAUDE_CODE_USE_OPENAI=1
src/utils/model/providers.ts ← added 'openai' provider type
src/utils/model/configs.ts ← added openai model mappings
src/utils/model/model.ts ← respects OPENAI_MODEL for defaults
src/utils/auth.ts ← recognizes OpenAI as valid 3rd-party provider

## Development Scripts

# Run in dev mode (no build)
bun run dev
# Build distribution
bun run build
# Launch with persisted profile (.openclaude-profile.json)
bun run dev:profile
# Launch with OpenAI profile (requires OPENAI_API_KEY in shell)
bun run dev:openai
# Launch with Ollama profile (localhost:11434, llama3.1:8b default)
bun run dev:ollama
# Launch with Codex profile
bun run dev:codex
# Quick startup sanity check
bun run smoke
# Validate provider env + reachability
bun run doctor:runtime
# Machine-readable runtime diagnostics
bun run doctor:runtime:json
# Persist diagnostics report to reports/doctor-runtime.json
bun run doctor:report
# Full local hardening check (typecheck + smoke + runtime doctor)
bun run hardening:check
# Strict hardening (includes project-wide typecheck)
bun run hardening:strict

## Profile Bootstrap

Profiles persist to `.openclaude-profile.json`:

# Auto-detect provider (ollama if running, otherwise openai)
bun run profile:init
# Bootstrap for OpenAI
bun run profile:init -- --provider openai --api-key $OPENAI_API_KEY
# Bootstrap for Ollama with custom model
bun run profile:init -- --provider ollama --model llama3.1:8b
# Bootstrap for Codex
bun run profile:init -- --provider codex --model codexspark
bun run profile:codex

After bootstrapping, launch with `bun run dev:profile`.

## How the Shim Is Wired

```ts
// src/services/api/client.ts pattern — routing to the shim
import Anthropic from '@anthropic-ai/sdk';
import { openaiShim } from './openaiShim.js';

const useOpenAI = process.env.CLAUDE_CODE_USE_OPENAI === '1';

const client = useOpenAI
  ? openaiShim({
      apiKey: process.env.OPENAI_API_KEY,
      baseURL: process.env.OPENAI_BASE_URL ?? 'https://api.openai.com/v1',
      model: process.env.OPENAI_MODEL ?? 'gpt-4o',
    })
  : new Anthropic({ apiKey: process.env.ANTHROPIC_API_KEY });
```

```ts
// Streaming usage pattern (mirrors Anthropic SDK interface)
const stream = await client.messages.stream({
  model: process.env.OPENAI_MODEL!,
  max_tokens: 32000,
  system: 'You are a helpful coding assistant.',
  messages: [
    { role: 'user', content: 'Refactor this function for readability.' }
  ],
  tools: myTools, // Anthropic-format tool definitions — shim translates them
});

for await (const event of stream) {
  // events arrive in Anthropic format regardless of underlying provider
  if (event.type === 'content_block_delta') {
    process.stdout.write(event.delta.text ?? '');
  }
}
```

## Model Recommendations

| Model | Tool Calling | Code Quality | Speed |
|---|---|---|---|
| GPT-4o | Excellent | Excellent | Fast |
| DeepSeek-V3 | Great | Great | Fast |
| Gemini 2.0 Flash | Great | Good | Very Fast |
| Llama 3.3 70B | Good | Good | Medium |
| Mistral Large | Good | Good | Fast |
| GPT-4o-mini | Good | Good | Very Fast |
| Qwen 2.5 72B | Good | Good | Medium |
| Models < 7B | Limited | Limited | Very Fast |
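The tool-calling quality above depends on each model receiving well-formed function schemas from the shim's translation layer. As a sketch of the Anthropic-to-OpenAI tool-definition mapping (the `toOpenAITool` helper and type names are illustrative, not the actual `openaiShim.ts` internals):

```typescript
// Hypothetical illustration of the tool-schema translation an Anthropic-to-OpenAI
// shim performs; names below are not the actual openaiShim.ts internals.
interface AnthropicTool {
  name: string;
  description?: string;
  input_schema: Record<string, unknown>; // JSON Schema for the tool's input
}

interface OpenAITool {
  type: 'function';
  function: {
    name: string;
    description?: string;
    parameters: Record<string, unknown>; // same JSON Schema, different envelope
  };
}

function toOpenAITool(tool: AnthropicTool): OpenAITool {
  // Anthropic's flat tool definition maps onto OpenAI's nested "function"
  // envelope; the JSON Schema body carries over unchanged.
  return {
    type: 'function',
    function: {
      name: tool.name,
      description: tool.description,
      parameters: tool.input_schema,
    },
  };
}
```

Because the JSON Schema body is identical in both formats, the translation is a re-wrapping rather than a rewrite, which is why small models with weak schema-following (the "< 7B" row) struggle equally on either side.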
Slash commands such as `/commit`, `/review`, `/compact`, `/diff`, and `/doctor` continue to work (see also the `doctor:runtime` script).

## Troubleshooting

**Placeholder API key**

```
Error: OPENAI_API_KEY looks like a placeholder (SUA_CHAVE)
```

Export a real key:

```bash
export OPENAI_API_KEY=$YOUR_ACTUAL_KEY
```

**Ollama not running**

```bash
ollama serve &
ollama pull llama3.3:70b
bun run dev:ollama
```

**Azure endpoint shape**

Point `OPENAI_BASE_URL` at the deployment-scoped URL:

```
https://<resource>.openai.azure.com/openai/deployments/<deployment>/v1
```

**Codex authentication**

`~/.codex/auth.json` is read automatically if present; otherwise:

```bash
export CODEX_API_KEY=$YOUR_CODEX_TOKEN
export CODEX_AUTH_JSON_PATH=/path/to/auth.json
```

**Diagnostics**

```bash
bun run doctor:runtime       # human-readable
bun run doctor:runtime:json  # machine-readable JSON
bun run doctor:report        # saves to reports/doctor-runtime.json
```
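As an illustration of the kind of check behind the placeholder-key error above, here is a hypothetical sketch. The `looksLikePlaceholder` and `checkApiKey` helpers and the pattern list are assumptions for illustration, not the real `doctor:runtime` implementation.

```typescript
// Illustrative sketch of a doctor-style env check that flags placeholder API
// keys, as in the "looks like a placeholder (SUA_CHAVE)" error above.
// Patterns and function names are assumptions, not the real doctor:runtime code.
const PLACEHOLDER_PATTERNS: RegExp[] = [
  /^SUA_CHAVE$/i,                      // Portuguese "your key", from copy-pasted examples
  /^(YOUR|MY)[-_]?(API[-_]?)?KEY$/i,   // YOUR_API_KEY, MY_KEY, ...
  /^(sk-)?x{4,}$/i,                    // sk-xxxx... style dummies
  /^<.*>$/,                            // <paste-key-here>
];

function looksLikePlaceholder(key: string): boolean {
  return PLACEHOLDER_PATTERNS.some((p) => p.test(key.trim()));
}

function checkApiKey(env: Record<string, string | undefined>): string[] {
  const problems: string[] = [];
  const key = env.OPENAI_API_KEY;
  if (key && looksLikePlaceholder(key)) {
    problems.push(`OPENAI_API_KEY looks like a placeholder (${key})`);
  }
  return problems;
}
```

Failing fast on obviously fake keys keeps the error local and descriptive instead of surfacing later as an opaque 401 from the provider.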