Use when the user asks to run Codex CLI (codex exec, codex resume) or references OpenAI Codex for code analysis, refactoring, or automated editing. Uses GPT-5.2 by default for state-of-the-art software engineering.
Install the skill with `npx skill4agent add davila7/claude-code-templates`.

The skill runs the Codex CLI non-interactively via `codex exec`, defaulting to `gpt-5.2`, and uses the `AskUserQuestion` tool to confirm options with the user before escalating access. Reasoning effort levels are `xhigh`, `high`, `medium`, and `low`. Key flags:

- `-m, --model <MODEL>` — select the model.
- `--config model_reasoning_effort="<high|medium|low>"` — set the reasoning effort.
- `--sandbox <read-only|workspace-write|danger-full-access>` — choose the sandbox mode; `--sandbox read-only` is the safe choice for analysis.
- `--full-auto` — low-friction preset that permits workspace writes.
- `-C, --cd <DIR>` — run from another working directory.
- `--skip-git-repo-check` — allow running outside a Git repository.

To resume the most recent session, run `codex exec --skip-git-repo-check resume --last`, or pipe a follow-up prompt into it with `echo "your prompt here" | codex exec --skip-git-repo-check resume --last 2>/dev/null` (the `2>/dev/null` suppresses stderr noise).

| Use case | Sandbox mode | Key flags |
|---|---|---|
| Read-only review or analysis | `read-only` | `--sandbox read-only` |
| Apply local edits | `workspace-write` | `--full-auto` |
| Permit network or broad access | `danger-full-access` | `--sandbox danger-full-access` |
| Resume recent session | Inherited from original | `resume --last` |
| Run from another directory | Match task needs | `-C, --cd <DIR>` |
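The use-case-to-flag mapping above can be sketched as a small shell helper; the function name and task labels are assumptions for illustration, and only the flags come from the table:

```shell
# Hypothetical helper mapping a task type to the sandbox flags from the
# table above (function name and task labels are illustrative).
codex_flags() {
  case "$1" in
    review)  echo "--sandbox read-only" ;;
    edit)    echo "--full-auto" ;;
    network) echo "--sandbox danger-full-access" ;;
    *)       echo "--sandbox read-only" ;;   # default to the safest mode
  esac
}

codex_flags review    # → --sandbox read-only
codex_flags network   # → --sandbox danger-full-access
```

Defaulting the fall-through case to `read-only` keeps unrecognized tasks in the least-privileged sandbox.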
| Model | Best for | Context window | Key features |
|---|---|---|---|
| Max model | Ultra-complex reasoning, deep problem analysis | 400K input / 128K output | 76.3% SWE-bench, adaptive reasoning, $1.25/$10.00 |
| Flagship model | Software engineering, agentic coding workflows | 400K input / 128K output | 76.3% SWE-bench, adaptive reasoning, $1.25/$10.00 |
| | Cost-efficient coding (4x more usage allowance) | 400K input / 128K output | Near-SOTA performance, $0.25/$2.00 |
| | Ultra-complex reasoning, deep problem analysis | 400K input / 128K output | Adaptive thinking depth, runs 2x slower on hardest tasks |
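The model and effort flags can be combined into a single `codex exec` invocation. A minimal sketch, assuming the flag names shown earlier in this section; the command is only assembled and printed here, not executed, and the chosen values are illustrative:

```shell
# Assemble (but do not run) a cost-conscious codex exec invocation.
MODEL="gpt-5.2"    # flagship default; a cheaper model trades capability for cost
EFFORT="medium"    # one of high|medium|low (xhigh where supported)
CMD="codex exec -m $MODEL --config model_reasoning_effort=\"$EFFORT\" --sandbox read-only"
echo "$CMD"
# prints: codex exec -m gpt-5.2 --config model_reasoning_effort="medium" --sandbox read-only
```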
Additional usage notes:

- Verify the CLI is installed with `codex --version`.
- Resume and continue a session with `codex exec resume --last`, or pipe a new prompt into it: `echo "new prompt" | codex exec resume --last 2>/dev/null`.
- Escalate carefully: `--full-auto` and `--sandbox danger-full-access` broaden write and network access, so confirm with the user via `AskUserQuestion` before using them, and add `--skip-git-repo-check` only when running outside a Git repository.
- The default model is `gpt-5.2`; change it with the `/model` command in an interactive session, or persistently in `~/.codex/config.toml`. Reasoning effort can be set to `xhigh`, `high`, `medium`, or `low`.
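A small sketch of checking whether `~/.codex/config.toml` overrides the default model; the `model` key name is an assumption, and the path is passed as a parameter so the check stays testable:

```shell
# Print the model configured in a Codex config file, or the default.
# Assumes the config uses a top-level "model" key (an assumption).
show_model() {
  if [ -f "$1" ] && grep -q '^model' "$1"; then
    grep '^model' "$1"
  else
    echo "model = \"gpt-5.2\"  # default"
  fi
}

show_model ./no-such-config.toml
# prints: model = "gpt-5.2"  # default
```

In practice you would call it as `show_model "$HOME/.codex/config.toml"`.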