codex
Use when the user asks to run Codex CLI (codex exec, codex resume) or references OpenAI Codex for code analysis, refactoring, or automated editing. Uses GPT-5.2 by default for state-of-the-art software engineering.
3.3k installs
Source: softaworks/agent-toolkit
NPX Install: npx skill4agent add softaworks/agent-toolkit codex
SKILL.md Content
Codex Skill Guide
Running a Task
- Default to model `gpt-5.2`. Ask the user (via `AskUserQuestion`) which reasoning effort to use (`xhigh`, `high`, `medium`, or `low`). User can override the model if needed (see Model Options below).
- Select the sandbox mode required for the task; default to `--sandbox read-only` unless edits or network access are necessary.
- Assemble the command with the appropriate options: `-m, --model <MODEL>`, `--config model_reasoning_effort="<high|medium|low>"`, `--sandbox <read-only|workspace-write|danger-full-access>`, `--full-auto`, `-C, --cd <DIR>`, `--skip-git-repo-check`.
- Always use `--skip-git-repo-check`.
- When continuing a previous session, pipe the prompt via stdin. When resuming, don't use any configuration flags unless explicitly requested by the user, e.g. if they specify the model or the reasoning effort when requesting to resume a session. Resume syntax: `echo "your prompt here" | codex exec --skip-git-repo-check resume --last 2>/dev/null`. All flags have to be inserted between `exec` and `resume`.
- IMPORTANT: By default, append `2>/dev/null` to all `codex exec` commands to suppress thinking tokens (stderr). Only show stderr if the user explicitly requests to see thinking tokens or if debugging is needed.
- Run the command, capture stdout/stderr (filtered as appropriate), and summarize the outcome for the user.
- After Codex completes, inform the user: "You can resume this Codex session at any time by saying 'codex resume' or asking me to continue with additional analysis or changes."
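The assembly steps above can be sketched as a small shell snippet. The model, effort, sandbox, and prompt values here are example placeholders, and `2>/dev/null` is appended per the stderr rule:

```shell
# Sketch: assemble a codex exec invocation from the chosen options.
# MODEL/EFFORT/SANDBOX/PROMPT values are placeholders for this example.
MODEL="gpt-5.2"
EFFORT="medium"
SANDBOX="read-only"
PROMPT="Review src/ and list unused imports"

# Build the full command string, suppressing thinking tokens (stderr).
CMD="codex exec -m $MODEL --config model_reasoning_effort=\"$EFFORT\" --sandbox $SANDBOX --skip-git-repo-check \"$PROMPT\" 2>/dev/null"
echo "$CMD"
```

Printing the command before running it also lets the user confirm the flags match the agreed model and sandbox mode.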
Quick Reference
| Use case | Sandbox mode | Key flags |
|---|---|---|
| Read-only review or analysis | `read-only` | `--sandbox read-only` |
| Apply local edits | `workspace-write` | `--sandbox workspace-write` or `--full-auto` |
| Permit network or broad access | `danger-full-access` | `--sandbox danger-full-access` |
| Resume recent session | Inherited from original | `resume --last` |
| Run from another directory | Match task needs | `-C, --cd <DIR>` |
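The use-case-to-flag mapping above can be sketched as a small helper; the labels `review`, `edit`, and `network` are names invented for this example, not codex options:

```shell
# Map a use case to the sandbox-related flags for codex exec.
flags_for() {
  case "$1" in
    review)  echo "--sandbox read-only" ;;
    edit)    echo "--sandbox workspace-write --full-auto" ;;
    network) echo "--sandbox danger-full-access" ;;
    *)       echo "--sandbox read-only" ;;  # safest default
  esac
}

flags_for edit
```

Defaulting the fall-through case to `read-only` keeps unrecognized tasks in the least-privileged mode.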
Model Options
| Model | Best for | Context window | Key features |
|---|---|---|---|
| Max model | Ultra-complex reasoning, deep problem analysis | 400K input / 128K output | 76.3% SWE-bench, adaptive reasoning, $1.25/$10.00 |
| Flagship model | Software engineering, agentic coding workflows | 400K input / 128K output | 76.3% SWE-bench, adaptive reasoning, $1.25/$10.00 |
| | Cost-efficient coding (4x more usage allowance) | 400K input / 128K output | Near SOTA performance, $0.25/$2.00 |
| | Ultra-complex reasoning, deep problem analysis | 400K input / 128K output | Adaptive thinking depth, runs 2x slower on hardest tasks |
GPT-5.2 Advantages: 76.3% SWE-bench (vs 72.8% GPT-5), 30% faster on average tasks, better tool handling, reduced hallucinations, improved code quality. Knowledge cutoff: September 30, 2024.
Reasoning Effort Levels:
- `xhigh` - Ultra-complex tasks (deep problem analysis, complex reasoning, deep understanding of the problem)
- `high` - Complex tasks (refactoring, architecture, security analysis, performance optimization)
- `medium` - Standard tasks (refactoring, code organization, feature additions, bug fixes)
- `low` - Simple tasks (quick fixes, simple changes, code formatting, documentation)
Cached Input Discount: 90% off ($0.125/M tokens) for repeated context, cache lasts up to 24 hours.
Following Up
- After every `codex` command, immediately use `AskUserQuestion` to confirm next steps, collect clarifications, or decide whether to resume with `codex exec resume --last`.
- When resuming, pipe the new prompt via stdin: `echo "new prompt" | codex exec resume --last 2>/dev/null`. The resumed session automatically uses the same model, reasoning effort, and sandbox mode from the original session.
- Restate the chosen model, reasoning effort, and sandbox mode when proposing follow-up actions.
Error Handling
- Stop and report failures whenever `codex --version` or a `codex exec` command exits non-zero; request direction before retrying.
- Before you use high-impact flags (`--full-auto`, `--sandbox danger-full-access`, `--skip-git-repo-check`), ask the user for permission using `AskUserQuestion` unless it was already given.
- When output includes warnings or partial results, summarize them and ask how to adjust using `AskUserQuestion`.
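The exit-code rule above can be sketched as a wrapper function; `true` and `false` stand in for real codex commands so the sketch runs without the CLI installed:

```shell
# Run a command with stderr suppressed and stop on non-zero exit
# instead of silently retrying.
run_codex() {
  "$@" 2>/dev/null
  status=$?
  if [ "$status" -ne 0 ]; then
    echo "command exited with code $status; report and ask before retrying" >&2
  fi
  return "$status"
}

run_codex true && echo "succeeded"
if run_codex false; then echo "unexpected"; fi
```

The wrapper surfaces the exit code on stderr and propagates it, so the caller can pause for user direction rather than retry automatically.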
CLI Version
Requires Codex CLI v0.57.0 or later for GPT-5.2 model support. The CLI defaults to `gpt-5.2` on both macOS/Linux and Windows. Check version: `codex --version`.
Use the `/model` slash command within a Codex session to switch models, or configure the default in `~/.codex/config.toml`.
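A minimal sketch of the config file mentioned above; the `model` and `model_reasoning_effort` keys mirror the CLI flags in this guide, and the values are examples:

```toml
# ~/.codex/config.toml (sketch, not a complete config)
model = "gpt-5.2"
model_reasoning_effort = "medium"
```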