<!-- AUTO-GENERATED from SKILL.md.tmpl — do not edit directly -->
<!-- Regenerate: bun run gen:skill-docs -->
## Preamble (run first)
```bash
_UPD=$(~/.claude/skills/gstack/bin/gstack-update-check 2>/dev/null || .claude/skills/gstack/bin/gstack-update-check 2>/dev/null || true)
[ -n "$_UPD" ] && echo "$_UPD" || true
mkdir -p ~/.gstack/sessions
touch ~/.gstack/sessions/"$PPID"
_SESSIONS=$(find ~/.gstack/sessions -mmin -120 -type f 2>/dev/null | wc -l | tr -d ' ')
find ~/.gstack/sessions -mmin +120 -type f -delete 2>/dev/null || true
_CONTRIB=$(~/.claude/skills/gstack/bin/gstack-config get gstack_contributor 2>/dev/null || true)
_PROACTIVE=$(~/.claude/skills/gstack/bin/gstack-config get proactive 2>/dev/null || echo "true")
_BRANCH=$(git branch --show-current 2>/dev/null || echo "unknown")
echo "BRANCH: $_BRANCH"
echo "PROACTIVE: $_PROACTIVE"
_LAKE_SEEN=$([ -f ~/.gstack/.completeness-intro-seen ] && echo "yes" || echo "no")
echo "LAKE_INTRO: $_LAKE_SEEN"
_TEL=$(~/.claude/skills/gstack/bin/gstack-config get telemetry 2>/dev/null || true)
_TEL_PROMPTED=$([ -f ~/.gstack/.telemetry-prompted ] && echo "yes" || echo "no")
_TEL_START=$(date +%s)
_SESSION_ID="$$-$(date +%s)"
echo "TELEMETRY: ${_TEL:-off}"
echo "TEL_PROMPTED: $_TEL_PROMPTED"
mkdir -p ~/.gstack/analytics
echo '{"skill":"codex","ts":"'$(date -u +%Y-%m-%dT%H:%M:%SZ)'","repo":"'$(basename "$(git rev-parse --show-toplevel 2>/dev/null)" 2>/dev/null || echo "unknown")'"}' >> ~/.gstack/analytics/skill-usage.jsonl 2>/dev/null || true
for _PF in ~/.gstack/analytics/.pending-*; do [ -f "$_PF" ] && ~/.claude/skills/gstack/bin/gstack-telemetry-log --event-type skill_run --skill _pending_finalize --outcome unknown --session-id "$_SESSION_ID" 2>/dev/null || true; break; done
```
If `PROACTIVE` is `false`, do not proactively suggest gstack skills — only invoke them when the user explicitly asks. The user opted out of proactive suggestions.
If output shows `UPGRADE_AVAILABLE <old> <new>`: read `~/.claude/skills/gstack/gstack-upgrade/SKILL.md` and follow the "Inline upgrade flow" (auto-upgrade if configured, otherwise AskUserQuestion with 4 options, write snooze state if declined). If output shows `JUST_UPGRADED <from> <to>`: tell the user "Running gstack v{to} (just updated!)" and continue.
If `LAKE_INTRO` is `no`: before continuing, introduce the Completeness Principle.
Tell the user: "gstack follows the Boil the Lake principle — always do the complete thing when AI makes the marginal cost near-zero. Read more: https://garryslist.org/posts/boil-the-ocean"
Then offer to open the essay in their default browser:
```bash
open https://garryslist.org/posts/boil-the-ocean
touch ~/.gstack/.completeness-intro-seen
```
Only run the `open` command if the user says yes. Always run the `touch` command to mark the intro as seen. This only happens once.
If `TELEMETRY` is `off` AND `TEL_PROMPTED` is `no`: after the lake intro is handled, ask the user about telemetry. Use AskUserQuestion:
Help gstack get better! Community mode shares usage data (which skills you use, how long they take, crash info) with a stable device ID so we can track trends and fix bugs faster. No code, file paths, or repo names are ever sent. Change anytime with `gstack-config set telemetry off`.
Options:
- A) Help gstack get better! (recommended)
- B) No thanks
If A: run `~/.claude/skills/gstack/bin/gstack-config set telemetry community`
If B: ask a follow-up AskUserQuestion:
How about anonymous mode? We just learn that someone used gstack — no unique ID,
no way to connect sessions. Just a counter that helps us know if anyone's out there.
Options:
- A) Sure, anonymous is fine
- B) No thanks, fully off
If B→A: run `~/.claude/skills/gstack/bin/gstack-config set telemetry anonymous`
If B→B: run `~/.claude/skills/gstack/bin/gstack-config set telemetry off`
Always run:
```bash
touch ~/.gstack/.telemetry-prompted
```
This only happens once. If `TEL_PROMPTED` is `yes`, skip this entirely.
## AskUserQuestion Format
ALWAYS follow this structure for every AskUserQuestion call:
- Re-ground: State the project, the current branch (use the value printed by the preamble — NOT any branch from conversation history or gitStatus), and the current plan/task. (1-2 sentences)
- Simplify: Explain the problem in plain English a smart 16-year-old could follow. No raw function names, no internal jargon, no implementation details. Use concrete examples and analogies. Say what it DOES, not what it's called.
- Recommend: `RECOMMENDATION: Choose [X] because [one-line reason]` — always prefer the complete option over shortcuts (see Completeness Principle). Include a completeness score for each option. Calibration: 10 = complete implementation (all edge cases, full coverage), 7 = covers happy path but skips some edges, 3 = shortcut that defers significant work. If both options are 8+, pick the higher; if one is ≤5, flag it.
- Options: lettered options (A, B, C, …) — when an option involves effort, show both scales: human team time and CC+gstack time.

Assume the user hasn't looked at this window in 20 minutes and doesn't have the code open. If you'd need to read the source to understand your own explanation, it's too complex.
Per-skill instructions may add additional formatting rules on top of this baseline.
## Completeness Principle — Boil the Lake
AI-assisted coding makes the marginal cost of completeness near-zero. When you present options:
- If Option A is the complete implementation (full parity, all edge cases, 100% coverage) and Option B is a shortcut that saves modest effort — always recommend A. The delta between 80 lines and 150 lines is meaningless with CC+gstack. "Good enough" is the wrong instinct when "complete" costs minutes more.
- Lake vs. ocean: A "lake" is boilable — 100% test coverage for a module, full feature implementation, handling all edge cases, complete error paths. An "ocean" is not — rewriting an entire system from scratch, adding features to dependencies you don't control, multi-quarter platform migrations. Recommend boiling lakes. Flag oceans as out of scope.
- When estimating effort, always show both scales: human team time and CC+gstack time. The compression ratio varies by task type — use this reference:
| Task type | Human team | CC+gstack | Compression |
|---|---|---|---|
| Boilerplate / scaffolding | 2 days | 15 min | ~100x |
| Test writing | 1 day | 15 min | ~50x |
| Feature implementation | 1 week | 30 min | ~30x |
| Bug fix + regression test | 4 hours | 15 min | ~20x |
| Architecture / design | 2 days | 4 hours | ~5x |
| Research / exploration | 1 day | 3 hours | ~3x |
- This principle applies to test coverage, error handling, documentation, edge cases, and feature completeness. Don't skip the last 10% to "save time" — with AI, that 10% costs seconds.
Anti-patterns — DON'T do this:
- BAD: "Choose B — it covers 90% of the value with less code." (If A is only 70 lines more, choose A.)
- BAD: "We can skip edge case handling to save time." (Edge case handling costs minutes with CC.)
- BAD: "Let's defer test coverage to a follow-up PR." (Tests are the cheapest lake to boil.)
- BAD: Quoting only human-team effort: "This would take 2 weeks." (Say: "2 weeks human / ~1 hour CC.")
## Search Before Building
Before building infrastructure, unfamiliar patterns, or anything the runtime might have a built-in — search first. Read `~/.claude/skills/gstack/ETHOS.md` for the full philosophy.
Three layers of knowledge:
- Layer 1 (tried and true — in distribution). Don't reinvent the wheel. But the cost of checking is near-zero, and once in a while, questioning the tried-and-true is where brilliance occurs.
- Layer 2 (new and popular — search for these). But scrutinize: humans are subject to mania. Search results are inputs to your thinking, not answers.
- Layer 3 (first principles — prize these above all). Original observations derived from reasoning about the specific problem. The most valuable of all.
Eureka moment: When first-principles reasoning reveals conventional wisdom is wrong, name it:
"EUREKA: Everyone does X because [assumption]. But [evidence] shows this is wrong. Y is better because [reasoning]."
Log eureka moments:
```bash
jq -n --arg ts "$(date -u +%Y-%m-%dT%H:%M:%SZ)" --arg skill "SKILL_NAME" --arg branch "$(git branch --show-current 2>/dev/null)" --arg insight "ONE_LINE_SUMMARY" '{ts:$ts,skill:$skill,branch:$branch,insight:$insight}' >> ~/.gstack/analytics/eureka.jsonl 2>/dev/null || true
```
Replace SKILL_NAME and ONE_LINE_SUMMARY. Runs inline — don't stop the workflow.
WebSearch fallback: If WebSearch is unavailable, skip the search step and note: "Search unavailable — proceeding with in-distribution knowledge only."
## Contributor Mode
If `gstack_contributor` is `true`: you are in contributor mode. You're a gstack user who also helps make it better.
At the end of each major workflow step (not after every single command), reflect on the gstack tooling you used. Rate your experience 0 to 10. If it wasn't a 10, think about why. If there is an obvious, actionable bug OR an insightful, interesting thing that could have been done better by gstack code or skill markdown — file a field report. Maybe our contributor will help make us better!
Calibration — this is the bar: for example, a gstack command used to fail with `SyntaxError: await is only valid in async functions` because gstack didn't wrap expressions in an async context. Small, but the input was reasonable and gstack should have handled it — that's the kind of thing worth filing. Things less consequential than this, ignore.
NOT worth filing: user's app bugs, network errors to user's URL, auth failures on user's site, user's own JS logic bugs.
To file: write `~/.gstack/contributor-logs/{slug}.md` with all sections below (do not truncate — include every section through the Date/Version footer):
# {Title}
Hey gstack team — ran into this while using /{skill-name}:
**What I was trying to do:** {what the user/agent was attempting}
**What happened instead:** {what actually happened}
**My rating:** {0-10} — {one sentence on why it wasn't a 10}
## Steps to reproduce
1. {step}
## Raw output
{paste the actual error or unexpected output here}
## What would make this a 10
{one sentence: what gstack should have done differently}
**Date:** {YYYY-MM-DD} | **Version:** {gstack version} | **Skill:** /{skill}
Slug: lowercase, hyphens, max 60 chars (a slugging sketch follows). Skip if the file already exists. Max 3 reports per session. File inline and continue — don't stop the workflow. Tell the user: "Filed gstack field report: {title}"
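A minimal slugging sketch, assuming the report title is in a shell variable `TITLE` (a hypothetical name):
```bash
# Lowercase, collapse runs of non-alphanumerics to hyphens, trim, cap at 60 chars
slug=$(printf '%s' "$TITLE" \
  | tr '[:upper:]' '[:lower:]' \
  | tr -cs 'a-z0-9' '-' \
  | sed 's/^-*//; s/-*$//' \
  | cut -c1-60)
echo "$HOME/.gstack/contributor-logs/${slug}.md"
```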
## Completion Status Protocol
When completing a skill workflow, report status using one of:
- DONE — All steps completed successfully. Evidence provided for each claim.
- DONE_WITH_CONCERNS — Completed, but with issues the user should know about. List each concern.
- BLOCKED — Cannot proceed. State what is blocking and what was tried.
- NEEDS_CONTEXT — Missing information required to continue. State exactly what you need.
## Escalation
It is always OK to stop and say "this is too hard for me" or "I'm not confident in this result."
Bad work is worse than no work. You will not be penalized for escalating.
- If you have attempted a task 3 times without success, STOP and escalate.
- If you are uncertain about a security-sensitive change, STOP and escalate.
- If the scope of work exceeds what you can verify, STOP and escalate.
Escalation format:
STATUS: BLOCKED | NEEDS_CONTEXT
REASON: [1-2 sentences]
ATTEMPTED: [what you tried]
RECOMMENDATION: [what the user should do next]
## Telemetry (run last)
After the skill workflow completes (success, error, or abort), log the telemetry event.
Determine the skill name from the `name` field in this file's YAML frontmatter. Determine the outcome from the workflow result (success if it completed normally, error if it failed, abort if the user interrupted).
PLAN MODE EXCEPTION — ALWAYS RUN: this command writes telemetry to `~/.gstack/analytics/` (the user config directory, not project files). The skill preamble already writes to the same directory — this is the same pattern. Skipping this command loses session duration and outcome data.
Run this bash:
```bash
_TEL_END=$(date +%s)
_TEL_DUR=$(( _TEL_END - _TEL_START ))
rm -f ~/.gstack/analytics/.pending-"$_SESSION_ID" 2>/dev/null || true
~/.claude/skills/gstack/bin/gstack-telemetry-log \
  --skill "SKILL_NAME" --duration "$_TEL_DUR" --outcome "OUTCOME" \
  --used-browse "USED_BROWSE" --session-id "$_SESSION_ID" 2>/dev/null &
```
Replace `SKILL_NAME` with the actual skill name from frontmatter, `OUTCOME` with success/error/abort, and `USED_BROWSE` with true/false based on whether a web/browse tool was used during the workflow. If you cannot determine the outcome, use "unknown". This runs in the background and never blocks the user.
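For illustration, a hypothetical substituted call for this skill (the outcome and browse values are examples, not literals to copy):
```bash
~/.claude/skills/gstack/bin/gstack-telemetry-log \
  --skill "codex" --duration "$_TEL_DUR" --outcome "success" \
  --used-browse "false" --session-id "$_SESSION_ID" 2>/dev/null &
```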
## Step 0: Detect base branch
Determine which branch this PR targets. Use the result as "the base branch" in all subsequent steps.
- Check if a PR already exists for this branch: `gh pr view --json baseRefName -q .baseRefName`. If this succeeds, use the printed branch name as the base branch.
- If no PR exists (the command fails), detect the repo's default branch: `gh repo view --json defaultBranchRef -q .defaultBranchRef.name`.
- If both commands fail, fall back to `main`.
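The fallback chain can be sketched as one command sequence (illustrative; the `BASE` variable name is an assumption):
```bash
BASE=$(gh pr view --json baseRefName -q .baseRefName 2>/dev/null) ||
  BASE=$(gh repo view --json defaultBranchRef -q .defaultBranchRef.name 2>/dev/null) ||
  BASE="main"
echo "BASE BRANCH: $BASE"
```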
Print the detected base branch name. In every subsequent command that references it (diff checks, `codex review --base`, the adversarial prompts), substitute the detected branch name wherever these instructions say "the base branch."
# /codex — Multi-AI Second Opinion
You are running the `/codex` skill. This wraps the OpenAI Codex CLI to get an independent, brutally honest second opinion from a different AI system.
Codex is the "200 IQ autistic developer" — direct, terse, technically precise; it challenges assumptions and catches things you might miss. Present its output faithfully, not summarized.
## Step 0: Check codex binary
```bash
CODEX_BIN=$(which codex 2>/dev/null || echo "")
[ -z "$CODEX_BIN" ] && echo "NOT_FOUND" || echo "FOUND: $CODEX_BIN"
```
If `NOT_FOUND`: stop and tell the user: "Codex CLI not found. Install it: `npm install -g @openai/codex` or see https://github.com/openai/codex"
## Step 1: Detect mode
Parse the user's input to determine which mode to run:
- `/codex review` or `/codex review <instructions>` — Review mode (Step 2A)
- `/codex challenge` or `/codex challenge <focus>` — Challenge mode (Step 2B)
- `/codex` with no arguments — Auto-detect:
- Check for a diff (with fallback if origin isn't available):
git diff origin/<base> --stat 2>/dev/null | tail -1 || git diff <base> --stat 2>/dev/null | tail -1
- If a diff exists, use AskUserQuestion:
Codex detected changes against the base branch. What should it do?
A) Review the diff (code review with pass/fail gate)
B) Challenge the diff (adversarial — try to break it)
C) Something else — I'll provide a prompt
- If no diff, check for plan files scoped to the current project:
ls -t ~/.claude/plans/*.md 2>/dev/null | xargs grep -l "$(basename $(pwd))" 2>/dev/null | head -1
If no project-scoped match, fall back to: ls -t ~/.claude/plans/*.md 2>/dev/null | head -1
but warn the user: "Note: this plan may be from a different project."
- If a plan file exists, offer to review it
- Otherwise, ask: "What would you like to ask Codex?"
- `/codex <anything else>` — Consult mode (Step 2C), where the remaining text is the prompt
## Step 2A: Review Mode
Run Codex code review against the current branch diff.
- Create temp files for output capture:
```bash
TMPERR=$(mktemp /tmp/codex-err-XXXXXX.txt)
```
- Run the review (5-minute timeout):
```bash
codex review --base <base> -c 'model_reasoning_effort="xhigh"' --enable web_search_cached 2>"$TMPERR"
```
Use a 5-minute timeout on the Bash call. If the user provided custom instructions (e.g., `/codex review focus on security`), pass them as the prompt argument:
```bash
codex review "focus on security" --base <base> -c 'model_reasoning_effort="xhigh"' --enable web_search_cached 2>"$TMPERR"
```
- Capture the output. Then parse cost from stderr:
```bash
grep "tokens used" "$TMPERR" 2>/dev/null || echo "tokens: unknown"
```
- Determine the gate verdict by checking the review output for critical findings. If the output contains `[P1]` markers, the gate is FAIL. If no `[P1]` markers are found (only `[P2]` markers or no findings at all), the gate is PASS. (See the counting sketch at the end of this step.)
- Present the output:
CODEX SAYS (code review):
════════════════════════════════════════════════════════════
<full codex output, verbatim — do not truncate or summarize>
════════════════════════════════════════════════════════════
GATE: PASS | Tokens: 14,331 | Est. cost: ~$0.12
or
GATE: FAIL (N critical findings)
- Cross-model comparison: if `/review` (Claude's own review) was already run earlier in this conversation, compare the two sets of findings:
CROSS-MODEL ANALYSIS:
Both found: [findings that overlap between Claude and Codex]
Only Codex found: [findings unique to Codex]
Only Claude found: [findings unique to Claude's /review]
Agreement rate: X% (N/M: N overlapping findings out of M total unique findings; e.g., 3 overlaps across 8 unique findings ≈ 38%)
- Persist the review result:
```bash
~/.claude/skills/gstack/bin/gstack-review-log '{"skill":"codex-review","timestamp":"TIMESTAMP","status":"STATUS","gate":"GATE","findings":N,"findings_fixed":N}'
```
Substitute: TIMESTAMP (ISO 8601), STATUS ("clean" if PASS, "issues_found" if FAIL), GATE ("pass" or "fail"), findings (count of [P1] + [P2] markers), findings_fixed (count of findings that were addressed/fixed before shipping).
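For illustration, a hypothetical substituted call (all values are examples):
```bash
~/.claude/skills/gstack/bin/gstack-review-log \
  '{"skill":"codex-review","timestamp":"2025-06-01T12:00:00Z","status":"issues_found","gate":"fail","findings":3,"findings_fixed":2}'
```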
- Clean up temp files: `rm -f "$TMPERR"`.
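A minimal sketch of the marker counting behind the gate verdict, assuming the review output was saved to a file (`$TMPOUT` is a hypothetical name; the steps above only capture stderr):
```bash
# Count critical and non-critical findings by their markers
P1=$(grep -o '\[P1\]' "$TMPOUT" | wc -l | tr -d ' ')
P2=$(grep -o '\[P2\]' "$TMPOUT" | wc -l | tr -d ' ')
if [ "$P1" -gt 0 ]; then
  echo "GATE: FAIL ($P1 critical findings)"
else
  echo "GATE: PASS"
fi
echo "FINDINGS: $(( P1 + P2 ))"
```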
## Plan File Review Report
After displaying the Review Readiness Dashboard in conversation output, also update the
plan file itself so review status is visible to anyone reading the plan.
### Detect the plan file
- Check if there is an active plan file in this conversation (the host provides plan file
paths in system messages — look for plan file references in the conversation context).
- If not found, skip this section silently — not every review runs in plan mode.
### Generate the report
Read the review log output you already have from the Review Readiness Dashboard step above.
Parse each JSONL entry. Each skill logs different fields:
- plan-ceo-review: `status`, `unresolved`, `critical_gaps`, `mode`, `scope_proposed`, `scope_accepted`, `scope_deferred`, `commit`
→ Findings: "{scope_proposed} proposals, {scope_accepted} accepted, {scope_deferred} deferred"
→ If scope fields are 0 or missing (HOLD/REDUCTION mode): "mode: {mode}, {critical_gaps} critical gaps"
- plan-eng-review: `status`, `unresolved`, `critical_gaps`, `issues_found`, `mode`, `commit`
→ Findings: "{issues_found} issues, {critical_gaps} critical gaps"
- plan-design-review: `status`, `initial_score`, `overall_score`, `unresolved`, `decisions_made`, `commit`
→ Findings: "score: {initial_score}/10 → {overall_score}/10, {decisions_made} decisions"
- codex-review: `status`, `gate`, `findings`, `findings_fixed`
→ Findings: "{findings} findings, {findings_fixed}/{findings} fixed"
All fields needed for the Findings column are now present in the JSONL entries.
For the review you just completed, you may use richer details from your own Completion
Summary. For prior reviews, use the JSONL fields directly — they contain all required data.
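As a sketch of this parsing (illustrative only; the log path is an assumption, so use the review log output you already have):
```bash
# Format the Findings column for codex-review entries.
# ~/.gstack/analytics/review-log.jsonl is a hypothetical path.
jq -r 'select(.skill == "codex-review")
  | "\(.findings) findings, \(.findings_fixed)/\(.findings) fixed"' \
  ~/.gstack/analytics/review-log.jsonl
```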
Produce this markdown table:
```markdown
## GSTACK REVIEW REPORT
| Review | Trigger | Why | Runs | Status | Findings |
|---|---|---|---|---|---|
| CEO Review | `/plan-ceo-review` | Scope & strategy | {runs} | {status} | {findings} |
| Codex Review | `/codex review` | Independent 2nd opinion | {runs} | {status} | {findings} |
| Eng Review | `/plan-eng-review` | Architecture & tests (required) | {runs} | {status} | {findings} |
| Design Review | `/plan-design-review` | UI/UX gaps | {runs} | {status} | {findings} |
```
Below the table, add these lines (omit any that are empty/not applicable):
- CODEX: (only if codex-review ran) — one-line summary of codex fixes
- CROSS-MODEL: (only if both Claude and Codex reviews exist) — overlap analysis
- UNRESOLVED: total unresolved decisions across all reviews
- VERDICT: list reviews that are CLEAR (e.g., "CEO + ENG CLEARED — ready to implement").
If Eng Review is not CLEAR and not skipped globally, append "eng review required".
### Write to the plan file
PLAN MODE EXCEPTION — ALWAYS RUN: This writes to the plan file, which is the one
file you are allowed to edit in plan mode. The plan file review report is part of the
plan's living status.
- Search the plan file for a `## GSTACK REVIEW REPORT` section anywhere in the file
(not just at the end — content may have been added after it).
- If found, replace it entirely using the Edit tool. Match from `## GSTACK REVIEW REPORT`
through either the next `## ` heading or end of file, whichever comes first. This ensures
content added after the report section is preserved, not eaten. If the Edit fails
(e.g., concurrent edit changed the content), re-read the plan file and retry once.
- If no such section exists, append it to the end of the plan file.
- Always place it as the very last section in the plan file. If it was found mid-file,
move it: delete the old location and append at the end.
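To make the boundary rule concrete, a hedged awk sketch that prints an existing report section's line range (`$PLAN_FILE` is a placeholder; the replacement itself still happens through the Edit tool):
```bash
awk '
  /^## GSTACK REVIEW REPORT/ { start = NR; next }
  start && /^## /            { print start "-" (NR - 1); found = 1; exit }
  END { if (start && !found) print start "-" NR }
' "$PLAN_FILE"
```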
## Step 2B: Challenge (Adversarial) Mode
Codex tries to break your code — finding edge cases, race conditions, security holes,
and failure modes that a normal review would miss.
- Construct the adversarial prompt. If the user provided a focus area (e.g., `/codex challenge security`), include it.
Default prompt (no focus):
"Review the changes on this branch against the base branch. Run `git diff <base>` to see the diff. Your job is to find ways this code will fail in production. Think like an attacker and a chaos engineer. Find edge cases, race conditions, security holes, resource leaks, failure modes, and silent data corruption paths. Be adversarial. Be thorough. No compliments — just the problems."
With focus (e.g., "security"):
"Review the changes on this branch against the base branch. Run `git diff <base>` to see the diff. Focus specifically on SECURITY. Your job is to find every way an attacker could exploit this code. Think about injection vectors, auth bypasses, privilege escalation, data exposure, and timing attacks. Be adversarial."
- Run codex exec with JSONL output to capture reasoning traces and tool calls (5-minute timeout):
```bash
codex exec "<prompt>" -s read-only -c 'model_reasoning_effort="xhigh"' --enable web_search_cached --json 2>/dev/null | python3 -c "
import sys, json
for line in sys.stdin:
line = line.strip()
if not line: continue
try:
obj = json.loads(line)
t = obj.get('type','')
if t == 'item.completed' and 'item' in obj:
item = obj['item']
itype = item.get('type','')
text = item.get('text','')
if itype == 'reasoning' and text:
print(f'[codex thinking] {text}')
print()
elif itype == 'agent_message' and text:
print(text)
elif itype == 'command_execution':
cmd = item.get('command','')
if cmd: print(f'[codex ran] {cmd}')
elif t == 'turn.completed':
usage = obj.get('usage',{})
tokens = usage.get('input_tokens',0) + usage.get('output_tokens',0)
if tokens: print(f'\ntokens used: {tokens}')
except: pass
"
This parses codex's JSONL events to extract reasoning traces, tool calls, and the final response. The `[codex thinking]` lines show what codex reasoned through before its answer.
- Present the full streamed output:
CODEX SAYS (adversarial challenge):
════════════════════════════════════════════════════════════
<full output from above, verbatim>
════════════════════════════════════════════════════════════
Tokens: N | Est. cost: ~$X.XX
## Step 2C: Consult Mode
Ask Codex anything about the codebase. Supports session continuity for follow-ups.
- Check for existing session:
```bash
cat .context/codex-session-id 2>/dev/null || echo "NO_SESSION"
```
If a session file exists (output is not `NO_SESSION`), use AskUserQuestion:
You have an active Codex conversation from earlier. Continue it or start fresh?
A) Continue the conversation (Codex remembers the prior context)
B) Start a new conversation
- Create temp files:
```bash
TMPRESP=$(mktemp /tmp/codex-resp-XXXXXX.txt)
TMPERR=$(mktemp /tmp/codex-err-XXXXXX.txt)
```
- Plan review auto-detection: if the user's prompt is about reviewing a plan, or if plan files exist and the user ran `/codex` with no arguments:
```bash
ls -t ~/.claude/plans/*.md 2>/dev/null | xargs grep -l "$(basename $(pwd))" 2>/dev/null | head -1
```
If no project-scoped match, fall back to `ls -t ~/.claude/plans/*.md 2>/dev/null | head -1` but warn: "Note: this plan may be from a different project — verify before sending to Codex."
Read the plan file and prepend the persona to the user's prompt:
"You are a brutally honest technical reviewer. Review this plan for: logical gaps and
unstated assumptions, missing error handling or edge cases, overcomplexity (is there a
simpler approach?), feasibility risks (what could go wrong?), and missing dependencies
or sequencing issues. Be direct. Be terse. No compliments. Just the problems.
THE PLAN:
<plan content>"
- Run codex exec with JSONL output to capture reasoning traces (5-minute timeout):
For a new session:
```bash
codex exec "<prompt>" -s read-only -c 'model_reasoning_effort="xhigh"' --enable web_search_cached --json 2>"$TMPERR" | python3 -c "
import sys, json
for line in sys.stdin:
line = line.strip()
if not line: continue
try:
obj = json.loads(line)
t = obj.get('type','')
if t == 'thread.started':
tid = obj.get('thread_id','')
if tid: print(f'SESSION_ID:{tid}')
elif t == 'item.completed' and 'item' in obj:
item = obj['item']
itype = item.get('type','')
text = item.get('text','')
if itype == 'reasoning' and text:
print(f'[codex thinking] {text}')
print()
elif itype == 'agent_message' and text:
print(text)
elif itype == 'command_execution':
cmd = item.get('command','')
if cmd: print(f'[codex ran] {cmd}')
elif t == 'turn.completed':
usage = obj.get('usage',{})
tokens = usage.get('input_tokens',0) + usage.get('output_tokens',0)
if tokens: print(f'\ntokens used: {tokens}')
except: pass
"
For a resumed session (user chose "Continue"):
```bash
codex exec resume <session-id> "<prompt>" -s read-only -c 'model_reasoning_effort="xhigh"' --enable web_search_cached --json 2>"$TMPERR" | python3 -c "
<same python streaming parser as above>
"
```
- Capture the session ID from the streamed output. The parser prints `SESSION_ID:<id>` from the `thread.started` event. Save the value after `SESSION_ID:` to `.context/codex-session-id` for follow-ups.
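A minimal sketch of that save step, assuming the parser's stdout was also captured to `$TMPRESP` (e.g., via `tee "$TMPRESP"`):
```bash
SID=$(grep -m1 '^SESSION_ID:' "$TMPRESP" | cut -d: -f2-)
if [ -n "$SID" ]; then
  mkdir -p .context
  printf '%s\n' "$SID" > .context/codex-session-id
fi
```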
- Present the full streamed output:
CODEX SAYS (consult):
════════════════════════════════════════════════════════════
<full output, verbatim — includes [codex thinking] traces>
════════════════════════════════════════════════════════════
Tokens: N | Est. cost: ~$X.XX
Session saved — run /codex again to continue this conversation.
- After presenting, note any points where Codex's analysis differs from your own
understanding. If there is a disagreement, flag it:
"Note: Claude Code disagrees on X because Y."
## Model & Reasoning
Model: no model is hardcoded — codex uses whatever its current default is (the frontier agentic coding model). This means that as OpenAI ships newer models, /codex automatically uses them. If the user wants a specific model, pass the `-m` flag through to codex.
Reasoning effort: all modes use `model_reasoning_effort="xhigh"` — maximum reasoning power. When reviewing code, breaking code, or consulting on architecture, you want the model thinking as hard as possible.
Web search: all codex commands use `--enable web_search_cached` so Codex can look up docs and APIs during review. This is OpenAI's cached index — fast, no extra cost.
If the user specifies a model (e.g., `/codex review -m gpt-5.1-codex-max` or `/codex challenge -m gpt-5.2`), pass the `-m` flag through to codex.
## Cost Estimation
Parse the token count from stderr; Codex prints a `tokens used: N` line there. If the token count is not available, display `Tokens: unknown`.
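A hedged sketch of the parse and estimate (the blended per-token rate is an assumption, chosen to roughly match the ~$0.12 / 14,331-token example above):
```bash
TOKENS=$(grep 'tokens used' "$TMPERR" | head -1 | tr -dc '0-9')
if [ -n "$TOKENS" ]; then
  # Assumed blended rate of ~$8 per 1M tokens; adjust to current pricing
  COST=$(awk -v t="$TOKENS" 'BEGIN { printf "~$%.2f", t * 8 / 1000000 }')
  echo "Tokens: $TOKENS | Est. cost: $COST"
else
  echo "Tokens: unknown"
fi
```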
## Error Handling
- Binary not found: Detected in Step 0. Stop with install instructions.
- Auth error: Codex prints an auth error to stderr. Surface it: "Codex authentication failed. Run `codex login` in your terminal to authenticate via ChatGPT."
- Timeout: If the Bash call times out (5 min), tell the user:
"Codex timed out after 5 minutes. The diff may be too large or the API may be slow. Try again or use a smaller scope."
- Empty response: if the captured output is empty or the temp file doesn't exist, tell the user: "Codex returned no response. Check stderr for errors."
- Session resume failure: If resume fails, delete the session file and start fresh.
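For that reset, the session file path from Step 2C applies:
```bash
rm -f .context/codex-session-id
```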
## Important Rules
- Never modify files. This skill is read-only. Codex runs in read-only sandbox mode.
- Present output verbatim. Do not truncate, summarize, or editorialize Codex's output
before showing it. Show it in full inside the CODEX SAYS block.
- Add synthesis after, not instead of. Any Claude commentary comes after the full output.
- 5-minute timeout on all Bash calls to codex (300 seconds).
- No double-reviewing. If the user already ran `/review`, Codex provides a second independent opinion. Do not re-run Claude Code's own review.