consult

Cross-tool AI consultation. Use when user asks to 'consult gemini', 'ask codex', 'get second opinion', 'cross-check with claude', 'consult another AI', 'ask opencode', 'copilot opinion', or wants a second opinion from a different AI tool.

Source: avifenesh/agentsys
Install: `npx skill4agent add avifenesh/agentsys consult`
Cross-tool AI consultation: query another AI CLI tool and return the response.
When to Use
Invoke this skill when:
- User wants a second opinion from a different AI tool
- User asks to consult, ask, or cross-check with gemini/codex/claude/opencode/copilot
- User needs to compare responses across AI tools
- User wants to validate a decision with an external AI
Arguments
Parse from `$ARGUMENTS`:

| Flag | Values | Default | Description |
|---|---|---|---|
| `--tool` | gemini, codex, claude, opencode, copilot | (picker) | Target tool |
| `--effort` | low, medium, high, max | medium | Thinking effort level |
| `--model` | any model name | (from effort) | Override model selection |
| `--context` | diff, file=PATH, none | none | Auto-include context |
| `--continue` | (flag) or SESSION_ID | false | Resume previous session |
Question text is everything in `$ARGUMENTS` except the flags above.

Provider Configurations
Claude
Command: `claude -p "QUESTION" --output-format json --model "MODEL" --max-turns TURNS --allowedTools "Read,Glob,Grep"`
Session resume: append `--resume "SESSION_ID"`
Models: haiku, sonnet, opus
| Effort | Model | Max Turns |
|---|---|---|
| low | haiku | 1 |
| medium | sonnet | 3 |
| high | opus | 5 |
| max | opus | 10 |
Parse output: `JSON.parse(stdout).result`
Session ID: `JSON.parse(stdout).session_id`
Continuable: Yes

Gemini
Command: `gemini -p "QUESTION" --output-format json -m "MODEL"`
Session resume: append `--resume "SESSION_ID"`
Models: gemini-2.5-flash, gemini-2.5-pro, gemini-3-flash, gemini-3-pro
| Effort | Model |
|---|---|
| low | gemini-2.5-flash |
| medium | gemini-3-flash |
| high | gemini-3-pro |
| max | gemini-3-pro |
Parse output: `JSON.parse(stdout).response`
Session ID: `JSON.parse(stdout).session_id`
Continuable: Yes (via `--resume`)

Codex
Command: `codex exec "QUESTION" --json -m "MODEL" -c model_reasoning_effort="LEVEL"`
Session resume: `codex exec resume SESSION_ID "QUESTION" --json`
Session resume (latest): `codex exec resume --last "QUESTION" --json`
Note: `codex exec` is the non-interactive/headless mode. There is no `-q` flag. The TUI mode is `codex` (no subcommand).
Models: gpt-5.3-codex-spark, gpt-5-codex, gpt-5.1-codex, gpt-5.2-codex, gpt-5.3-codex, gpt-5.1-codex-max
| Effort | Model | Reasoning |
|---|---|---|
| low | gpt-5.3-codex-spark | low |
| medium | gpt-5.2-codex | medium |
| high | gpt-5.3-codex | high |
| max | gpt-5.3-codex | xhigh |
Parse output: `JSON.parse(stdout).message` or raw text
Session ID: Codex prints a resume hint at session end (e.g., `codex resume SESSION_ID`). Extract the session ID from stdout or from `JSON.parse(stdout).session_id` if available.
Continuable: Yes. Sessions are stored as JSONL rollout files at `~/.codex/sessions/`. Non-interactive resume uses `codex exec resume SESSION_ID "follow-up prompt" --json`. Use `--last` instead of a session ID to resume the most recent session.

OpenCode
Command: `opencode run "QUESTION" --format json --model "MODEL" --variant "VARIANT"`
Session resume: `opencode run "QUESTION" --format json --model "MODEL" --variant "VARIANT" --continue` (most recent) or `--session "SESSION_ID"`
With thinking: add `--thinking` flag
Models: 75+ via providers (format: provider/model). Top picks: claude-sonnet-4-5, claude-opus-4-5, gpt-5.2, gpt-5.1-codex, gemini-3-pro, minimax-m2.1
| Effort | Model | Variant |
|---|---|---|
| low | (user-selected or default) | low |
| medium | (user-selected or default) | medium |
| high | (user-selected or default) | high |
| max | (user-selected or default) | high + --thinking |
Parse output: Parse JSON events from stdout, extract final text response
Session ID: Extract from JSON output if available, or use `--continue` to auto-resume the most recent session.
Continuable: Yes (via `--continue` or `--session`). Sessions are stored in a SQLite database in the OpenCode data directory. Use `--session SESSION_ID` for a specific session, or `--continue` for the most recent.

Copilot
Command: `copilot -p "QUESTION"`
Models: claude-sonnet-4-5 (default), claude-opus-4-6, claude-haiku-4-5, claude-sonnet-4, gpt-5
| Effort | Notes |
|---|---|
| all | No effort control available. Model selectable via --model flag. |
Parse output: Raw text from stdout
Continuable: No
Input Validation
Before building commands, validate all user-provided arguments:
- --tool: MUST be one of: gemini, codex, claude, opencode, copilot. Reject all other values.
- --effort: MUST be one of: low, medium, high, max. Default to medium.
- --model: Allow any string, but quote it in the command.
- --context=file=PATH: MUST resolve within the project directory. Reject absolute paths outside cwd. Additional checks:
  - Block UNC paths (Windows): Reject paths starting with `\\` (network shares) or `//`
  - Resolve canonical path: Use the Read tool to read the file (do NOT use shell commands). Before reading, resolve the path: join `cwd + PATH`, then normalize (collapse `.`, `..`, resolve symlinks)
  - Verify containment: The resolved canonical path MUST start with the current working directory. If it escapes (via `..`, symlinks, or junction points), reject with: `[ERROR] Path escapes project directory: {PATH}`
  - No shell access: Read file content using the Read tool only. Never pass user-provided paths to shell commands (prevents injection via path values)
Command Building
Given the parsed arguments, build the complete CLI command. All user-provided values MUST be quoted in the shell command to prevent injection.
Step 1: Resolve Model
If `--model` is specified, use it directly. Otherwise, use the effort-based model from the provider table above.

Step 2: Build Command String
Use the command template from the provider's configuration section. Substitute QUESTION, MODEL, TURNS, LEVEL, and VARIANT with resolved values.
If continuing a session:
- Claude or Gemini: append `--resume SESSION_ID` to the command.
- Codex: use `codex exec resume SESSION_ID "QUESTION" --json` instead of the standard command. Use `--last` instead of a session ID for the most recent session.
- OpenCode: append `--session SESSION_ID` to the command. If no session_id is saved, use `--continue` instead (resumes most recent session). If OpenCode at max effort: append `--thinking`.
Step 3: Context Packaging
If `--context=diff`: Run `git diff 2>/dev/null` and prepend output to the question.
If `--context=file=PATH`: Read the file using the Read tool and prepend its content to the question.

Step 4: Safe Question Passing
User-provided question text MUST NOT be interpolated into shell command strings. Shell escaping is insufficient -- `$()`, backticks, and other expansion sequences can execute arbitrary commands even inside double quotes.

Required approach -- pass question via stdin or temp file:
- Write the question to a temporary file using the Write tool (e.g., `{AI_STATE_DIR}/consult/question.tmp`)
  Platform state directory:
  - Claude Code: `.claude/`
  - OpenCode: `.opencode/`
  - Codex CLI: `.codex/`
- Build the command using the temp file as input instead of inline text:
| Provider | Safe command pattern |
|---|---|
| Claude | |
| Claude (resume) | |
| Gemini | |
| Gemini (resume) | |
| Codex | |
| Codex (resume) | |
| Codex (resume latest) | |
| OpenCode | |
| OpenCode (resume by ID) | |
| OpenCode (resume latest) | |
| Copilot | |
- Delete the temp file after the command completes (success or failure). Always clean up to prevent accumulation.
Model and session ID values are controlled strings (from pickers or saved state) and safe to quote directly in the command. Only the question contains arbitrary user text and requires the temp file approach. The temp file path (`{AI_STATE_DIR}/consult/question.tmp`) uses a platform-controlled directory and fixed filename -- no user input in the path.

Provider Detection
Cross-platform tool detection:
- Windows: `where.exe TOOL 2>nul` -- returns 0 if found
- Unix: `which TOOL 2>/dev/null` -- returns 0 if found
Check each tool (claude, gemini, codex, opencode, copilot) and return only the available ones.
Session Management
Save Session
After successful consultation, save to `{AI_STATE_DIR}/consult/last-session.json`:

```json
{
  "tool": "claude",
  "model": "opus",
  "effort": "high",
  "session_id": "abc-123-def-456",
  "timestamp": "2026-02-10T12:00:00Z",
  "question": "original question text",
  "continuable": true
}
```

AI_STATE_DIR:
- Claude Code: `.claude/`
- OpenCode: `.opencode/`
- Codex CLI: `.codex/`
Load Session
For `--continue`, read the session file and restore:
- tool (from saved state)
- session_id (for --resume flag)
- model (reuse same model)
If session file not found, warn and proceed as fresh consultation.
Output Sanitization
Before including any consulted tool's response in the output, scan the response text and redact matches for these patterns:
| Pattern | Description | Replacement |
|---|---|---|
| Anthropic API keys | |
| OpenAI project keys | |
| Anthropic API keys (ant prefix) | |
| Google API keys | |
| GitHub personal access tokens | |
| GitHub OAuth tokens | |
| GitHub fine-grained PATs | |
| Key assignment in env output | |
| Key assignment in env output | |
| Key assignment in env output | |
| Key assignment in env output | |
| AWS access keys | |
| AWS session tokens | |
| Authorization headers | |
Apply redaction to the full response text before inserting into the result JSON. If any redaction occurs, append a note:
`[WARN] Sensitive tokens were redacted from the response.`

Output Format
Return a plain JSON object to stdout (no markers or wrappers):
```json
{
  "tool": "gemini",
  "model": "gemini-3-pro",
  "effort": "high",
  "duration_ms": 12300,
  "response": "The AI's response text here...",
  "session_id": "abc-123",
  "continuable": true
}
```

Install Instructions
When a tool is not found, return these install commands:
| Tool | Install |
|---|---|
| Claude | |
| Gemini | See https://gemini.google.com/cli for install instructions |
| Codex | |
| OpenCode | |
| Copilot | |
Error Handling
| Error | Response |
|---|---|
| Tool not installed | Return install instructions from table above |
| Tool execution timeout | Return |
| JSON parse error | Return raw text as response |
| Empty output | Return |
| Session file missing | Proceed without session resume |
| API key missing | Return tool-specific env var instructions |
Integration
This skill is invoked by:
- `consult-agent` for the `/consult` command
- Direct invocation:
Skill('consult', '"question" --tool=gemini --effort=high')
Example:
Skill('consult', '"Is this approach correct?" --tool=gemini --effort=high --model=gemini-3-pro')