consult

Cross-tool AI consultation: query another AI CLI tool and return the response.

When to Use

Invoke this skill when:
  • User wants a second opinion from a different AI tool
  • User asks to consult, ask, or cross-check with gemini/codex/claude/opencode/copilot
  • User needs to compare responses across AI tools
  • User wants to validate a decision with an external AI

Arguments

Parse from `$ARGUMENTS`:

| Flag | Values | Default | Description |
|------|--------|---------|-------------|
| `--tool` | gemini, codex, claude, opencode, copilot | (picker) | Target tool |
| `--effort` | low, medium, high, max | medium | Thinking effort level |
| `--model` | any model name | (from effort) | Override model selection |
| `--context` | diff, file=PATH, none | none | Auto-include context |
| `--continue` | (flag) or SESSION_ID | false | Resume previous session |

Question text is everything in `$ARGUMENTS` except the flags above.
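The parsing rule above can be sketched in Python. This is a minimal illustration, not the platform's actual parser: `parse_args` and its return shape are assumptions, and it uses `shlex` so quoted question text survives splitting.

```python
import shlex

VALID_TOOLS = {"gemini", "codex", "claude", "opencode", "copilot"}
VALID_EFFORTS = {"low", "medium", "high", "max"}

def parse_args(arguments: str) -> dict:
    """Split $ARGUMENTS into the flags above plus free-form question text."""
    parsed = {"tool": None, "effort": "medium", "model": None,
              "context": "none", "continue": False}
    question = []
    for token in shlex.split(arguments):
        if token.startswith("--"):
            flag, _, value = token[2:].partition("=")
            if flag == "continue":
                parsed["continue"] = value or True  # bare flag resumes the last session
            elif flag in ("tool", "effort", "model", "context"):
                parsed[flag] = value
            else:
                question.append(token)  # unknown flags stay in the question text
        else:
            question.append(token)
    if parsed["tool"] is not None and parsed["tool"] not in VALID_TOOLS:
        raise ValueError(f"invalid --tool: {parsed['tool']}")
    if parsed["effort"] not in VALID_EFFORTS:
        raise ValueError(f"invalid --effort: {parsed['effort']}")
    parsed["question"] = " ".join(question)
    return parsed
```

Keeping unknown flags inside the question text (rather than erroring) matches the rule that question text is everything except the flags listed above.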

Provider Configurations

Claude

Command: `claude -p "QUESTION" --output-format json --model "MODEL" --max-turns TURNS --allowedTools "Read,Glob,Grep"`
Session resume: `--resume "SESSION_ID"`
Models: haiku, sonnet, opus

| Effort | Model | Max Turns |
|--------|-------|-----------|
| low | haiku | 1 |
| medium | sonnet | 3 |
| high | opus | 5 |
| max | opus | 10 |

Parse output: `JSON.parse(stdout).result`
Session ID: `JSON.parse(stdout).session_id`
Continuable: Yes

Gemini

Command: `gemini -p "QUESTION" --output-format json -m "MODEL"`
Session resume: `--resume "SESSION_ID"`
Models: gemini-2.5-flash, gemini-2.5-pro, gemini-3-flash, gemini-3-pro

| Effort | Model |
|--------|-------|
| low | gemini-2.5-flash |
| medium | gemini-3-flash |
| high | gemini-3-pro |
| max | gemini-3-pro |

Parse output: `JSON.parse(stdout).response`
Session ID: `JSON.parse(stdout).session_id`
Continuable: Yes (via `--resume`)

Codex

Command: `codex exec "QUESTION" --json -m "MODEL" -c model_reasoning_effort="LEVEL"`
Session resume: `codex exec resume SESSION_ID "QUESTION" --json`
Session resume (latest): `codex exec resume --last "QUESTION" --json`
Note: `codex exec` is the non-interactive/headless mode. There is no `-q` flag. The TUI mode is `codex` (no subcommand).
Models: gpt-5.3-codex-spark, gpt-5-codex, gpt-5.1-codex, gpt-5.2-codex, gpt-5.3-codex, gpt-5.1-codex-max

| Effort | Model | Reasoning |
|--------|-------|-----------|
| low | gpt-5.3-codex-spark | low |
| medium | gpt-5.2-codex | medium |
| high | gpt-5.3-codex | high |
| max | gpt-5.3-codex | xhigh |

Parse output: `JSON.parse(stdout).message` or raw text
Session ID: Codex prints a resume hint at session end (e.g., `codex resume SESSION_ID`). Extract the session ID from stdout or from `JSON.parse(stdout).session_id` if available.
Continuable: Yes. Sessions are stored as JSONL rollout files at `~/.codex/sessions/`. Non-interactive resume uses `codex exec resume SESSION_ID "follow-up prompt" --json`. Use `--last` instead of a session ID to resume the most recent session.

OpenCode

Command: `opencode run "QUESTION" --format json --model "MODEL" --variant "VARIANT"`
Session resume: `opencode run "QUESTION" --format json --model "MODEL" --variant "VARIANT" --continue` (most recent) or `--session "SESSION_ID"`
With thinking: add the `--thinking` flag
Models: 75+ via providers (format: provider/model). Top picks: claude-sonnet-4-5, claude-opus-4-5, gpt-5.2, gpt-5.1-codex, gemini-3-pro, minimax-m2.1

| Effort | Model | Variant |
|--------|-------|---------|
| low | (user-selected or default) | low |
| medium | (user-selected or default) | medium |
| high | (user-selected or default) | high |
| max | (user-selected or default) | high + `--thinking` |

Parse output: Parse JSON events from stdout and extract the final text response.
Session ID: Extract from JSON output if available, or use `--continue` to auto-resume the most recent session.
Continuable: Yes (via `--continue` or `--session`). Sessions are stored in a SQLite database in the OpenCode data directory. Use `--session SESSION_ID` for a specific session, or `--continue` for the most recent.

Copilot

Command: `copilot -p "QUESTION"`
Models: claude-sonnet-4-5 (default), claude-opus-4-6, claude-haiku-4-5, claude-sonnet-4, gpt-5

| Effort | Notes |
|--------|-------|
| all | No effort control available. Model selectable via the `--model` flag. |

Parse output: Raw text from stdout
Continuable: No

Input Validation

Before building commands, validate all user-provided arguments:
  • `--tool`: MUST be one of: gemini, codex, claude, opencode, copilot. Reject all other values.
  • `--effort`: MUST be one of: low, medium, high, max. Default to medium.
  • `--model`: Allow any string, but quote it in the command.
  • `--context=file=PATH`: MUST resolve within the project directory. Reject absolute paths outside cwd. Additional checks:
    1. Block UNC paths (Windows): reject paths starting with `\\` or `//` (network shares).
    2. Resolve the canonical path: use the Read tool to read the file (do NOT use shell commands). Before reading, resolve the path: join `cwd + PATH`, then normalize (collapse `.` and `..`, resolve symlinks).
    3. Verify containment: the resolved canonical path MUST start with the current working directory. If it escapes (via `..`, symlinks, or junction points), reject with: `[ERROR] Path escapes project directory: {PATH}`
    4. No shell access: read file content using the Read tool only. Never pass user-provided paths to shell commands (prevents injection via path values).
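Checks 1 through 3 can be sketched in Python (check 4, reading via the Read tool, is platform-side). `validate_context_path` is an illustrative helper, and the UNC rejection message is an assumption since the spec only defines the escape error:

```python
import os

def validate_context_path(cwd: str, user_path: str) -> str:
    """Resolve a --context=file=PATH value and verify it stays inside cwd."""
    # Check 1: block UNC/network-share style paths up front.
    if user_path.startswith(("\\\\", "//")):
        raise ValueError(f"[ERROR] UNC path rejected: {user_path}")
    # Check 2: join cwd + PATH, then canonicalize (collapses . and .., resolves symlinks).
    resolved = os.path.realpath(os.path.join(cwd, user_path))
    root = os.path.realpath(cwd)
    # Check 3: the canonical path must remain under the project root.
    if not (resolved == root or resolved.startswith(root + os.sep)):
        raise ValueError(f"[ERROR] Path escapes project directory: {user_path}")
    return resolved
```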

Command Building

Given the parsed arguments, build the complete CLI command. All user-provided values MUST be quoted in the shell command to prevent injection.

Step 1: Resolve Model

If `--model` is specified, use it directly. Otherwise, use the effort-based model from the provider table above.

Step 2: Build Command String

Use the command template from the provider's configuration section. Substitute QUESTION, MODEL, TURNS, LEVEL, and VARIANT with resolved values.
If continuing a session:
  • Claude or Gemini: append `--resume SESSION_ID` to the command.
  • Codex: use `codex exec resume SESSION_ID "QUESTION" --json` instead of the standard command. Use `--last` instead of a session ID for the most recent session.
  • OpenCode: append `--session SESSION_ID` to the command. If no session_id is saved, use `--continue` instead (resumes the most recent session). If OpenCode is at max effort, append `--thinking`.

Step 3: Context Packaging

If `--context=diff`: run `git diff 2>/dev/null` and prepend the output to the question.
If `--context=file=PATH`: read the file using the Read tool and prepend its content to the question.
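The diff branch can be sketched as below. `package_context` and the "Context (git diff):" framing are illustrative assumptions; the spec only requires that the diff be prepended, and the exception handling mirrors the `2>/dev/null` suppression when git is absent or the directory is not a repository:

```python
import subprocess

def package_context(question: str, context: str) -> str:
    """Prepend requested context (git diff output) to the question text."""
    if context == "diff":
        try:
            diff = subprocess.run(["git", "diff"], capture_output=True,
                                  text=True, timeout=30).stdout
        except (OSError, subprocess.TimeoutExpired):
            diff = ""  # mirror 2>/dev/null: no git, no context
        if diff:
            question = f"Context (git diff):\n{diff}\n\nQuestion: {question}"
    return question
```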

Step 4: Safe Question Passing

User-provided question text MUST NOT be interpolated into shell command strings. Shell escaping is insufficient -- `$()`, backticks, and other expansion sequences can execute arbitrary commands even inside double quotes.
Required approach -- pass the question via stdin or a temp file:
  1. Write the question to a temporary file using the Write tool (e.g., `{AI_STATE_DIR}/consult/question.tmp`). Platform state directory:
    • Claude Code: `.claude/`
    • OpenCode: `.opencode/`
    • Codex CLI: `.codex/`
  2. Build the command using the temp file as input instead of inline text:

| Provider | Safe command pattern |
|----------|----------------------|
| Claude | `claude -p - --output-format json --model "MODEL" --max-turns TURNS --allowedTools "Read,Glob,Grep" < "{AI_STATE_DIR}/consult/question.tmp"` |
| Claude (resume) | `claude -p - --output-format json --model "MODEL" --max-turns TURNS --allowedTools "Read,Glob,Grep" --resume "SESSION_ID" < "{AI_STATE_DIR}/consult/question.tmp"` |
| Gemini | `gemini -p - --output-format json -m "MODEL" < "{AI_STATE_DIR}/consult/question.tmp"` |
| Gemini (resume) | `gemini -p - --output-format json -m "MODEL" --resume "SESSION_ID" < "{AI_STATE_DIR}/consult/question.tmp"` |
| Codex | `codex exec "$(cat "{AI_STATE_DIR}/consult/question.tmp")" --json -m "MODEL" -c model_reasoning_effort="LEVEL"` (Codex exec lacks stdin mode -- cat reads from a platform-controlled path, not user input) |
| Codex (resume) | `codex exec resume SESSION_ID "$(cat "{AI_STATE_DIR}/consult/question.tmp")" --json -m "MODEL"` |
| Codex (resume latest) | `codex exec resume --last "$(cat "{AI_STATE_DIR}/consult/question.tmp")" --json -m "MODEL"` |
| OpenCode | `opencode run - --format json --model "MODEL" --variant "VARIANT" < "{AI_STATE_DIR}/consult/question.tmp"` |
| OpenCode (resume by ID) | `opencode run - --format json --model "MODEL" --variant "VARIANT" --session "SESSION_ID" < "{AI_STATE_DIR}/consult/question.tmp"` |
| OpenCode (resume latest) | `opencode run - --format json --model "MODEL" --variant "VARIANT" --continue < "{AI_STATE_DIR}/consult/question.tmp"` |
| Copilot | `copilot -p - < "{AI_STATE_DIR}/consult/question.tmp"` |

  3. Delete the temp file after the command completes (success or failure). Always clean up to prevent accumulation.

Model and session ID values are controlled strings (from pickers or saved state) and safe to quote directly in the command. Only the question contains arbitrary user text and requires the temp-file approach. The temp file path (`{AI_STATE_DIR}/consult/question.tmp`) uses a platform-controlled directory and fixed filename -- no user input in the path.
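The write/run/cleanup cycle can be sketched in Python. `consult_via_stdin` is a hypothetical helper and `state_dir` is parameterized purely for illustration; the point is that the question reaches the provider through a file descriptor, never through shell interpolation, and the `finally` block enforces step 3:

```python
import os
import subprocess

def consult_via_stdin(command: list[str], question: str,
                      state_dir: str = ".claude/consult", timeout: int = 120) -> str:
    """Write the question to a temp file, feed it on stdin, and always clean up."""
    os.makedirs(state_dir, exist_ok=True)
    tmp_path = os.path.join(state_dir, "question.tmp")
    with open(tmp_path, "w", encoding="utf-8") as f:
        f.write(question)  # step 1: the question never touches a shell string
    try:
        with open(tmp_path, encoding="utf-8") as f:
            # step 2: the provider reads the question from stdin
            proc = subprocess.run(command, stdin=f, capture_output=True,
                                  text=True, timeout=timeout)
        return proc.stdout
    finally:
        os.remove(tmp_path)  # step 3: delete on success or failure
```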

Provider Detection

Cross-platform tool detection:
  • Windows: `where.exe TOOL 2>nul` -- returns 0 if found
  • Unix: `which TOOL 2>/dev/null` -- returns 0 if found
Check each tool (claude, gemini, codex, opencode, copilot) and return only the available ones.
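If detection runs from Python rather than the shell, `shutil.which` covers both the `where.exe` and `which` cases with one call; `detect_tools` is an illustrative helper:

```python
import shutil

TOOLS = ["claude", "gemini", "codex", "opencode", "copilot"]

def detect_tools(candidates: list[str] = TOOLS) -> list[str]:
    """Return only the tools present on PATH (cross-platform lookup)."""
    return [t for t in candidates if shutil.which(t) is not None]
```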

Session Management

Save Session

After successful consultation, save to `{AI_STATE_DIR}/consult/last-session.json`:

```json
{
  "tool": "claude",
  "model": "opus",
  "effort": "high",
  "session_id": "abc-123-def-456",
  "timestamp": "2026-02-10T12:00:00Z",
  "question": "original question text",
  "continuable": true
}
```

`AI_STATE_DIR` uses the platform state directory:
  • Claude Code: `.claude/`
  • OpenCode: `.opencode/`
  • Codex CLI: `.codex/`

Load Session

For `--continue`, read the session file and restore:
  • tool (from saved state)
  • session_id (for the `--resume` flag)
  • model (reuse the same model)
If the session file is not found, warn and proceed as a fresh consultation.

Output Sanitization

Before including any consulted tool's response in the output, scan the response text and redact matches for these patterns:

| Pattern | Description | Replacement |
|---------|-------------|-------------|
| `sk-[a-zA-Z0-9_-]{20,}` | OpenAI API keys | `[REDACTED_API_KEY]` |
| `sk-proj-[a-zA-Z0-9_-]{20,}` | OpenAI project keys | `[REDACTED_API_KEY]` |
| `sk-ant-[a-zA-Z0-9_-]{20,}` | Anthropic API keys (ant prefix) | `[REDACTED_API_KEY]` |
| `AIza[a-zA-Z0-9_-]{30,}` | Google API keys | `[REDACTED_API_KEY]` |
| `ghp_[a-zA-Z0-9]{36,}` | GitHub personal access tokens | `[REDACTED_TOKEN]` |
| `gho_[a-zA-Z0-9]{36,}` | GitHub OAuth tokens | `[REDACTED_TOKEN]` |
| `github_pat_[a-zA-Z0-9_]{20,}` | GitHub fine-grained PATs | `[REDACTED_TOKEN]` |
| `ANTHROPIC_API_KEY=[^\s]+` | Key assignment in env output | `ANTHROPIC_API_KEY=[REDACTED]` |
| `OPENAI_API_KEY=[^\s]+` | Key assignment in env output | `OPENAI_API_KEY=[REDACTED]` |
| `GOOGLE_API_KEY=[^\s]+` | Key assignment in env output | `GOOGLE_API_KEY=[REDACTED]` |
| `GEMINI_API_KEY=[^\s]+` | Key assignment in env output | `GEMINI_API_KEY=[REDACTED]` |
| `AKIA[A-Z0-9]{16}` | AWS access keys | `[REDACTED_AWS_KEY]` |
| `ASIA[A-Z0-9]{16}` | AWS temporary (session) access keys | `[REDACTED_AWS_KEY]` |
| `Bearer [a-zA-Z0-9_-]{20,}` | Authorization headers | `Bearer [REDACTED]` |

Apply redaction to the full response text before inserting it into the result JSON. If any redaction occurs, append a note:
`[WARN] Sensitive tokens were redacted from the response.`
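The scan can be sketched directly from the table; `sanitize` is an illustrative helper. Note the deliberate ordering: the more specific `sk-ant-`/`sk-proj-` patterns run before the generic `sk-` pattern so each key type gets its intended label, though any order still redacts the secret:

```python
import re

# (pattern, replacement) pairs from the table above
REDACTIONS = [
    (r"sk-ant-[a-zA-Z0-9_-]{20,}", "[REDACTED_API_KEY]"),
    (r"sk-proj-[a-zA-Z0-9_-]{20,}", "[REDACTED_API_KEY]"),
    (r"sk-[a-zA-Z0-9_-]{20,}", "[REDACTED_API_KEY]"),
    (r"AIza[a-zA-Z0-9_-]{30,}", "[REDACTED_API_KEY]"),
    (r"ghp_[a-zA-Z0-9]{36,}", "[REDACTED_TOKEN]"),
    (r"gho_[a-zA-Z0-9]{36,}", "[REDACTED_TOKEN]"),
    (r"github_pat_[a-zA-Z0-9_]{20,}", "[REDACTED_TOKEN]"),
    (r"ANTHROPIC_API_KEY=[^\s]+", "ANTHROPIC_API_KEY=[REDACTED]"),
    (r"OPENAI_API_KEY=[^\s]+", "OPENAI_API_KEY=[REDACTED]"),
    (r"GOOGLE_API_KEY=[^\s]+", "GOOGLE_API_KEY=[REDACTED]"),
    (r"GEMINI_API_KEY=[^\s]+", "GEMINI_API_KEY=[REDACTED]"),
    (r"AKIA[A-Z0-9]{16}", "[REDACTED_AWS_KEY]"),
    (r"ASIA[A-Z0-9]{16}", "[REDACTED_AWS_KEY]"),
    (r"Bearer [a-zA-Z0-9_-]{20,}", "Bearer [REDACTED]"),
]

def sanitize(text: str) -> tuple[str, bool]:
    """Apply every redaction pattern; report whether anything matched."""
    redacted = False
    for pattern, repl in REDACTIONS:
        text, n = re.subn(pattern, repl, text)
        redacted = redacted or n > 0
    return text, redacted
```

If the returned flag is true, append the `[WARN]` note above to the final output.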

Output Format

Return a plain JSON object to stdout (no markers or wrappers):

```json
{
  "tool": "gemini",
  "model": "gemini-3-pro",
  "effort": "high",
  "duration_ms": 12300,
  "response": "The AI's response text here...",
  "session_id": "abc-123",
  "continuable": true
}
```

Install Instructions

When a tool is not found, return these install commands:

| Tool | Install |
|------|---------|
| Claude | `npm install -g @anthropic-ai/claude-code` |
| Gemini | See https://gemini.google.com/cli for install instructions |
| Codex | `npm install -g @openai/codex` |
| OpenCode | `npm install -g opencode-ai` or `brew install anomalyco/tap/opencode` |
| Copilot | `gh extension install github/copilot-cli` |

Error Handling

| Error | Response |
|-------|----------|
| Tool not installed | Return install instructions from the table above |
| Tool execution timeout | Return `"response": "Timeout after 120s"` |
| JSON parse error | Return raw text as the response |
| Empty output | Return `"response": "No output received"` |
| Session file missing | Proceed without session resume |
| API key missing | Return tool-specific env var instructions |

Integration

This skill is invoked by:
  • `consult-agent` for the `/consult` command
  • Direct invocation: `Skill('consult', '"question" --tool=gemini --effort=high')`
Example:
`Skill('consult', '"Is this approach correct?" --tool=gemini --effort=high --model=gemini-3-pro')`