# c-review — C/C++ Security Review

Runs in the main conversation (invoke via `/c-review:c-review`). The orchestrator owns the task ledger as bookkeeping for retries; workers and judges have no `Task*` tools. Workers and judges are named plugin subagents (`c-review:c-review-worker`, `c-review:c-review-dedup-judge`, `c-review:c-review-fp-judge`); tool sets are declared in `plugins/c-review/agents/*.md`. Findings are exchanged via markdown-with-YAML files in a shared output directory.

## When to Use
Native C/C++ application security review: memory safety, integer overflow, races, type confusion, Linux/macOS daemons, Windows userspace services.
## When NOT to Use
- Kernel drivers/modules (Linux, Windows, macOS).
- Managed languages (Java, C#, Python, Go, Rust).
- Embedded/bare-metal code without libc.
## Subagents
| Subagent type | Purpose | Tool set |
|---|---|---|
| `c-review:c-review-worker` | Run assigned cluster, write findings | Read, Write, Edit, Grep, Glob, Bash |
| `c-review:c-review-dedup-judge` | Merge duplicates (runs first) | Read, Write, Edit, Glob |
| `c-review:c-review-fp-judge` | FP + severity + final reports (runs second) | Read, Write, Edit, Grep, Glob, Bash |

Tools come from each agent's frontmatter at spawn time. The orchestrator's `Task*`/`Agent`/`Bash`/etc. come from this skill's `allowed-tools`.

## Architecture
```
coordinator: write context.md → build_run_plan.py → TaskCreate × M
  → spawn primer (foreground) → spawn M workers (parallel)
  → classify Phase-7 outcomes + write findings-index.txt
  → dedup-judge → fp-judge → SARIF safety net → return REPORT.md
```

Output directory contains: `context.md`, `plan.json`, `worker-prompts/`, `findings/`, `findings-index.d/` (per-worker shards), `findings-index.txt`, `run-summary.md`, `dedup-summary.md`, `fp-summary.md`, `REPORT.md`, `REPORT.sarif`.

Path convention: set `${C_REVIEW_PLUGIN_ROOT}=${CLAUDE_PLUGIN_ROOT}` if that resolves (`Bash: ls "${CLAUDE_PLUGIN_ROOT}/prompts/clusters/buffer-write-sinks.md"`), otherwise locate it via `Bash: find ~/.claude -path '*/plugins/c-review/prompts/clusters/buffer-write-sinks.md' -print -quit`.

Scope convention: keep two scopes separate throughout the run:

- `finding_scope_root` — the user-requested audit subtree. Workers may only file findings whose vulnerable location is inside this subtree.
- `context_roots` — read-only repo roots/files workers and judges may inspect to verify reachability, callers, wrappers, build flags, mitigations, and threat-model details. Default to `.` unless the user explicitly forbids broader context. Reading context outside `finding_scope_root` is allowed; filing findings there is not.
## Rationalizations to Reject
- "Background spawns parallelize the workers." They do not — `Agent` calls in a single assistant message already run concurrently. `run_in_background=true` defeats the Phase 6a primer cache, so every worker pays full cache-creation and the ~15 K-token primer is wasted M times. This is the single most common defect — multiple recent runs spawned 7-of-8 (or all) workers with `cache_read_input_tokens=0`. Default: omit `run_in_background` from worker spawns.
- "I'll re-derive the cluster list / paths / pass prefixes inline instead of running `build_run_plan.py`." The script is the only authority for selection and rendering. Paraphrasing it drops fields that the worker self-check requires, producing `worker-N abort: spawn prompt malformed`. Always run the script and `Read plan.json`.
- "The run partially succeeded — I'll just write `REPORT.md` from what completed." Hiding partial runs behind a successful report is a correctness bug. If any Phase-5 cluster task is not `completed`, surface it prominently in `run-summary.md` and the final response.
- "Zero findings — skip Phase 8." Always run both judges and Phase 8b: dedup-judge writes a minimal no-op `dedup-summary.md` on an empty index, fp-judge writes an empty `REPORT.md`, and Phase 8b's SARIF generator emits `REPORT.sarif` with `results: []` for the empty case. SARIF consumers depend on a stable artifact set.
- "`Bash: ls README*` is fine for the preflight." Under zsh, an unmatched glob aborts the whole compound command before `2>/dev/null` runs. Use `Glob` (preferred) or `find` (never fails on no-match).
## Orchestration Workflow

Run these phases in the main conversation.

### Phase 0: Parameter Collection
Entry: skill invoked. Exit: `threat_model`, `worker_model`, `severity_filter` resolved; `scope_subpath` resolved or set to `"."`; `finding_scope_root=scope_subpath`; `context_roots` resolved.

The skill is invoked directly (no command wrapper). Parse any free-text arguments the user passed on the `/c-review:c-review` line (e.g. `flamenco only`, `high severity only`, `use haiku`) and pre-fill the answers they imply — then ask for any missing required parameters with one `AskUserQuestion` call. Never silently default the required parameters.

Required parameters:

| Parameter | Values | How to infer from args |
|---|---|---|
| `threat_model` | | Words like "remote", "network", "attacker" → |
| `worker_model` | | Explicit model name in args. Otherwise ask (no silent default). |
| `severity_filter` | | "all", "every", "noisy" → |
| `scope_subpath` | repo-relative directory (optional) | Phrases like "X only", "just audit X/", "review subdirectory X" → |

Call `AskUserQuestion` exactly once with only the unresolved required parameters (`threat_model`, `worker_model`, `severity_filter`), plus `scope_subpath` only when the user explicitly requested a narrowed scope but it is ambiguous. If the required parameters were all pre-filled and scope is absent or resolved, skip the question.

After resolving `scope_subpath`, set `finding_scope_root="${scope_subpath:-.}"`. Set `context_roots="."` by default so workers can verify callers/build settings outside a narrowed subtree without filing out-of-scope findings. If the user explicitly asks to forbid broader context, set `context_roots="${finding_scope_root}"` and note that reachability confidence may be lower.
### Phase 1: Prerequisites
Entry: Phase 0 complete. Exit: `is_cpp`, `is_posix`, `is_windows` flags determined.

Probe within `${finding_scope_root:-.}`. Prefer `Glob`/`Grep` when available in the orchestrator's tool set; some sessions only expose `Bash`, so fall back to the `bash` equivalents below — both forms produce identical signals (non-empty output ⇒ flag true):

`is_cpp`:

```bash
find "${finding_scope_root:-.}" -type f \( -name '*.cpp' -o -name '*.cxx' -o -name '*.cc' -o -name '*.hpp' -o -name '*.hh' \) -print -quit
```
`is_posix`:

```bash
grep -rlE '#include[[:space:]]*<(pthread|signal|sys/(socket|stat|types|wait)|unistd|errno)\.h>' \
  --include='*.c' --include='*.h' \
  --include='*.cpp' --include='*.cxx' --include='*.cc' --include='*.hpp' --include='*.hh' \
  "${finding_scope_root:-.}" | head -1
```
`is_windows`:

```bash
grep -rlE '#include[[:space:]]*<(windows|winbase|winnt|winuser|winsock|ntdef|ntstatus)\.h>' \
  --include='*.c' --include='*.h' \
  --include='*.cpp' --include='*.cxx' --include='*.cc' --include='*.hpp' --include='*.hh' \
  "${finding_scope_root:-.}" | head -1
```
`compile_commands.json` is informational (no agent currently uses LSP), but the probe is mandatory so the run summary records whether richer local tooling is available. Probe via `Glob: **/compile_commands.json` under `${context_roots}`. If `Glob` is unavailable, use:

```bash
printf '%s\n' "${context_roots:-.}" | tr ',' '\n' | while IFS= read -r root; do
  [ -n "$root" ] && find "$root" -name compile_commands.json -print -quit
done | head -1
```

`find "$root"` is quoted intentionally so a context root containing spaces (e.g. "/Users/me/My Repo") survives word-splitting. Do not unquote it.

If absent, suggest CMake `-DCMAKE_EXPORT_COMPILE_COMMANDS=ON`/Bear/compiledb to the user, but continue.

### Phase 2: Output Directory
Entry: Phase 1 flags set. Exit: absolute `output_dir` resolved; `${output_dir}/findings/` exists.

Resolve an absolute path for `output_dir` (default: `$(pwd)/.c-review-results/$(date -u +%Y%m%dT%H%M%SZ)/`):

```bash
mkdir -p "${output_dir}/findings"
```

### Phase 3: Codebase Context
Entry: `${output_dir}` exists. Exit: `${output_dir}/context.md` written.

Skim `README.{md,rst,txt}` and any build file (`Makefile`, `CMakeLists.txt`, `meson.build`, `configure.ac`) — preflight with the `Glob` tool before any `Read` (a `Read` on a missing file aborts the turn). Do not use `Bash: ls README*` for the preflight: under zsh, an unmatched glob aborts the whole compound command before `2>/dev/null` runs (observed: a Phase-3 `ls src/X/README*` call failed with `no matches found` and dropped the entire preflight). If you must use `Bash`, use `find . -maxdepth 2 -name 'README*' -o -name 'Makefile' -o -name 'CMakeLists.txt' -o -name 'meson.build'`, which never fails on no-match.

Write `${output_dir}/context.md` with: YAML frontmatter (`threat_model`, `severity_filter`, `scope_subpath`, `finding_scope_root`, `context_roots`, `is_cpp`, `is_posix`, `is_windows`, `output_dir`, plus `compile_commands` as `present`/`absent` plus the path when present), then a short markdown body with five sections — Purpose (1-3 sentences), Scope (what's in `finding_scope_root`, and that findings outside it are out of scope), Entry points (where untrusted data enters: network, files, CLI, IPC), Trust boundaries (sandboxed vs trusted peers vs arbitrary remote), Existing hardening (fuzzing corpora, sanitizers, privilege separation).
### Phase 4: Build Run Plan (deterministic)
Entry: language flags + `threat_model` known; `${output_dir}/findings/` exists. Exit: `${output_dir}/plan.json` and `${output_dir}/worker-prompts/*.txt` written; `M = worker_count` known.

Selection, filtering, path resolution, and spawn-prompt rendering are delegated to the script to prevent the "orchestrator paraphrases the spawn template and drops fields" failure mode:

```bash
python3 "${C_REVIEW_PLUGIN_ROOT}/scripts/build_run_plan.py" \
  --plugin-root "${C_REVIEW_PLUGIN_ROOT}" --output-dir "${output_dir}" \
  --threat-model "${threat_model}" --severity-filter "${severity_filter}" \
  --scope-subpath "${finding_scope_root:-.}" --context-roots "${context_roots:-.}" \
  --is-cpp "${is_cpp}" --is-posix "${is_posix}" --is-windows "${is_windows}"
```

The script writes `plan.json` + `worker-prompts/worker-N.txt` + (if `--cache-primer=true`, the default) `worker-prompts/cache-primer.txt`, and prints a JSON summary on stdout. It exits non-zero on any missing prompt — surface the message and stop. Typical M: 7 (C POSIX), 8 (C++ POSIX), 10 (C POSIX + Windows), 11 (C++ POSIX + Windows). After it returns, `Read plan.json` for the structured selection — never re-derive filtering or paths.
### Phase 5: Create Bookkeeping Tasks (orchestrator-internal)
Entry: `${output_dir}/plan.json` exists; `M = plan.workers.length`. Exit: `cluster_task_ids[]` created (1:1 with `plan.workers`), all `pending`.

The task ledger is orchestrator bookkeeping only (TUI visibility + Phase-7 retry tracking) — workers never read or write it. One `TaskCreate` per worker, populating `metadata` with `kind="cluster"`, `worker_n`, `cluster_id`, `spawn_prompt_path`, `pass_prefixes`, `attempt=1` — all values copied verbatim from `plan.workers[i]`. Track `cluster_task_ids[]` in `plan.workers` order.
### Phase 6: Spawn workers (optional cache-primer first, then M in parallel)
Entry: `cluster_task_ids[]` populated; per-worker spawn prompt files exist at `${output_dir}/worker-prompts/worker-N.txt`. Exit: all M `Agent` calls have returned (the parallel spawn block completed).

#### Phase 6a: Cache primer (gated on `plan.run.cache_primer`)
A parallel batch from cold start cannot share cache (all M requests dispatch simultaneously; none has finished writing). To warm the prefix, spawn a tiny primer first, in the foreground (background spawns don't share cache with subsequent foreground spawns).

If `plan.run.cache_primer == true`, `build_run_plan.py` has written `${output_dir}/worker-prompts/cache-primer.txt`. Spawn it in its own assistant message: `Read` the file, pass it verbatim as `prompt` with `subagent_type=c-review:c-review-worker`, `model=${worker_model}`, `description="C review cache primer"`, no `run_in_background`. The script wrote the prefix byte-identical to `worker-1.txt` through the `<context>` block — that byte-identity is what gives the parallel workers their cache hit. The primer trailer contains `Cache primer: true`, which the worker system prompt treats as a first-class mode: the primer returns exactly `worker-PRIMER abort: cache primer (no analysis performed)` in one text response with zero tool calls. Discard the abort line — Phase 7 ignores it (no `worker-N` id).

Foreground spawning already serializes — no `sleep` needed before Phase 6b. Skip Phase 6a entirely if `plan.run.cache_primer == false`.
#### Phase 6b: Spawn M real workers in ONE message
**STOP — read this before composing the spawn message.** Workers MUST be spawned foreground (no `run_in_background` field, or `run_in_background=false`). "Parallel" here means one assistant message containing M `Agent` calls — that already runs them concurrently. Background spawns are NOT how you parallelize this skill. Background spawns defeat Phase 6a's primer cache: every worker pays full cache-creation on its first turn (`first_cr=0`), and the primer's ~15 K tokens are wasted M times over. Two real runs (audit logs available) had exactly this symptom — every worker started with `cache_read_input_tokens=0`. Before sending the spawn message, audit your draft: every `Agent` call must have no `run_in_background` key. If you wrote `run_in_background=true`, delete it.

Required spawn shape: emit a single assistant message containing M `Agent` tool invocations. Sequential spawning serializes the review and is also wrong, but that failure is loud (timing); the background-spawn failure is silent (cost).

For each worker `N ∈ [1..M]`:

- `Read: ${output_dir}/worker-prompts/worker-N.txt`
- Pass the file contents verbatim as the `Agent` tool's `prompt` argument:

| Parameter | Value |
|---|---|
| `subagent_type` | `c-review:c-review-worker` |
| `model` | `${worker_model}` |
| `description` | |
| `prompt` | the full text of `worker-N.txt` |
| `run_in_background` | field MUST be omitted, OR set to `false` |

The spawn prompt is the single authority. Pass it verbatim — every field is required by the worker's self-check; any deviation triggers `worker-N abort: spawn prompt malformed`.

Anti-patterns to reject:

- Passing `run_in_background=true` (the dominant historical defect — see the warning above).
- Hand-typing the spawn prompt instead of reading `worker-N.txt`.
- Inserting Task-related instructions ("first call TaskList", "Assigned task id: <N>"). Workers have no Task tools.
- Editing the rendered prompt before passing it (trimming "redundant" fields, collapsing pass lists).
### Phase 7: Wait for Workers and Classify Outcomes
Entry: all M Phase-6 `Agent` calls have returned. Exit: every cluster has either succeeded or been retried up to the cap; `${output_dir}/findings-index.txt` written.

The Phase-6 `Agent` invocations block until each worker returns. Inspect each worker's return text and apply this classifier in order — first match wins:

| # | Match (in return text) | Outcome | Action |
|---|---|---|---|
| 1 | `worker-N complete:` | success | |
| 2 | `worker-N abort:` | non-retryable orchestrator bug | Stop the run, surface the abort + spawn-prompt path. Re-running the same prompt repeats the failure — pre-work-budget exhaustion always means the worker couldn't pass its self-check, which a retry won't fix. |
| 3 | other | retryable | Mark `pending` |
| 4 | | retryable | Same as #3 (transient worker crash). |

If any outcome is non-retryable, stop. Otherwise re-spawn each retryable with `attempt < 2` in one parallel block (cap = 2 attempts per cluster). Replacement workers can safely overwrite partial files — finding IDs are deterministic per prefix.
#### Sanity-check + write index
For every cluster marked `complete:`, list `${output_dir}/findings/${prefix}-*.md` for each `pass_prefix` (from `plan.json`). A worker that says "wrote N finding files" with N>0 but zero files on disk is suspicious — treat it as retryable (classifier row #4). Zero claimed + zero on disk is fine.

Then build the index — workers wrote per-worker shards under `${output_dir}/findings-index.d/`; prefer those:

```bash
if [ -d "${output_dir}/findings-index.d" ]; then
  find "${output_dir}/findings-index.d" -maxdepth 1 -type f -name 'worker-*.txt' -exec awk 1 {} + 2>/dev/null \
    | sort -u > "${output_dir}/findings-index.txt"
else
  find "${output_dir}/findings" -maxdepth 1 -type f -name '*.md' 2>/dev/null | sort > "${output_dir}/findings-index.txt"
fi
```

Use `find` rather than a `worker-*.txt` glob: zsh aborts the compound command on no-match even with `2>/dev/null`, so an empty `findings-index.d` would otherwise drop the index file. `awk 1` (vs `cat`) normalizes a missing trailing newline on any shard, so a future worker that writes shards via Write/printf instead of `ls -1 | sort` can't silently glue the last path of one shard onto the first of the next when `sort -u` dedupes.

`sort -u` collapses duplicates from Phase-7 retries. An empty file is the unambiguous "zero findings" signal. Cross-check the line count against the sum of `wrote N` worker claims; log mismatches but don't abort.

After task updates and index creation, run `TaskList` and write `${output_dir}/run-summary.md` with:

- resolved parameters (`threat_model`, `severity_filter`, `finding_scope_root`, `context_roots`, language/platform flags, compile-commands status)
- worker outcome table (`worker_n`, `cluster_id`, claimed finding count, shard line count, task status, retry/abort state)
- `findings-index.txt` line count and any mismatch against worker claims
- judge status once Phase 8 finishes, or the reason a judge was skipped/failed

If any Phase-5 cluster task is not `completed`, include it prominently in `run-summary.md` and the final response. Do not hide a partial run behind a successful report.

**Always run Phase 8 even on zero findings** — both judges short-circuit on an empty index: dedup-judge writes a minimal no-op `dedup-summary.md`, and fp-judge writes empty `REPORT.md`/`REPORT.sarif` so SARIF consumers get a stable artifact set.
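The `awk 1` normalization is easy to demonstrate in isolation. A minimal sketch with a throwaway directory and hypothetical shard contents; worker-1's shard deliberately lacks the trailing newline:

```shell
output_dir="$(mktemp -d)"
mkdir -p "${output_dir}/findings-index.d"
printf 'findings/bws-001.md' > "${output_dir}/findings-index.d/worker-1.txt"    # no trailing newline
printf 'findings/rc-001.md\n' > "${output_dir}/findings-index.d/worker-2.txt"   # well-formed

# Same pipeline as the real index build: awk 1 re-emits every record with a newline.
find "${output_dir}/findings-index.d" -maxdepth 1 -type f -name 'worker-*.txt' -exec awk 1 {} + 2>/dev/null \
  | sort -u > "${output_dir}/findings-index.txt"

wc -l < "${output_dir}/findings-index.txt"
```

With `cat` in place of `awk 1`, the two shards could concatenate into a single glued line like `findings/bws-001.mdfindings/rc-001.md`; with `awk 1` the index holds two distinct paths.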
### Phase 8: Judge Pipeline (sequential, dedup → fp+severity)
Entry: `findings-index.txt` exists. Exit: dedup-judge and fp-judge have returned; `dedup-summary.md`, `fp-summary.md`, `REPORT.md`, and ideally `REPORT.sarif` are written.

Each judge's full protocol is its system prompt (`agents/c-review-{dedup,fp}-judge.md`); spawn prompts pass only per-run variables. Do not reference `prompts/internal/judges/` — those files don't exist.

Spawn sequentially (dedup first, so fp-judge sees only merged primaries):

- `Agent(subagent_type="c-review:c-review-dedup-judge", description="Dedup judge", prompt=f"output_dir: {output_dir}")`
- `Agent(subagent_type="c-review:c-review-fp-judge", description="FP + severity judge", prompt=f"output_dir: {output_dir}\nsarif_generator_path: {sarif_generator_path}")` — resolve `sarif_generator_path` to `${C_REVIEW_PLUGIN_ROOT}/scripts/generate_sarif.py`.

Judge failure handling — same shape as Phase 7's classifier, applied to judge return text:

- `… complete:` → success.
- `… abort:` → non-retryable. Surface the abort line plus `ls -l ${output_dir}/findings-index.txt`; stop.
- No `complete:` (help message / error / question) → retryable once. Use `SendMessage(to=<agentId>, …)` rather than a fresh spawn (the agent already paid the protocol-parse cost). Include the explicit finding paths from `findings-index.txt`. If the second try still fails, surface the transcript and continue to Phase 8b.
### Phase 8b: SARIF safety net
Entry: fp-judge returned, or the run aborted early. Exit: `${output_dir}/REPORT.sarif` exists.

```bash
test -d "${output_dir}/findings" && python3 "${C_REVIEW_PLUGIN_ROOT}/scripts/generate_sarif.py" "${output_dir}"
```

Run unconditionally whenever `${output_dir}/findings/` exists — the generator is idempotent (full overwrite), emits `results: []` for zero-survivor runs, and handles partial runs (findings without `fp_verdict` are emitted as `LIKELY_TP` rather than being silently dropped). Always overwriting protects against the case where fp-judge crashed mid-write and left a corrupt `REPORT.sarif` on disk. Skip only if `${output_dir}/findings/` doesn't exist (Phase 2 failed). After this phase, update `${output_dir}/run-summary.md` with judge/SARIF status.
### Phase 9: Return Report
Entry: Phase 8b complete. Exit: every item in Success Criteria verified true; `REPORT.md` returned to the caller.

Before composing the response, walk the Success Criteria checklist below and confirm each bullet against on-disk artifacts (`TaskList` for cluster tasks, `ls`/`Read` for the files). If any criterion fails, surface the failure prominently in the response — do not hide a partial run behind a successful report.

Then `Read ${output_dir}/REPORT.md` and return its content to the caller. Append an Artifacts list pointing at `findings/`, `findings-index.txt`, `run-summary.md`, `dedup-summary.md`, `fp-summary.md`, `REPORT.md`, `REPORT.sarif`.
## Finding file frontmatter — three stages
Authoritative schema: `agents/c-review-worker.md` ("Finding File Format"). Three-stage write:

- Worker — base fields (`id`, `bug_class`, `title`, `location`, `function`, `confidence`, `worker`) + seven body sections.
- Dedup-judge — adds `merged_into` on duplicates, or `also_known_as` + `locations` on primaries that absorbed.
- FP+Severity judge — adds `fp_verdict` + `fp_rationale` on every primary; on survivors (`LIKELY_TP`/`TRUE_POSITIVE`) also adds `severity`, `attack_vector`, `exploitability`, `severity_rationale`.
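A sketch of a surviving primary's frontmatter after all three stages. Field names come from the list above; every value is hypothetical, and `agents/c-review-worker.md` remains the authoritative schema:

```yaml
---
# stage 1: worker
id: bws-001
bug_class: buffer-write-sinks
title: Unbounded copy into fixed-size stack buffer
location: src/net/parse.c:142
function: parse_header
confidence: high
worker: 3
# stage 2: dedup-judge (primary that absorbed a duplicate)
also_known_as: [rc-007]
locations: [src/net/parse.c:142, src/net/parse.c:198]
# stage 3: fp+severity judge (survivor)
fp_verdict: LIKELY_TP
fp_rationale: attacker-controlled length reaches the copy unchecked
severity: high
attack_vector: remote
exploitability: probable
severity_rationale: pre-auth remote write past a fixed buffer
---
```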
## Bug classes / clusters
Authoritative: `prompts/clusters/manifest.json`. 47 always-on bug classes, up to 64 with all conditional clusters enabled. `buffer-write-sinks` is fully consolidated (its sub-prompts are not re-read at runtime).
## Success Criteria
The phase exits already cover most of this; the orchestrator-visible end-state is:

- Every Phase-5 cluster task is `completed` (verify via `TaskList`).
- `${output_dir}/run-summary.md` exists and records resolved scope/context, the compile-commands probe result, worker claims vs index count, task status, and judge/SARIF status.
- Every primary finding (no `merged_into`) has `fp_verdict` + `fp_rationale`; every survivor (`LIKELY_TP`/`TRUE_POSITIVE`) also has `severity`, `attack_vector`, `exploitability`, `severity_rationale`.
- `REPORT.md` exists, severity-filtered per `severity_filter`.
- `REPORT.sarif` exists (the Phase 8b safety net guarantees this).