repo-value-analysis

Repo Competitive Analysis Pipeline

Operator Context

This skill operates as an operator for systematic repo value analysis of external repositories against our toolkit. It implements a 6-phase Pipeline Architecture — clone, parallel deep-read, self-inventory, synthesis, targeted audit, reality-grounded report — with parallel subagents dispatched via the Agent tool.

Hardcoded Behaviors (Always Apply)

  • Full File Reading: Agents MUST read every file in their assigned zone, not sample or skim
  • Artifacts at Every Phase: Save findings to files; context is ephemeral
  • Reality-Grounding: Every recommendation MUST be audited against our actual codebase before inclusion in the final report
  • Read-Only on External Repo: Never modify the cloned repository
  • Comparison Focus: All analysis is relative — "what do they have that we lack?" not "what do they have?"
  • Structured Output: Final report follows the prescribed table format

Default Behaviors (ON unless disabled)

  • Parallel Deep-Read: Dispatch 1 agent per analysis zone (up to 8 zones)
  • Self-Inventory: 1 agent catalogs our own system in parallel with deep-read
  • Zone Capping: Cap each analysis zone at ~100 files; split larger zones
  • Draft-Then-Final: Phase 4 saves a draft; Phase 6 overwrites with the audited final report
  • ADR Suggestion: If HIGH-value items found, suggest creating an adoption ADR

Optional Behaviors (OFF unless enabled)

  • Skip Clone: Use `--local [path]` if the repo is already cloned or is a local directory
  • Focus Zone: Use `--zone [name]` to analyze only a specific zone (e.g., skills, hooks)
  • Quick Mode: Use `--quick` to skip the Phase 5 audit (produces unverified recommendations)

What This Skill CAN Do

  • Clone and systematically analyze an external repository using parallel subagents
  • Read every file across categorized analysis zones
  • Inventory our own toolkit for accurate comparison
  • Produce a reality-grounded comparison report with effort estimates
  • Identify genuine gaps (things they have, we lack) vs superficial differences
  • Suggest ADR creation for high-value adoption candidates

What This Skill CANNOT Do

  • Modify files in either repository (read-only analysis)
  • Implement recommended changes (use feature-implement or systematic-refactoring)
  • Analyze private repos without proper authentication configured
  • Replace domain-expert judgment on adoption decisions
  • Guarantee completeness for repos with 10,000+ files (zone capping applies)


Instructions

Input Parsing

Before starting Phase 1, parse the user's input:
  • GitHub URL: Extract repo name from URL (e.g., `https://github.com/org/repo` -> `repo`)
  • Local path: Validate the path exists and contains files
  • Bare repo name: Assume `https://github.com/{name}` if it looks like `org/repo`
Set `REPO_NAME` and `REPO_PATH` variables for use throughout the pipeline.
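The three parsing rules can be sketched as follows. This is a minimal sketch, not the skill's actual implementation; the helper name `parse_repo_input` and the `/tmp/<name>` destination convention (matching the Phase 1 clone target) are illustrative:

```python
import os
import re

def parse_repo_input(raw: str) -> tuple[str, str]:
    """Resolve user input into (REPO_NAME, REPO_PATH). Hypothetical helper."""
    raw = raw.strip()
    # GitHub URL: take the last path segment as the repo name
    if raw.startswith(("https://github.com/", "git@github.com:")):
        name = raw.rstrip("/").removesuffix(".git").split("/")[-1]
        return name, f"/tmp/{name}"            # cloned here in Phase 1
    # Local path (--local): use the directory as-is
    if os.path.isdir(raw):
        return os.path.basename(os.path.abspath(raw)), raw
    # Bare org/repo name
    if re.fullmatch(r"[\w.-]+/[\w.-]+", raw):
        name = raw.split("/")[1]
        return name, f"/tmp/{name}"
    raise ValueError(f"Unrecognized repo input: {raw!r}")
```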

Phase 1: CLONE

Goal: Obtain the repository and categorize its contents into analysis zones.

Step 1: Clone the repository

```bash
git clone --depth 1 <url> /tmp/<REPO_NAME>
```

If the `--local` flag was provided, skip cloning and use the provided path.

Step 2: Count and categorize files

Survey the repository structure:
  • Count total files (excluding `.git/`)
  • List top-level directories with file counts

Step 3: Define analysis zones

Categorize files into zones based on directory names and file patterns:

| Zone | Typical directories/patterns | Purpose |
|------|------------------------------|---------|
| skills | `skills/`, `commands/`, `prompts/`, `templates/` | Reusable skill/prompt definitions |
| agents | `agents/`, `personas/`, `roles/` | Agent configurations |
| hooks | `hooks/`, `middleware/`, `interceptors/` | Event-driven automation |
| docs | `docs/`, `*.md` (non-config), `adr/`, `guides/` | Documentation and decisions |
| tests | `tests/`, `*_test.*`, `*.spec.*`, `__tests__/` | Test suites |
| config | Config files, CI/CD, `*.yaml`, `*.toml`, `*.json` (root) | Configuration |
| code | `scripts/`, `src/`, `lib/`, `pkg/`, `*.py`, `*.go`, `*.ts` | Source code |
| other | Everything else | Uncategorized files |

Step 4: Cap zones

If any zone exceeds ~100 files:
  1. Split it into sub-zones by subdirectory
  2. Each sub-zone gets its own agent in Phase 2
  3. Log the split in the analysis notes

Gate: Repository cloned (or local path validated). All files categorized into zones. Zone file counts recorded. No zone exceeds ~100 files (split if needed). Proceed only when gate passes.
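The categorization and capping logic of Steps 3-4 can be sketched as below. The directory-to-zone mapping mirrors the Step 3 table (directory prefixes only; the glob patterns are omitted for brevity), and `categorize`/`cap_zones` are hypothetical helper names:

```python
from collections import defaultdict
from pathlib import PurePosixPath

# Assumed mapping of top-level directories to zones, mirroring the Step 3 table.
ZONE_DIRS = {
    "skills": {"skills", "commands", "prompts", "templates"},
    "agents": {"agents", "personas", "roles"},
    "hooks": {"hooks", "middleware", "interceptors"},
    "docs": {"docs", "adr", "guides"},
    "tests": {"tests", "__tests__"},
    "code": {"scripts", "src", "lib", "pkg"},
}

def categorize(files):
    """Assign each repo-relative path to a zone; unmatched files go to 'other'."""
    zones = defaultdict(list)
    for f in files:
        parts = PurePosixPath(f).parts
        top = parts[0] if len(parts) > 1 else ""
        zone = next((z for z, dirs in ZONE_DIRS.items() if top in dirs), "other")
        zones[zone].append(f)
    return dict(zones)

def cap_zones(zones, cap=100):
    """Split any zone over ~cap files into sub-zones by subdirectory."""
    capped = {}
    for zone, files in zones.items():
        if len(files) <= cap:
            capped[zone] = files
            continue
        by_subdir = defaultdict(list)
        for f in files:
            parts = PurePosixPath(f).parts
            by_subdir[parts[1] if len(parts) > 2 else "_root"].append(f)
        for sub, fs in by_subdir.items():
            capped[f"{zone}/{sub}"] = fs  # each sub-zone gets its own Phase 2 agent
    return capped
```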

Phase 2: DEEP-READ (Parallel)

Goal: Read every file in every zone of the external repository.

Dispatch 1 Agent per analysis zone (background). Each agent receives:
  • The zone name and file list
  • Instructions to read EVERY file (not sample, not skim)
  • A structured output template

Agent instructions template (replace ALL bracketed placeholders with actual values before dispatching):

You are analyzing the "[zone]" zone of repository [REPO_NAME].

Read EVERY file listed below. For each file, extract:
1. Purpose (1-2 sentences)
2. Key techniques or patterns used
3. Notable or unique approaches
4. Dependencies on other components

Files to read:
[file list]

After reading ALL files, produce a structured summary:

Zone: [zone]

Component Inventory

| File | Purpose | Key Pattern |
|------|---------|-------------|
| ... | ... | ... |

Key Techniques

  • [technique]: [which files use it, how]

Notable Patterns

  • [pattern]: [why it's notable]

Potential Gaps They Fill

  • [gap]: [what capability this provides that might be missing elsewhere]

Save your findings to /tmp/[REPO_NAME]-zone-[zone].md

Dispatch up to 8 agents in parallel. If more than 8 zones exist, batch them (first 8, wait, then remaining).
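The batching rule can be sketched as a simple chunking generator (`batch_zones` is a hypothetical helper; each yielded group is dispatched in parallel and awaited before the next group starts):

```python
def batch_zones(zones, size=8):
    """Yield groups of at most `size` zones for parallel dispatch."""
    for i in range(0, len(zones), size):
        yield zones[i:i + size]
```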

**Gate**: All zone agents have completed (or timed out after 5 minutes each). At least 75% of agents returned results. Zone finding files exist in `/tmp/`. Proceed only when gate passes.

Phase 3: INVENTORY (Parallel with Phase 2)

Goal: Catalog our own toolkit for accurate comparison.

Dispatch 1 Agent (in background, concurrent with Phase 2) to inventory our system:

You are cataloging the claude-code-toolkit repository for comparison purposes.

Inventory these component types:
1. Agents (agents/*.md) - count and list with brief descriptions
2. Skills (skills/*/SKILL.md) - count and list with brief descriptions
3. Hooks (hooks/*.py) - count and list with brief descriptions
4. Scripts (scripts/*.py) - count and list with brief descriptions

For each category, note:
- Total count
- Key capability areas covered
- Notable patterns in how components are structured

Save your inventory to /tmp/self-inventory.md

Gate: Self-inventory agent completed (or timed out after 5 minutes). `/tmp/self-inventory.md` exists and contains counts for all 4 component types. Proceed only when gate passes.

Phase 4: SYNTHESIZE

Goal: Merge findings from Phase 2 and Phase 3 into a comparison with candidate recommendations.

Step 1: Read all zone findings

Read every `/tmp/[REPO_NAME]-zone-*.md` file and `/tmp/self-inventory.md`.

Step 2: Build comparison table

For each capability area discovered in the external repo:

| Capability | Their Approach | Our Approach | Gap? |
|------------|----------------|--------------|------|
| ... | ... | ... | Yes/No/Partial |

Step 3: Identify candidate recommendations

For each genuine gap (not just a different approach to the same thing):
  • Describe what they have
  • Describe what we lack
  • Rate value: HIGH / MEDIUM / LOW
    • HIGH = addresses a real pain point or enables new capability
    • MEDIUM = nice to have, improves existing workflow
    • LOW = marginal improvement, different but not better

Step 4: Save draft report

Save to `research-[REPO_NAME]-comparison.md` with:
  • Executive summary
  • Comparison table
  • Candidate recommendations with ratings
  • Clear "DRAFT — pending audit" watermark

Gate: Draft report saved. At least 1 candidate recommendation identified (or explicit "no gaps found" conclusion). All recommendations have value ratings. Proceed only when gate passes.

Phase 5: AUDIT (Parallel)

Goal: Reality-check each HIGH and MEDIUM recommendation against our actual codebase.
For each HIGH or MEDIUM recommendation, dispatch 1 Agent (in background):
You are auditing whether recommendation "[recommendation]" is already
addressed in the claude-code-toolkit repository.

The recommendation suggests: [description]

Your task:
1. Search the repository for components that address this capability
2. Read the SPECIFIC files/subsystems that would be affected
3. Determine coverage level:
   - ALREADY EXISTS: We have this. Cite the exact files.
   - PARTIAL: We have something similar but incomplete. Cite files and gaps.
   - MISSING: We genuinely lack this. Confirm by searching for related patterns.
4. If PARTIAL or MISSING, identify the exact files that would need to change

Save findings to /tmp/audit-[recommendation-slug].md with:

Recommendation: [name]

Coverage: [ALREADY EXISTS | PARTIAL | MISSING]

Evidence

  • [file path]: [what it does / doesn't do]

Verdict

[1-2 sentence conclusion]

**Gate**: All audit agents completed (or timed out after 5 minutes). At least 75% returned results. Audit files exist in `/tmp/`. Proceed only when gate passes.

Phase 6: REPORT

Goal: Produce the final reality-grounded report.

Step 1: Read all audit findings

Read every `/tmp/audit-*.md` file.

Step 2: Adjust recommendations

For each recommendation:
  • If audit found ALREADY EXISTS: remove from recommendations, note in "already covered" section
  • If audit found PARTIAL: adjust description to focus on what's actually missing
  • If audit found MISSING: keep as-is, add the affected files from audit
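The three adjustment rules amount to a partition over audited coverage. A sketch, assuming each recommendation is a dict whose `coverage` field was filled in from its Phase 5 audit file (the field names and `adjust_recommendations` helper are illustrative):

```python
def adjust_recommendations(recs):
    """Partition audited recommendations into the final list and the
    'already covered' section, per the Step 2 rules."""
    final, covered = [], []
    for rec in recs:
        coverage = rec.get("coverage", "MISSING")
        if coverage == "ALREADY EXISTS":
            covered.append(rec)      # drop from recommendations
        elif coverage == "PARTIAL":
            # narrow the recommendation to what is actually missing
            final.append({**rec, "scope": "only the missing portion"})
        else:                        # MISSING: keep as-is
            final.append(rec)
    return final, covered
```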
Step 3: Build final report

Overwrite `research-[REPO_NAME]-comparison.md` with the final report:

Competitive Analysis: [REPO_NAME] vs claude-code-toolkit

Executive Summary

[2-3 sentences: what the repo is, whether it adds value, headline finding]

Repository Overview

  • URL: [url]
  • Total files analyzed: [count]
  • Analysis zones: [list with counts]
  • Analysis date: [date]

Comparison Table

| Capability | Their Approach | Our Approach | Status |
|------------|----------------|--------------|--------|
| ... | ... | ... | Equivalent / They lead / We lead / Unique to them |

Already Covered

[Capabilities we initially thought were gaps but audit confirmed we have]

| Capability | Our Implementation | Files |
|------------|--------------------|-------|
| ... | ... | ... |

Recommendations

| # | Recommendation | Value | What We Have | What's Missing | Effort | Affected Files |
|---|----------------|-------|--------------|----------------|--------|----------------|
| 1 | ... | HIGH | ... | ... | S/M/L | ... |

Verdict

[Final assessment: is this repo worth adopting ideas from? Which specific items?]

Next Steps

  • [Actionable items]
  • [If HIGH-value items: "Create ADR for adoption of [specific items]"]

**Step 4: Cleanup**

Remove temporary zone and audit files from `/tmp/` (keep the cloned repo for reference).

**Gate**: Final report saved to `research-[REPO_NAME]-comparison.md`. Report contains comparison table, adjusted recommendations, and verdict. No "DRAFT" watermark remains. All recommendations have been reality-checked against audit findings. Proceed only when gate passes.

---

Error Handling

Error: "Repository Clone Failed"

错误:"仓库克隆失败"

Cause: Invalid URL, private repo, network issue, or repo doesn't exist

Solution:
  1. Verify the URL is correct and the repo is public
  2. If private, check that git credentials are configured
  3. If network issue, retry once after 5 seconds
  4. If repo doesn't exist, report to user and abort pipeline
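The retry rule in step 3 can be sketched as below. The command runner is injected so the policy itself is testable; `clone_with_retry` is a hypothetical helper, not part of the skill:

```python
import subprocess
import time

def clone_with_retry(url, dest, runner=None, retries=1, delay=5.0):
    """Shallow-clone; on failure, retry once after `delay` seconds."""
    if runner is None:
        # Default: actually run git; returncode 0 means success.
        runner = lambda cmd: subprocess.run(cmd).returncode
    cmd = ["git", "clone", "--depth", "1", url, dest]
    for attempt in range(retries + 1):
        if runner(cmd) == 0:
            return True
        if attempt < retries:
            time.sleep(delay)
    return False
```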

Error: "Repository Too Large (10,000+ files)"

错误:"仓库过大(10000+文件)"

Cause: Monorepo or very large codebase

Solution:
  1. Tighten zone capping to split aggressively (sub-zones of ~50 files)
  2. Prioritize zones most relevant to our toolkit (skills, agents, hooks, docs)
  3. Deprioritize vendor, generated, and third-party code zones
  4. Note incomplete coverage in the final report

Error: "Agent Timed Out in Phase 2/5"

错误:"阶段2/5中Agent超时"

Cause: Zone too large, or agent stuck on binary/generated files

Solution:
  1. Proceed with results from completed agents (minimum 75% required)
  2. Note which zones/audits were incomplete in the report
  3. If below 75%, retry failed zones with smaller file batches

Error: "No Gaps Found"

错误:"未发现差距"

Cause: External repo covers the same ground as ours, or less

Solution:
  1. This is a valid outcome, not an error
  2. Report confirms our toolkit already covers or exceeds the external repo
  3. Note any interesting alternative approaches even if they are not gaps
  4. Skip Phase 5 (no recommendations to audit)

Error: "Self-Inventory Agent Failed"

错误:"自我盘点Agent失败"

Cause: Our own repo structure changed, or agent timed out

Solution:
  1. Fall back to reading `skills/INDEX.json` for skill counts
  2. Use `ls agents/ hooks/ scripts/` for basic counts
  3. Note that self-inventory is approximate in the report
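The fallback counts can be sketched with glob patterns matching the component layout listed in Phase 3 (`fallback_inventory` is a hypothetical helper):

```python
from pathlib import Path

def fallback_inventory(root="."):
    """Approximate component counts when the self-inventory agent fails."""
    r = Path(root)
    return {
        "agents": len(list(r.glob("agents/*.md"))),
        "skills": len(list(r.glob("skills/*/SKILL.md"))),
        "hooks": len(list(r.glob("hooks/*.py"))),
        "scripts": len(list(r.glob("scripts/*.py"))),
    }
```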


Anti-Patterns

Anti-Pattern 1: Shallow Reading (Skimming Instead of Reading Every File)

What it looks like: Agent reads 10 of 50 files in a zone, claims to understand the zone

Why wrong: Misses the components that distinguish the repo; surface-level analysis produces surface-level recommendations

Do instead: Each agent MUST read every file in its zone. The zone capping in Phase 1 ensures this is feasible.

Anti-Pattern 2: Recommending Things We Already Have

What it looks like: "They have a debugging skill; we should add one" (when we already have systematic-debugging) Why wrong: Wastes effort on false gaps; undermines report credibility Do instead: Phase 5 audit exists specifically to catch this. Never skip it. Every recommendation must survive audit.
表现:"他们有调试Skill;我们应该添加一个"(而我们已经有systematic-debugging) 危害:在虚假差距上浪费精力;降低报告可信度 正确做法:第5阶段审计正是为了避免这种情况。绝不要跳过审计。所有建议必须通过审计。

Anti-Pattern 3: Over-Counting Differences as Gaps

What it looks like: Listing every difference as a recommendation regardless of value

Why wrong: Different is not better. A different naming convention is not a gap worth addressing.

Do instead: Only flag genuine capability gaps — things they can do that we cannot. Rate honestly: most differences are LOW or not gaps at all.

Anti-Pattern 4: Skipping the Audit Phase

What it looks like: Producing the report directly from Phase 4 synthesis without verifying

Why wrong: Unverified recommendations erode trust. The whole point of this pipeline is reality-grounding.

Do instead: Always run Phase 5 unless `--quick` was explicitly requested. Audit is what separates this from a superficial comparison.

Anti-Pattern 5: Anchoring on Repository Size or Star Count

What it looks like: "This repo has 5,000 stars so it must have good ideas" Why wrong: Popularity does not equal relevance to our specific toolkit Do instead: Evaluate every component on its merits relative to our needs. A 10-star repo with one brilliant pattern is more valuable than a 10,000-star repo that duplicates what we have.
表现:"这个仓库有5000星,所以肯定有好的思路" 危害:流行度不等于与我们工具包的相关性 正确做法:根据我们的需求评估每个组件的实际价值。一个只有10星但包含一个出色模式的仓库,比一个有10000星但重复我们已有功能的仓库更有价值。

Anti-Pattern 6: Generating Adoption Recommendations Without Effort Estimates

What it looks like: "We should adopt X" without saying how much work it would take Why wrong: A HIGH-value recommendation that takes 3 weeks may be lower priority than a MEDIUM-value one that takes 30 minutes Do instead: Every recommendation in the final table MUST include an effort estimate (S/M/L).

表现:"我们应该采纳X"但未说明需要多少工作量 危害:一个高价值但需要3周工作量的建议,优先级可能低于一个中价值但仅需30分钟的建议 正确做法:最终表格中的所有建议必须包含工作量估算(小/中/大)。

References

This skill uses these shared patterns:
  • Anti-Rationalization - Prevents shortcut rationalizations
  • Verification Checklist - Pre-completion checks
  • Gate Enforcement - Phase transition rules
  • Pipeline Architecture - Pipeline design principles

Domain-Specific Anti-Rationalization

| Rationalization | Why It's Wrong | Required Action |
|-----------------|----------------|-----------------|
| "I read enough files to get the picture" | Sampling bias misses distinguishing components | Read every file in the zone |
| "Our system obviously has this" | Obvious to whom? Prove it with file paths. | Run audit agent, cite exact files |
| "This difference is clearly valuable" | Clearly to whom? Different is not better. | Rate honestly, audit against reality |
| "Audit would just confirm what I know" | Confidence is not correctness | Run audit; let evidence decide |
| "The repo is too big to read fully" | Zone capping exists for this reason | Split zones, read all files in each |
| "Quick comparison is good enough" | Quick comparisons miss nuance and produce false positives | Complete all 6 phases |