nexus-mapper — AI Project Probing Protocol

"You are not writing code documentation. You are building a cognitive foundation for the next AI that takes over this project."

This Skill guides AI agents through the PROBE 5-Phase Protocol to systematically probe any local Git repository, producing a layered knowledge base in `.nexus-map/`.
⚠️ CRITICAL — Do Not Skip Any of the Five Phases
[!IMPORTANT] You must not produce the final `.nexus-map/` until PROFILE, REASON, OBJECT, and BENCHMARK are complete. This is not about formal completeness; it prevents the AI from writing first-glance assumptions straight into conclusions. The final output must rest on script outputs, repository structure, counterevidence challenges, and re-verification.
❌ Prohibited Actions:
- Skipping OBJECT and directly generating output assets
- Generating `concept_model.json` before BENCHMARK is complete
- Proceeding to subsequent phases after a script failure in the PROFILE phase
✅ Required Actions:
- Explicitly confirm "✅ Phase Name Completed" after finishing each phase before moving to the next
- OBJECT must propose the minimal set of high-value questions sufficient to overturn current assumptions (usually 1 to 3; never padded to hit a count)
- The `code_path` of `implemented` nodes must actually exist in the repository; `planned/inferred` nodes must not fake a `code_path` (see Rule 2)
📌 When to Call / When Not to Call
| Scenario | Call |
|---|---|
| User provides a local repo path and wants the AI to understand its architecture | ✅ |
| Need to generate `.nexus-map/` | ✅ |
| User says "help me analyze the project", "build a project knowledge base", "let the AI understand this repository" | ✅ |
| Runtime environment has no shell execution capability (pure API call mode) | ❌ |
| No local Python 3.10+ on the host machine | ❌ |
| Target repository has no source files in any known language | ❌ |
| User only wants to query a specific file/function → use `query_graph.py` directly | ❌ |
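The decision table above reduces to a simple gate. A minimal sketch, assuming the four conditions above as boolean inputs (the function and parameter names are illustrative, not part of the Skill):

```python
def should_invoke_nexus_mapper(has_shell: bool, has_python_310: bool,
                               has_known_sources: bool, single_file_query: bool) -> bool:
    """Gate from the call/no-call table: every prerequisite must hold,
    and a single file/function lookup goes straight to query_graph.py."""
    if single_file_query:
        return False  # use query_graph.py directly instead of the full PROBE run
    return has_shell and has_python_310 and has_known_sources
```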
⚠️ Prerequisite Checks (Explicitly inform users of missing items; prioritize downgrade over abort when possible)
| Prerequisite | Check Method |
|---|---|
| Target path exists | |
| Python 3.10+ | |
| Script dependencies installed | |
| Shell execution capability | Agent environment supports a shell tool |
📥 Input Contract
repo_path: Local absolute path of the target repository (required)

Language Support: Dispatch is automatic by file extension. Language configurations (extension mappings + Tree-sitter queries) are stored in `scripts/languages.json`. Bundled structural queries are preferred for extracting modules/classes/functions; if a grammar can be loaded but no structural query exists for the language, at least retain Module-level nodes and mark `module-only coverage` in the output. Currently supported common languages include Python/JavaScript/TypeScript/TSX/Bash/Java/Go/Rust/C#/C/C++/Kotlin/Ruby/Swift/Scala/PHP/Lua/Elixir/GDScript/Dart/Haskell/Clojure/SQL/Proto/Solidity/Vue/Svelte/R/Perl.

Unsupported Language Extensions: If the repository contains language files not supported by default, the agent can dynamically add support via command-line parameters:

- `--add-extension .templ=templ`: add a new file extension mapping (repeatable)
- `--add-query templ struct "(component_declaration ...)"`: add a structural query for a language (repeatable)

Query parameter format: `--add-query <LANG> <TYPE> <QUERY_STRING>`, where `<TYPE>` is `struct` or `imports`.

Advanced Usage: For complex configurations, explicitly specify a JSON configuration file with `--language-config <JSON_FILE>`, in the same format as above, allowing extension mappings, custom queries, and explicit marking of unsupported languages.

If the current task involves "adding support for a language not yet adapted" or "adding Tree-sitter support for a non-standard extension", continue by reading `references/05-language-customization.md`. That file is not a phase-gate document; it is a dedicated guide to the command-line extensions and the optional JSON configuration.
📤 Output Format
After execution, the following will be generated in the root directory of the target repository:
```text
.nexus-map/
├── INDEX.md                 ← AI cold-start main entry (< 2000 tokens)
├── arch/
│   ├── systems.md           ← System boundaries + code locations
│   ├── dependencies.md      ← Mermaid dependency graph + sequence diagram
│   └── test_coverage.md     ← Static test surface: test files, covered core modules, evidence gaps
├── concepts/
│   ├── concept_model.json   ← Schema V1 machine-readable graph
│   └── domains.md           ← Core domain concept explanations
├── hotspots/
│   └── git_forensics.md     ← Git hotspots + coupling pair analysis
└── raw/
    ├── ast_nodes.json       ← Tree-sitter parsed raw data
    ├── git_stats.json       ← Git hotspot and coupling data
    └── file_tree.txt        ← Filtered file tree
```

All generated Markdown files must include a short header with at least `generated_by`, `verified_at`, and `provenance`.
The human-readable name field in `concept_model.json` must use `label` uniformly. Do not add `title`; if `title` appears in any generated result, delete it during the EMIT phase and revert to the `label` semantics.

If known but unsupported language files are found during the PROFILE phase, the `provenance` header must explicitly state which parts come from manual inference or downgraded analysis.

If `module-only coverage` occurs during the PROFILE phase, it must also be stated clearly: those languages are counted in AST file coverage, but carry no class/function-level structural guarantee.

If a language declared via an override configuration still fails to load its parser during the PROFILE phase, that must be stated clearly too: it is `configured-but-unavailable` and must not be passed off as covered.
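The exact header layout is not prescribed above. As one possible shape, assuming a YAML front-matter style (the field names come from the list above; all values are illustrative):

```md
---
generated_by: nexus-mapper PROBE
verified_at: 2026-02-11
provenance: extract_ast.py + git_detective.py; manually inferred sections are marked inline
---
```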
🔍 On-Demand Query Tool
`scripts/query_graph.py` works directly on `raw/ast_nodes.json`. Zero extra dependencies — pure Python standard library; just pass it the path to `ast_nodes.json` to run.
Query Modes
```bash
# View the class/function structure and import list of a file
python $SKILL_DIR/scripts/query_graph.py <ast_nodes.json> --file <path>

# Reverse dependency query: who imports this module
python $SKILL_DIR/scripts/query_graph.py <ast_nodes.json> --who-imports <module_or_path>

# Impact radius: upstream dependencies + downstream dependents
python $SKILL_DIR/scripts/query_graph.py <ast_nodes.json> --impact <path>

# Overlay git risk and coupling data (optional)
python $SKILL_DIR/scripts/query_graph.py <ast_nodes.json> --impact <path> \
    --git-stats <git_stats.json>

# Identify core nodes with high fan-in/fan-out
python $SKILL_DIR/scripts/query_graph.py <ast_nodes.json> --hub-analysis [--top N]

# Directory-aggregated structural summary (data support for systems.md in the EMIT phase)
python $SKILL_DIR/scripts/query_graph.py <ast_nodes.json> --summary
```
Usage Timing
| Phase | Recommended Query | Purpose |
|---|---|---|
| REASON | `--hub-analysis` | Verify core-system hypotheses with fan-in/fan-out data instead of guessing from directory names |
| OBJECT | `--impact` + `--git-stats` | Verify boundary assumptions against real upstream/downstream dependencies; overlay git heat and coupling |
| EMIT | `--summary` | Provide data support for generating `systems.md` |
| During development | `--file` / `--impact` | Bug investigation, change impact assessment, refactoring analysis |
Positioning: `.nexus-map/` is the "map"; `query_graph.py` is the "magnifying glass". The map locates the general direction; the magnifying glass resolves local detail.

Detailed usage scenarios and practical cases for the five query modes are in `references/06-query-guide.md`.
🧠 Persistent Instructions
To prevent new sessions from forgetting to read the existing knowledge base, write the following concise rules into the host tool's persistent instruction file, such as `AGENTS.md`, `CLAUDE.md`, or a similar memory file:

```md
If .nexus-map/INDEX.md exists in the repository, read it first to restore global context before starting any task.

If a task requires judging local structure, dependency relationships, impact radius, or boundary attribution, first re-read the on-demand query instructions of the nexus-mapper skill and verify with query_graph.py against .nexus-map/raw/ast_nodes.json; do not re-guess the structure.

When a task changes the project's structural understanding, evaluate before delivery whether .nexus-map should be updated in sync. Structural understanding includes: system boundaries, entry points, dependency relationships, test surfaces, language support, roadmaps, or phased progress facts. Purely local implementation details do not require updates by default.

Do not treat .nexus-map as static documentation; it is part of the project's memory. New conversations read it first; sync it on demand after important changes.
```

This rule is recommended for the host's persistent memory so that the agent naturally remembers to read or update `.nexus-map` exactly when it is needed.
📋 PROBE Phase Hard Gates
[!IMPORTANT] You must `read_file` the corresponding reference before entering each phase; do not skip it. Detailed steps, completion checklists, and boundary-scenario handling for each phase are defined in the references.
[When the Skill is activated] → read_file references/01-probe-protocol.md (phase step blueprint)
[Before REASON] → read_file references/03-edge-cases.md (boundary scenario check)
[Before OBJECT] → read_file references/04-object-framework.md (three-dimension questioning template)
[Before EMIT] → read_file references/02-output-schema.md (schema verification spec)
🛡️ Execution Rules
Rule 1: OBJECT Rejects Formalism

The purpose of OBJECT is to break the survivorship bias of REASON. Many engineering facts hide behind directory names and git hotspots; first intuitions are almost always wrong.
❌ Invalid Questions (Prohibited):

Q1: My grasp of the system structure is not solid enough
Q2: The responsibility of the xxx directory has no direct evidence for now

▲ The problem is not that particular words were used, but that such statements carry no evidence clue and cannot be verified in the BENCHMARK phase.

✅ Valid Question Format:

Q1: git_stats shows tasks/analysis_tasks.py changed 21 times (high risk),
    but the HYPOTHESIS takes evolution/detective_loop.py as the orchestration entry point.
    Contradiction: if detective_loop is the entry point, why is analysis_tasks hotter?
    Evidence clue: git_stats.json hotspots[0].path
    Verification plan: view the class definitions + import tree of tasks/analysis_tasks.py
Rule 2: `implemented` Nodes Must Have a Real `code_path`
[!IMPORTANT] Before writing to `concept_model.json`, you must first distinguish whether a node is `implemented`, `planned`, or `inferred`. Only `implemented` nodes may carry a `code_path`, and you must personally verify that it exists.
```bash
# Verification method in the BENCHMARK phase
ls $repo_path/src/nexus/application/weaving/   # ✅ directory exists → node is valid
ls $repo_path/src/nexus/application/nonexist/  # ❌ [!ERROR] → correct or delete this node
```
For `planned` or `inferred` nodes, use:
```json
{
"implementation_status": "planned",
"code_path": null,
"evidence_path": "docs/architecture.md",
"evidence_gap": "src/agents/monarch/ not found in the repository, only mentioned in design documents"
}
```

❌ Prohibited:
- Using a "barely related" file to pass as `code_path`
- Writing a pseudo-precise directory when `implementation_status` is `planned/inferred`
- Writing `code_path: "PLANNED"`, which stuffs a status into the path field
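The same check can be scripted. A minimal sketch of the BENCHMARK validation, assuming `implemented` nodes carry `implementation_status: "implemented"` (the function name is illustrative; the field names match the JSON example above):

```python
from pathlib import Path

def validate_node(repo: Path, node: dict) -> bool:
    """implemented nodes need a code_path that really exists in the repo;
    planned/inferred nodes must keep code_path null."""
    status = node.get("implementation_status")
    code_path = node.get("code_path")
    if status == "implemented":
        return bool(code_path) and (repo / code_path).exists()
    # planned / inferred: no pseudo-precise paths allowed
    return code_path is None
```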
Rule 3: EMIT Atomicity
First write everything to `.nexus-map/.tmp/`; after all outputs succeed, move them as a whole into the official directory and delete `.tmp/`.

Purpose: a mid-run failure leaves no half-finished products. If `.tmp/` is detected on the next execution → clean it up and regenerate.

✅ Idempotency Rules:

| Status | Handling Method |
|---|---|
| No existing `.nexus-map/` | Proceed directly |
| `.nexus-map/` already exists | Ask the user: "Overwrite? [y/n]" |
| `.nexus-map/.tmp/` exists | Say "Incomplete analysis detected, will regenerate", then proceed |
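The staging flow above can be sketched as follows; only the stage-then-promote choreography comes from this rule, and the `write_outputs` callback is hypothetical:

```python
import shutil
from pathlib import Path

def emit_atomically(repo: Path, write_outputs) -> None:
    """Stage all outputs in .nexus-map/.tmp/ and promote them only on full success."""
    out = repo / ".nexus-map"
    tmp = out / ".tmp"
    if tmp.exists():
        # Leftover from an interrupted run: clean up and regenerate
        shutil.rmtree(tmp)
    tmp.mkdir(parents=True, exist_ok=True)
    write_outputs(tmp)  # any exception here leaves the official directory untouched
    for item in list(tmp.iterdir()):
        target = out / item.name
        if target.is_dir():
            shutil.rmtree(target)   # overwrite a previously promoted directory
        elif target.exists():
            target.unlink()         # overwrite a previously promoted file
        shutil.move(str(item), str(target))
    tmp.rmdir()  # no half-finished products remain
```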
Rule 4: INDEX.md is the Only Cold-Start Entry
The reader of `INDEX.md` is an AI that has never seen this repository before. Two hard constraints:

- < 2000 tokens — rewrite if exceeded, do not truncate
- Conclusions must be specific — do not hide behind vague filler; when evidence is insufficient, explicitly write `unknown` or `evidence gap` and explain what evidence is missing

After writing, estimate tokens: number of lines × average 30 tokens/line = rough estimate.
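The rough budget check above is trivial to automate; a minimal sketch (function names are illustrative):

```python
def estimate_index_tokens(markdown: str, tokens_per_line: int = 30) -> int:
    """Rule 4 rough estimate: line count x ~30 tokens/line."""
    return len(markdown.splitlines()) * tokens_per_line

def needs_rewrite(markdown: str, budget: int = 2000) -> bool:
    # Over budget means rewrite INDEX.md, not truncate it
    return estimate_index_tokens(markdown) > budget
```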
🧭 Uncertainty Expression Specifications
Avoid writing only: pending · maybe · possibly · perhaps · TBD · to be confirmed · suspected · unclear · needs further investigation · uncertain

If evidence is insufficient, write it like this instead:

unknown: No direct evidence found indicating api/ is the main entry; currently only confirmed that cli.py is referenced in the README
evidence gap: No git history in the repository, so the hotspots section is skipped

Principle: It is fine to honestly write uncertainty, but you must explain which missing piece of evidence the uncertainty comes from, instead of using vague words as conclusions.
Rule 5: Minimal Execution Surface and Sensitive Information Protection
[!IMPORTANT] By default, only run scripts included with this Skill and necessary read-only checks. Do not execute build scripts, test scripts, or custom commands in the target repository just because "you want to understand the repository better".
- Allowed by default: `extract_ast.py`, `git_detective.py`, directory traversal, text search, read-only file viewing
- Prohibited by default: executing the target repository's `npm install`, `pnpm dev`, `python main.py`, `docker compose up`, and similar commands, unless the user explicitly asks
- When encountering `.env`, key files, or credential configurations: only record their existence and purpose; do not copy out concrete values
Rule 6: Downgrades and Manual Inferences Must Be Explicitly Visible
[!IMPORTANT] If AST coverage is incomplete, or if part of the dependency graph/system boundary comes from manual reading rather than script output, you must explicitly mark the provenance in the final file.
- In `dependencies.md`, any dependency relationship not directly supported by AST must be marked `inferred from file tree/manual inspection`
- If `INDEX.md`, `domains.md`, or `systems.md` involve unsupported language areas, explicitly state `unsupported language downgrade`
- If writing progress snapshots, Sprint status, or roadmaps, attach `verified_at` so that outdated information cannot masquerade as current fact
🛠️ Script Toolchain
```bash
# Set SKILL_DIR (based on the actual installation path)

# Scenario A: installed as .agent/skills
SKILL_DIR=".agent/skills/nexus-mapper"

# Scenario B: standalone repo (for development/debugging)
SKILL_DIR="/path/to/nexus-mapper"

# Call in PROFILE phase — basic usage
python $SKILL_DIR/scripts/extract_ast.py <repo_path> [--max-nodes 500] \
    <repo_path>/.nexus-map/raw/ast_nodes.json

# If the repository contains non-standard languages, add support via command-line parameters
python $SKILL_DIR/scripts/extract_ast.py <repo_path> [--max-nodes 500] \
    --add-extension .templ=templ \
    --add-query templ struct "(component_declaration name: (identifier) @class.name) @class.def" \
    <repo_path>/.nexus-map/raw/ast_nodes.json

# For complex configurations, use a JSON file (see the --language-config description for the format)
python $SKILL_DIR/scripts/extract_ast.py <repo_path> [--max-nodes 500] \
    --language-config /custom/path/to/language-config.json \
    <repo_path>/.nexus-map/raw/ast_nodes.json

# Generate the filtered file tree in the same PROFILE run
python $SKILL_DIR/scripts/extract_ast.py <repo_path> [--max-nodes 500] \
    --file-tree-out .nexus-map/raw/file_tree.txt \
    <repo_path>/.nexus-map/raw/ast_nodes.json
```

**Dependency Installation (First Use)**:

```bash
pip install -r $SKILL_DIR/scripts/requirements.txt
```
✅ Quality Self-Inspection (Must Pass All Before EMIT)
- All five phases are completed, each with an explicit "✅ Completed" mark
- The OBJECT questions are not padded; each carries an evidence clue and an executable verification plan
- The `code_path` of every `implemented` node has been personally verified to exist; `planned/inferred` nodes use `implementation_status + evidence_path + evidence_gap`
- The `responsibility` field is specific and verifiable; gaps are explicitly explained when evidence is insufficient
- The full text of `INDEX.md` is < 2000 tokens; conclusions are specific without feigning certainty (Rule 4)
- If unsupported language files were found, the downgrade and the scope of manual inference are explicitly marked in the final Markdown headers and relevant sections
- `arch/test_coverage.md` has been generated, clearly stating that it is a static test surface rather than runtime coverage