explore-codebase
User request: $ARGUMENTS
Thoroughness Level
FIRST: Determine level before exploring. Parse from natural language (e.g., "quick", "do a quick search", "thorough exploration", "very thorough") or auto-select if not specified: single entity lookup → quick; single bounded subsystem (per Definitions) → medium; query spanning 2+ bounded subsystems OR explicit interaction queries ("how do X and Y interact") → thorough; "comprehensive"/"all"/"architecture"/"audit" keywords → very-thorough.
Trigger conflicts: When a query contains triggers from multiple levels, use the highest thoroughness level indicated (very-thorough > thorough > medium > quick). Example: "where is the comprehensive auth?" → very-thorough ("comprehensive" overrides "where is").
| Level | Behavior | Triggers |
|---|---|---|
| quick | No research file, no todos, 1-2 search calls (Glob or Grep, not counting Read); if first search returns no results, try one alternative (if Glob failed, use Grep with same keyword; if Grep failed, use a broader Glob pattern) | "where is", "find the", "locate", single entity lookup |
| medium | Research file, 3-5 todos, core implementation + files in import/export statements within core (first-level only) + up to 3 callers; skip tests/config | specific bug, single feature, query about one bounded subsystem |
| thorough | Full logging, trace all imports + all direct callers + test files + config | multi-area feature, "how do X and Y interact", cross-cutting concerns |
| very-thorough | Unbounded exploration up to 100 files; if >100 files match, prioritize by dependency centrality (files with more direct import statements from other files, counting each importing file once) and note "N additional files exist" in overview | "comprehensive", "all", "architecture", "security audit", onboarding |
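The dependency-centrality tie-breaker for very-thorough can be sketched in shell. This is an illustrative sketch only: it assumes a TypeScript codebase under `src/` with ES-style imports, and real exploration uses the Glob/Grep tools rather than raw shell.

```shell
# Rank files by dependency centrality: how many OTHER files directly
# import each one, counting every importing file once.
for f in src/*.ts; do
  mod="$(basename "$f" .ts)"
  # Files containing an import of $mod, deduplicated, excluding $f itself.
  count="$(grep -rlE "from ['\"][^'\"]*/${mod}['\"]" src/ 2>/dev/null | grep -v "^$f\$" | sort -u | wc -l)"
  printf '%s %s\n' "$count" "$f"
done | sort -rn   # most-imported first; keep the top 100
```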
Definitions:
- Search call: One invocation of Glob or Grep (Read does not count toward the 1-2 limit)
- Core implementation: Files containing the primary logic for the queried topic (the files you'd edit to change behavior)
- Peripheral files: Files that interact with the topic but whose primary purpose is something else
- Direct callers: Files that import or invoke the core implementation
- Bounded subsystem: Code reachable within 2 hops of direct file imports or function calls (excluding transitive dependencies through dependency injection containers or event buses)
- First-level imports/exports: Files directly imported by or exporting to core implementation files (depth=1). Does not include files imported by those imports (transitive/depth>1).
- Same-module: Files in the same directory as the target file, or in an immediate subdirectory of the target file's directory
- Caller prioritization (for medium level): Select up to 3 callers total by applying in order: (1) same-module callers first, (2) then callers passing more arguments to the target function, (3) then by search order (order returned by Grep/Glob). Exhaust each tier before proceeding to next.
- Topic-kebab-case: Extract primary subject from query (typically 1-3 key nouns), convert to lowercase, replace spaces with hyphens. Examples: "payment timeout bug" → `payment-timeout`, "files related to authentication" → `authentication`, "how do orders and payments interact" → `orders-payments`
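The conversion itself is mechanical once the subject nouns are extracted; a minimal shell sketch (the subject value is a placeholder):

```shell
# Lowercase the extracted subject and replace spaces with hyphens.
subject="Payment Timeout"   # placeholder: key nouns already extracted from the query
echo "$subject" | tr '[:upper:]' '[:lower:]' | tr ' ' '-'
# → payment-timeout
```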
State: `**Thoroughness**: [level] — [reason]` then proceed.
Scope Boundaries
Check if the prompt includes scope markers. This determines your exploration boundaries.
Scope Detection
If "YOUR ASSIGNED SCOPE:" and "DO NOT EXPLORE:" sections are present:
- STAY WITHIN your assigned scope - go deep on those specific areas
- RESPECT EXCLUSIONS - other processes handle the excluded areas
- If you naturally discover excluded topics while searching, note them as "Out of scope: {discovery}" in research file but don't pursue
- This prevents duplicate work across parallel explorations
If no scope boundaries: Explore the full topic as presented.
Boundary check before each search: Ask "Is this within my assigned scope?" If a search would primarily return files in excluded areas, skip it.
Out-of-Scope Discoveries
When you find something relevant but outside your scope:
```markdown
## Out-of-scope discoveries
- {file or area}: {why it seemed relevant} → covered by {which excluded area}
```

These get reported back so the orchestrating skill can verify coverage.

---

Explore Codebase
Find all files relevant to a specific query so the main context masters that topic without another search.
Loop: Determine thoroughness → Search → Expand todos → Write findings → Repeat (depth varies by level) → Compress output
Research file: `/tmp/explore-{topic-kebab-case}-{YYYYMMDD-HHMMSS}.md` (if /tmp write fails for any reason—permission denied, disk full, or missing—use current working directory instead. If file already exists, append `-2`, `-3`, etc. to filename before extension.)
(Use ISO-like format: year-month-day-hour-minute-second, e.g., `20260110-143052`. Obtain timestamp via `date +%Y%m%d-%H%M%S` in Bash.)
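A minimal Bash sketch of the naming scheme and fallbacks described above (the topic value is a placeholder):

```shell
# Build the research-file path; fall back to $PWD when /tmp is not
# writable, and suffix -2, -3, ... before the extension on collisions.
topic="payment-timeout"        # placeholder topic-kebab-case value
ts="$(date +%Y%m%d-%H%M%S)"    # e.g. 20260110-143052
dir=/tmp
[ -w "$dir" ] || dir="$PWD"
file="$dir/explore-$topic-$ts.md"
n=2
while [ -e "$file" ]; do
  file="$dir/explore-$topic-$ts-$n.md"
  n=$((n + 1))
done
echo "$file"
```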
---

Purpose
Limited context window means exploration tokens are spent now so subsequent work can go directly to relevant files without filling context with search noise.
- Search exhaustively (uses tokens on exploration)
- Return overview + complete file list with line ranges
- Subsequent work reads only those files → context stays focused
- Analysis/problem-solving happens after exploration
Scope: Only files relevant to the query. NOT a general codebase tour.
Relevance criteria: A file is relevant if: (1) it contains logic that implements the queried topic, (2) it directly calls or is called by topic code, (3) it defines types/config used by topic code, or (4) it tests topic behavior. Files that only log, monitor, or incidentally mention the topic are not relevant unless the query specifically asks about logging/monitoring.
Metaphor: Librarian preparing complete reading list. After reading, patron passes any test without returning.
Success test: After reading the files, answer ANY question, make decisions, understand edge cases, know constraints—no second search needed. If another search would be needed, a file was missed.
Phase 1: Initial Scoping
1.0 Determine Thoroughness
State: `**Thoroughness**: [level] — [reason]`. Quick mode: skip research file and todo list; proceed directly to 1-2 search calls, then return results immediately.
1.1 Create todo list (skip for quick)
Todos = areas to explore + write-to-log operations. Start small, expand as discoveries reveal new areas.
Area: A logical grouping of related code (e.g., "JWT handling", "database queries", "error responses"). One area = one todo, followed by a write-to-file todo.
Starter todos (expand during exploration):
- [ ] Create research file
- [ ] Core {topic} implementation
- [ ] Write core findings to research file
- [ ] {topic} dependencies / callers
- [ ] Write dependency findings to research file
- [ ] (expand as discoveries reveal new areas)
- [ ] (expand: write-to-file after each new area)
- [ ] Refresh context: read full research file
- [ ] Compile output

Critical todos (never skip):
- `Write {X} findings to research file` - after EACH exploration area
- `Refresh context: read full research file` - ALWAYS before compile output
1.2 Create research file (skip for quick)
Path: `/tmp/explore-{topic-kebab-case}-{YYYYMMDD-HHMMSS}.md` (use SAME path for ALL updates)

```markdown
# Research: {topic}
Started: {YYYYMMDD-HHMMSS} | Query: {original query}

## Search Log
### {YYYYMMDD-HHMMSS} - Initial scoping
- Searching for: {keywords}
- Areas to explore: {list}

## Findings
(populated incrementally)

## Files Found
(populated incrementally)
```
Phase 2: Iterative Exploration
Quick: Skip — 1-2 search calls, return. Medium: core + first-level imports/exports only + up to 3 callers (per Caller prioritization in Definitions). Thorough: all imports + all direct callers (no transitive) + tests + config. Very-thorough: unbounded transitive exploration up to 100 files.
Exploration Loop (medium, thorough, very-thorough)
1. Mark todo in_progress → 2. Search → 3. Write findings to research file → 4. Expand todos when discoveries reveal new areas → 5. Mark todo complete → 6. Repeat
When to expand todos: Add a new todo when you discover a distinct area not covered by existing todos (e.g., finding Redis session code while exploring auth → add "Session storage in Redis" todo).
Empty results: If a search returns no relevant files: (1) try 2-3 alternative keywords/patterns, (2) if still empty, note in research file and move to next todo, (3) if all searches empty, return overview stating "No files found matching query" with suggested search terms. If the entire codebase appears to have no source files (only config/docs), return overview stating "No source code files found in repository. Found: {list of non-source files}" and suggest verifying repository contents.
NEVER proceed without writing findings first.
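Step (1) of the empty-results rule, sketched with illustrative keywords (real runs use the Grep tool, and the keyword list depends on the query):

```shell
# Try 2-3 alternative keywords before recording "no results" and moving on.
for kw in authenticate login session; do   # illustrative alternatives
  hits="$(grep -rl "$kw" src/ 2>/dev/null || true)"
  if [ -n "$hits" ]; then
    echo "$hits"
    break
  fi
done
```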
Todo Expansion Triggers (medium, thorough, very-thorough)
| Discovery | Add todos for |
|---|---|
| Function call | Trace callers (medium: up to 3 per Caller prioritization; thorough: all direct; very-thorough: transitive) |
| Import | Trace imported module |
| Interface/type | Find implementations |
| Service | Config, tests, callers |
| Route/handler | Middleware, controller, service chain |
| Error handling | Error types, fallbacks |
| Config reference | Config files, env vars |
| Test file | Note test patterns |
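For example, the Interface/type trigger resolves to a single content search (the interface name here is hypothetical, and the real run uses the Grep tool):

```shell
# Interface discovered during exploration → todo: find its implementations.
# "TokenStore" is a hypothetical interface name.
grep -rl 'implements TokenStore' src/ 2>/dev/null || true
```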
Research File Update Format
After EACH search step append:
```markdown
### {YYYYMMDD-HHMMSS} - {what explored}
- Searched: {query/pattern}
- Found: {count} relevant files
- Key files: path/file.ext:lines - {purpose}
- New areas: {list}
- Relationships: {imports, calls}
```

Todo Evolution Example
Query: "Find files related to authentication"
Initial:
- [ ] Create research file
- [ ] Core auth implementation
- [ ] Write core findings to research file
- [ ] Auth dependencies / callers
- [ ] Write dependency findings to research file
- [ ] (expand as discoveries reveal new areas)
- [ ] Refresh context: read full research file
- [ ] Compile final output

After exploring core auth (discovered JWT, Redis sessions, OAuth):
- [x] Create research file
- [x] Core auth implementation → AuthService, middleware/auth.ts
- [x] Write core findings to research file
- [ ] Auth dependencies / callers
- [ ] Write dependency findings to research file
- [ ] JWT token handling
- [ ] Write JWT findings to research file
- [ ] Redis session storage
- [ ] Write Redis findings to research file
- [ ] OAuth providers
- [ ] Write OAuth findings to research file
- [ ] Refresh context: read full research file
- [ ] Compile final output
Phase 3: Compress Output
3.1 Final research file update
```markdown
## Exploration Complete
Finished: {YYYYMMDD-HHMMSS} | Files: {count} | Search calls: {count}

## Summary
{1-3 sentences: what areas were explored and key relationships found}
```
3.2 Refresh context (MANDATORY)
CRITICAL: Complete the "Refresh context: read full research file" todo by reading the FULL research file using the Read tool. This restores ALL findings into context before generating output.
- [x] Refresh context: read full research file ← Must complete BEFORE compile output
- [ ] Compile output

Why this matters: By this point, findings from earlier exploration areas have degraded due to context rot. The research file contains ALL discoveries. Reading it immediately before output moves everything into recent context where attention is strongest.
3.3 Generate structured output (only after 3.2)
```markdown
## OVERVIEW
[Single paragraph, 100-400 words (target 200-300). Under 100 indicates missing information; over 400 indicates non-structural content. If completeness genuinely requires >400 words, include all structural facts but review for prescriptive or redundant content to remove. Describe THE QUERIED TOPIC structure: which files/modules exist, how they connect, entry points, data flow. Include specific details (timeouts, expiry, algorithms) found during exploration. Factual/structural ONLY—NO diagnosis, recommendations, opinions.]

## FILES TO READ
(Only files relevant to the query)

MUST READ:
- path/file.ext:50-120 - [brief reason (<80 chars) why relevant]
SHOULD READ:
- path/related.ext:10-80 - [brief reason (<80 chars)]
REFERENCE:
- path/types.ext - [brief reason (<80 chars)]

## OUT OF SCOPE (if boundaries were provided)
- {file/area}: {why relevant} → excluded because: {which boundary}
```
3.4 Mark all todos complete
Overview Guidelines
Overview describes the queried topic area only, not the whole codebase.
GOOD content (structural knowledge about the topic):
- File organization: "Auth in `src/auth/`, middleware in `src/middleware/auth.ts`"
- Relationships: "login handler → validateCredentials() → TokenService"
- Entry points: "Routes in `routes/api.ts`, handlers in `handlers/`"
- Data flow: "Request → middleware → handler → service → repository → DB"
- Patterns: "Repository pattern, constructor DI"
- Scope: "12 files touch auth; 5 core (files you'd edit to change auth behavior), 7 peripheral (interact with auth but serve other purposes)"
- Key facts: "Tokens 15min expiry, refresh in Redis 7d TTL"
- Dependencies: "Auth needs Redis (sessions) + Postgres (users)"
- Error handling: "401 for auth failures, 403 for invalid tokens"
BAD content (prescriptive—convert to descriptive):
- Diagnosis: "Bug is in validateCredentials() because..."
- Recommendations: "Refactor to use..."
- Opinions: "Poorly structured..."
- Solutions: "Fix by adding null check..."
Overview = dense map of the topic area, not diagnosis or codebase tour.
What You Do NOT Output
- NO diagnosis (describe area, don't identify bugs)
- NO recommendations (don't suggest fixes/patterns)
- NO opinions (don't comment on quality)
- NO solutions (analysis happens after exploration)
Search Strategy
- Extract keywords from query
- Search broadly: Glob (`**/auth/**`, `**/*payment*`), Grep (functions, classes, errors), common locations (`src/`, `lib/`, `services/`, `api/`)
- Follow graph (ADD TODOS FOR EACH, depth per thoroughness level):
  - Imports/exports, callers (medium: up to 3 per Caller prioritization; thorough: all direct; very-thorough: transitive), callees, implementations, usages
- Supporting files (thorough and very-thorough only):
  - Tests (`*.test.*`, `*.spec.*`, `__tests__/`) — expected behavior
  - Config (`config/`, `.env*`, env vars) — runtime behavior
  - Types (`types/`, `*.d.ts`, interfaces) — contracts
  - Error handling (catch blocks, error types, fallbacks)
  - Utilities (shared helpers)
- Non-obvious (very-thorough only):
  - Middleware/interceptors, event handlers, background jobs, migrations, env-specific code
- Verify: Skim files, note specific line ranges (not entire files)
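As a concrete illustration for an auth query (patterns are assumptions mirroring the examples in this document; real runs use the Glob/Grep tools rather than find/grep):

```shell
# Broad first pass: filename globs, then content search, then one import hop.
find src lib services -path '*auth*' -name '*.ts' 2>/dev/null || true
grep -rnE 'class Auth|function (login|authenticate)' src/ 2>/dev/null || true
grep -rl "from .*/auth" src/ 2>/dev/null || true
```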
Priority Criteria
| Priority | Criteria |
|---|---|
| MUST READ | Entry points (where execution starts), core business logic (files you'd edit to change behavior), primary implementation of queried topic |
| SHOULD READ | Direct callers/callees, error handling for the topic, related modules in same domain, test files (thorough+ only), config affecting the topic (thorough+ only) |
| REFERENCE | Type definitions, utility functions used by core, boilerplate/scaffolding, files that mention topic but aren't central to it |
Level overrides priority: Thoroughness level determines which file categories to include. Priority criteria categorizes files within those categories. At medium level, skip SHOULD READ items marked as "thorough+ only" (tests, config, all callers).
Priority order: Level restrictions > Completeness > Brevity. Include all files matching the thoroughness level's scope. Level restrictions (e.g., no tests at medium) are hard limits. Within those limits, prefer completeness over brevity. Include files that interact with the topic in actual code logic. Exclude files where the topic keyword appears only in comments, log strings, or variable names without logic.
Key Principles
| Principle | Rule |
|---|---|
| Scope-adherent | Stay within assigned scope; note out-of-scope discoveries without pursuing |
| Todos with write-to-log | Each exploration area gets a write-to-research-file todo |
| Write before proceed | Write findings BEFORE next search (research file = external memory) |
| Todo-driven | Every new area discovered → new todo + write-to-file todo (no mental notes) |
| Depth by level | Stop at level-appropriate depth (medium: first-level deps + up to 3 callers; thorough: all direct callers+tests+config; very-thorough: transitive, up to 100 files) |
| Incremental | Update research file after EACH exploration area (not at end) |
| Context refresh | Read full research file BEFORE compile output - non-negotiable |
| Compress last | Output only after all todos completed including refresh |
Log Pattern Summary:
- Create research file at start
- Add write-to-file todos after each exploration area
- Write findings after EVERY area before moving to next
- "Refresh context: read full research file" todo before compile
- Read FULL file before generating output (restores all context)
Never Do
- Explore areas in "DO NOT EXPLORE" section (other processes handle those)
- Skip write-to-file todos (every area completion must be written)
- Compile output without completing "Refresh context" todo first
- Keep discoveries as mental notes instead of todos
- Skip todo list (except quick mode)
- Generate output before all todos completed
- Forget to add write-to-file todo for newly discovered areas
Final Checklist
- Scope boundaries respected (if provided)
- Out-of-scope discoveries noted (if any)
- Write-to-file todos completed after each exploration area
- "Refresh context: read full research file" completed before output
- All todos completed (no pending items)
- Research file complete (incremental findings after each step)
- Depth appropriate (medium stops at first-level deps + 3 callers; thorough includes all direct callers+tests+config; very-thorough transitive up to 100 files)
- Coverage matches level (configs, utilities, error handlers, tests for thorough+)
- Overview is 100-400 words (target 200-300), structural only, no opinions
- File list has precise line ranges, prioritized, brief reasons (<80 chars)
Key question: After MUST READ + SHOULD READ, will all questions about this topic be answerable?
Example 1: Payment Timeout Bug
Query: "Find files related to the payment timeout bug"
OVERVIEW
Payment: 3 layers. PaymentController (routes/payments.ts:20-80) HTTP, PaymentService (services/payment.ts) logic, PaymentClient (clients/stripe.ts) external calls. Timeout 30s default in config/payments.ts. Retry logic services/payment.ts:150-200 catches timeouts, retries 3x. Tests: happy path covered, timeout scenarios only tests/payment.test.ts:200-280.
FILES TO READ
MUST READ:
- src/services/payment.ts:89-200 - Core processing, timeout/retry logic
- src/clients/stripe.ts:50-95 - External API calls where timeouts occur
SHOULD READ:
- src/config/payments.ts:1-30 - Timeout configuration
- tests/payments/payment.test.ts:200-280 - Timeout test cases
REFERENCE:
- src/routes/payments.ts:20-80 - HTTP layer
- src/types/payment.ts - Type definitions
**Bad**: "Timeout bug caused by retry logic not respecting total budget. Recommend circuit breaker." — NO. Describe structurally, don't diagnose.

Example 2: Authentication
Query: "Find files related to authentication"
OVERVIEW
JWT (RS256) in httpOnly cookies. 15min expiry, refresh tokens Redis 7d TTL. Flow: POST /login (routes/auth.ts:15-40) → AuthController.login() → AuthService.authenticate() → UserRepository.findByEmail(). Bcrypt cost 12. Middleware middleware/auth.ts validates JWT, attaches user. Refresh: AuthService.refreshToken() issues new token if refresh valid. Logout: clears cookie, blacklists token in Redis (checked every request). Rate limit: 5/15min/IP. Failed logins → audit_logs. OAuth (Google, GitHub) in services/oauth.ts.
FILES TO READ
MUST READ:
- src/services/auth.ts:1-150 - Core auth (authenticate, refresh, logout, tokens)
- src/middleware/auth.ts:15-85 - JWT validation, user context, blacklist check
- src/services/tokenBlacklist.ts:1-60 - Redis token invalidation
SHOULD READ:
- src/routes/auth.ts:15-100 - Routes, validation, rate limiting
- src/repositories/user.ts:30-80 - User lookup, password verify
- src/services/oauth.ts:1-120 - OAuth providers
- src/utils/crypto.ts:10-45 - Hashing, signing utilities
- tests/auth/auth.test.ts:1-250 - Expected behaviors
- tests/auth/auth.integration.ts:1-150 - Redis/DB integration
REFERENCE:
- src/types/auth.ts - Types/interfaces
- src/config/auth.ts - JWT secret, expiry, bcrypt rounds, rate limits
- src/middleware/rateLimit.ts:20-50 - Rate limiting impl
- prisma/schema.prisma:45-70 - User model, audit_logs
Comprehensive coverage — after reading, understand auth completely.
Example 3: ORM Usage
Query: "Find all files that use the ORM"
OVERVIEW
Prisma ORM. Schema prisma/schema.prisma: 8 models (User, Order, Product, Category, Review, Cart, CartItem, Address). Client singleton src/db/client.ts, imported everywhere. Repository pattern: src/repositories/{model}.repository.ts. Services use repositories, never Prisma directly. 12 migrations in prisma/migrations/. Raw queries: repositories/report.repository.ts:50-80 (analytics), repositories/search.repository.ts:30-60 (full-text search).
FILES TO READ
MUST READ:
- prisma/schema.prisma - Model definitions
- src/db/client.ts:1-30 - Prisma singleton
- src/repositories/user.repository.ts:1-120 - Repository pattern example
SHOULD READ:
- src/repositories/order.repository.ts:1-150 - Complex relations
- src/repositories/report.repository.ts:50-80 - Raw SQL
- src/services/user.service.ts:30-100 - Service→repository usage
REFERENCE:
- prisma/migrations/ - 12 migration files
- src/types/db.ts - Generated types