# dev-rlm: Recursive Language Model (RLM) Skill
## Core Philosophy
"Context is an external resource, not a local variable."
Three principles:

- Never load what you can query — The filesystem is a database. Use `rlm.py` to query it.
- The model decides the strategy — No fixed modes. Assess the task, pick the approach.
- Recurse when complexity demands it — If a sub-task is too complex for one agent, that agent spawns its own sub-agents.
## Context Engine (rlm.py)
The streaming query engine for filesystem interaction. It never loads all files into RAM.

```bash
# Codebase overview (no file reads)
python3 ~/.claude/skills/rlm/scripts/dev-rlm.py stats
python3 ~/.claude/skills/rlm/scripts/dev-rlm.py stats --type py

# Regex search across files (streaming)
python3 ~/.claude/skills/rlm/scripts/dev-rlm.py grep "pattern" --type py

# Substring search with context window
python3 ~/.claude/skills/rlm/scripts/dev-rlm.py peek "error_handler" --context 300

# Read single file or line range
python3 ~/.claude/skills/rlm/scripts/dev-rlm.py read src/auth/login.py --lines 50-100

# Partition files for agent distribution
python3 ~/.claude/skills/rlm/scripts/dev-rlm.py chunk --type py --size 15 --output /tmp/rlm_chunks.json
```

All commands support `--output /path/to/file.json` to write results to a file.

**Fallback**: If rlm.py is unavailable, use native tools: Grep, Glob, Read, `rg`, `find`.
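The `--output` JSON can be consumed directly by downstream tooling. A minimal sketch, assuming a hypothetical `chunk` schema (a list of partitions, each with a `files` array — the real `dev-rlm.py chunk` format may differ):

```python
import json
import tempfile

# Hypothetical chunk output: the real schema of `dev-rlm.py chunk` may differ;
# this sketch assumes a list of partitions, each with a "files" array.
sample_chunks = [
    {"partition": 1, "files": ["src/auth/login.py", "src/auth/token.py"]},
    {"partition": 2, "files": ["src/api/routes.py"]},
]

chunks_path = tempfile.NamedTemporaryFile(suffix=".json", delete=False).name
with open(chunks_path, "w") as f:
    json.dump(sample_chunks, f)

# Build one agent prompt per partition from the JSON on disk.
with open(chunks_path) as f:
    partitions = json.load(f)

prompts = [
    "Analyze these files for security issues:\n"
    + "\n".join(p["files"])
    + f"\nWrite findings to /tmp/rlm_p{p['partition']}.json"
    for p in partitions
]
```

Each prompt in `prompts` is then handed to one general-purpose agent.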
## Pipeline: Index → Filter → Map → Reduce
### 1. Index

Discover structure without reading file content.

```bash
python3 ~/.claude/skills/rlm/scripts/dev-rlm.py stats
python3 ~/.claude/skills/rlm/scripts/dev-rlm.py stats --type py
```

### 2. Filter
Narrow candidates programmatically.

```bash
python3 ~/.claude/skills/rlm/scripts/dev-rlm.py grep "TODO|FIXME|HACK" --type py
rg -l "error" --type py
```

### 3. Map (Parallel Agents)
Distribute filtered work across agents. See Strategy Selection below.

### 4. Reduce
Aggregate results from /tmp/.
```bash
jq -s '.' /tmp/rlm_*.json > /tmp/rlm_report.json
jq -s '[.[].findings] | add | group_by(.severity)' /tmp/rlm_*.json
```
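When jq is unavailable, the same merge can be done in plain Python. A sketch assuming the `{"partition": n, "findings": [...]}` shape used by the Partition + Map prompt later in this document:

```python
import glob
import json
import os
import tempfile
from collections import defaultdict

# Reduce step without jq. Assumes each partition file has the shape used in
# the Partition + Map prompt:
#   {"partition": n, "findings": [{"file", "line", "issue", "severity"}]}
def reduce_findings(pattern):
    merged = []
    for path in sorted(glob.glob(pattern)):
        with open(path) as f:
            merged.extend(json.load(f).get("findings", []))
    by_severity = defaultdict(list)  # group like jq's group_by(.severity)
    for finding in merged:
        by_severity[finding["severity"]].append(finding)
    return dict(by_severity)

# Demo with synthetic partition files (stand-ins for real agent output).
workdir = tempfile.mkdtemp()
for i, sev in enumerate(["high", "low", "high"]):
    with open(os.path.join(workdir, f"rlm_p{i}.json"), "w") as f:
        json.dump({"partition": i,
                   "findings": [{"file": "a.py", "line": 1,
                                 "issue": "demo", "severity": sev}]}, f)

report = reduce_findings(os.path.join(workdir, "rlm_*.json"))
```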
## Strategy Selection
Assess the task. Pick the strategy that fits. Combine strategies within a single analysis.
### Strategies
| Strategy | When | Agent Type | Agents |
|---|---|---|---|
| Peek | Quick answer, few files relevant | None (main context) | 0 |
| Grep + Read | Pattern in known locations | None (main context) | 0 |
| Fan-out Explore | Question about code behavior/patterns | Explore | 2-5 |
| Partition + Map | Systematic analysis of many files | general-purpose | 3-8 |
| Recursive Decompose | Partitions still complex | general-purpose | 2-4 per level |
| Summarize + Drill | Large result set needs synthesis first | Mixed | 2-6 |
### Selection Logic
- Run Index (`stats`). How many candidate files?
  - < 5 files: Peek or Grep+Read. Handle in main context. No agents needed.
  - 5-50 files: Fan-out Explore (questions) or Partition+Map (analysis).
  - 50-200 files: Partition+Map with coarse grouping. Consider Recursive Decompose if partitions remain complex.
  - 200+ files: Recursive Decompose. Split into domains at depth 0, let workers decide the depth-1 strategy.

Do NOT pick a strategy before running Index. Let the data decide.
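The thresholds above can be sketched as a small decision helper (illustrative names, not part of rlm.py; the 50-file boundary is resolved here toward the smaller bucket):

```python
def pick_strategy(candidate_files: int, is_question: bool = False) -> str:
    """Map the Index file count to a strategy, per the thresholds above."""
    if candidate_files < 5:
        return "peek-or-grep-read"        # main context, no agents
    if candidate_files <= 50:
        return "fan-out-explore" if is_question else "partition-map"
    if candidate_files <= 200:
        return "partition-map"            # coarse grouping; recurse if needed
    return "recursive-decompose"          # split into domains at depth 0
```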
## Agent Patterns

### Fan-out Explore
Deploy Explore agents with complementary perspectives.

```
Task(
    description="Trace error propagation paths",
    prompt="Search for error handling patterns in this codebase.
            Focus on: try/catch, error types, propagation chains.
            Write summary to /tmp/rlm_errors.md",
    subagent_type="Explore",
    run_in_background=true
)
```

Assign each agent a distinct angle: architecture, patterns, specific modules, tests, dependencies.
### Partition + Map
Split files into groups. Each general-purpose agent processes a partition.

```
Task(
    description="Analyze auth module (partition 1/4)",
    prompt="Analyze these files for security issues:
            [file list from rlm.py chunk output]
            Write findings to /tmp/rlm_p1.json as JSON:
            {\"partition\": 1, \"findings\": [{\"file\": \"\", \"line\": 0, \"issue\": \"\", \"severity\": \"\"}]}",
    subagent_type="general-purpose",
    run_in_background=true
)
```

Partition sources: `rlm.py chunk` output, directory boundaries, file type grouping.

### Collect Results
```
TaskOutput(task_id=<agent_id>, block=true, timeout=120000)
```

## Recursive Decomposition
When a sub-task is too complex for a single agent, that agent spawns its own sub-agents. Only general-purpose agents can recurse (Explore agents cannot spawn agents).

### When to Recurse
An agent should recurse when:
- Its assigned partition has 50+ files and the analysis requires understanding, not just scanning
- It discovers distinct sub-problems (e.g., "this module has 3 independent subsystems")
- The prompt explicitly allows recursion
### Depth Control
| Level | Role | Max Agents | Spawns? |
|---|---|---|---|
| 0 (Main) | Orchestrator | 5 | Yes |
| 1 (Worker) | Domain analyzer | 3 per worker | Yes |
| 2 (Leaf) | Module specialist | 0 | Never |
Hard limits:
- Max recursion depth: 2 (main → worker → leaf)
- Max total agents: 15 across all levels
- Leaf agents MUST NOT spawn sub-agents
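The hard limits can be checked mechanically before any spawn. A sketch (the budget-tracking parameter names are illustrative):

```python
MAX_DEPTH = 2                       # main (0) -> worker (1) -> leaf (2)
MAX_TOTAL_AGENTS = 15               # across all levels
MAX_CHILDREN = {0: 5, 1: 3, 2: 0}   # per-level spawn budget from the table

def may_spawn(depth: int, spawned_at_level: int, total_agents: int) -> bool:
    """Return True only if one more sub-agent stays inside every hard limit."""
    if depth >= MAX_DEPTH:                        # leaves never spawn
        return False
    if spawned_at_level >= MAX_CHILDREN[depth]:   # per-level budget exhausted
        return False
    return total_agents + 1 <= MAX_TOTAL_AGENTS   # global budget
```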
### Recursive Agent Prompt Template
Include these instructions when spawning agents that may recurse:

```
"You are analyzing [SCOPE]. You may spawn up to [N] sub-agents if needed.
RECURSION RULES:
- Current depth: [D]. Max depth: 2.
- If depth=2, you are a leaf. Do NOT spawn agents.
- Only recurse if your scope has 50+ files or distinct sub-problems.
- Each sub-agent writes to /tmp/rlm_d[D+1]_[ID].json
- After sub-agents complete, merge their results into your output file."
```
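Filling the placeholders programmatically keeps the rules consistent across spawns. A sketch whose text mirrors the template above (the function name is illustrative):

```python
def recursion_prompt(scope: str, depth: int, max_children: int) -> str:
    """Render the recursion-rules preamble for a sub-agent at `depth`."""
    leaf_note = ("You are a leaf. Do NOT spawn agents."
                 if depth >= 2 else
                 f"You may spawn up to {max_children} sub-agents if needed.")
    return (
        f"You are analyzing {scope}. {leaf_note}\n"
        "RECURSION RULES:\n"
        f"- Current depth: {depth}. Max depth: 2.\n"
        "- Only recurse if your scope has 50+ files or distinct sub-problems.\n"
        f"- Each sub-agent writes to /tmp/rlm_d{depth + 1}_[ID].json\n"
        "- After sub-agents complete, merge their results into your output file."
    )
```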
### Output Routing
| Depth | Output Path | Merged By |
|---|---|---|
| 2 (leaf) | `/tmp/rlm_d2_[ID].json` | Depth-1 parent |
| 1 (worker) | `/tmp/rlm_d1_[ID].json` | Main orchestrator |
| 0 (main) | `/tmp/rlm_report.json` | Main context |
## Guardrails

### Limits
| Metric | Limit |
|---|---|
| Max concurrent agents (any level) | 5 |
| Max total agents (all levels) | 15 |
| Max recursion depth | 2 |
| Max files per leaf agent | 20 |
| Timeout per agent | 120s |
| Max spawn rounds (main orchestrator) | 3 |
### Iteration Control
- Each round of agent spawning should have a clear purpose
- If 2 rounds produce no new information, stop
- Never "try again" — refine the query or change strategy
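The two-unproductive-rounds rule can be tracked with a simple counter. A sketch (`new_info` stands for whatever novelty signal the orchestrator uses; the names are illustrative):

```python
def should_stop(rounds_without_new_info: int, new_info: bool) -> tuple:
    """Apply the stop rule after each spawn round.

    Returns (stop, updated_streak): stop is True once two consecutive
    rounds have produced no new information.
    """
    streak = 0 if new_info else rounds_without_new_info + 1
    return streak >= 2, streak
```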
### Token Protection
- Agents write to `/tmp/rlm_*`, not to main context
- Main context reads only summaries and aggregated JSON
- Never `cat` agent output files raw into main context
## Constraints

### Never
- `cat *` or load entire codebases into context
- Spawn agents without running Index (`stats`) first
- Skip the Filter stage for 50+ file codebases
- Exceed depth or agent limits
- Load raw rlm.py output into main context for large results
### Always
- Use `rlm.py stats` before choosing a strategy
- Filter with `rlm.py grep` or `rg` before spawning agents
- Write agent outputs to `/tmp/rlm_*`
- Include recursion depth and limits in recursive agent prompts
- Clean up `/tmp/rlm_*` after delivering results
## Fallback (Without rlm.py)
If rlm.py is unavailable, use native Claude Code tools:

| rlm.py command | Native equivalent |
|---|---|
| `stats` | Glob or `find` |
| `grep` | Grep tool or `rg` |
| `peek` | Grep tool with context |
| `read` | Read tool with offset/limit |
| `chunk` | Glob + manual partitioning |

The pipeline and strategy selection remain the same. Only the tooling changes.
## Integration
- rlm.py for Index/Filter stages
- Explore agents for fan-out investigation
- general-purpose agents for partition+map and recursive decomposition
- rg/grep as rlm.py fallback for Filter
- cli-jq for Reduce stage (merge and filter results)
## Quick Reference
See `quick-reference.md` for the decision tree and command patterns.

## Credits
Based on the RLM paradigm (arXiv:2512.24601). Original skill by BowTiedSwan.