
# Recursive Language Model (RLM) Skill


## Core Philosophy


"Context is an external resource, not a local variable."
Three principles:
  1. Never load what you can query — Filesystem is a database. Use
    rlm.py
    to query it.
  2. The model decides the strategy — No fixed modes. Assess the task, pick the approach.
  3. Recurse when complexity demands it — If a sub-task is too complex for one agent, that agent spawns its own sub-agents.

## Context Engine (rlm.py)


The streaming query engine for filesystem interaction. It never loads all files into RAM.

### Codebase overview (no file reads)


```bash
python3 ~/.claude/skills/rlm/scripts/dev-rlm.py stats
python3 ~/.claude/skills/rlm/scripts/dev-rlm.py stats --type py
```

### Regex search across files (streaming)


```bash
python3 ~/.claude/skills/rlm/scripts/dev-rlm.py grep "pattern" --type py
```

### Substring search with context window


```bash
python3 ~/.claude/skills/rlm/scripts/dev-rlm.py peek "error_handler" --context 300
```

### Read single file or line range


```bash
python3 ~/.claude/skills/rlm/scripts/dev-rlm.py read src/auth/login.py --lines 50-100
```

### Partition files for agent distribution


```bash
python3 ~/.claude/skills/rlm/scripts/dev-rlm.py chunk --type py --size 15 --output /tmp/rlm_chunks.json
```

All commands support `--output /path/to/file.json` to write results to file.

**Fallback**: If rlm.py is unavailable, use native tools: Grep, Glob, Read, `rg`, `find`.

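The grouping that `chunk` performs can be sketched in a few lines of Python. This is an illustration only, not the dev-rlm.py implementation (the real tool also filters by type and writes JSON); the file names below are hypothetical stand-ins for a real scan result:

```python
import json

def chunk_files(paths, size):
    """Split a file list into partitions of at most `size` files each."""
    return [paths[i:i + size] for i in range(0, len(paths), size)]

# Hypothetical file list standing in for a real scan.
files = [f"src/module_{n}.py" for n in range(37)]
partitions = chunk_files(files, 15)

# One record per partition, loosely mirroring the --output shape.
report = [{"partition": i + 1, "files": p} for i, p in enumerate(partitions)]
print(len(partitions))  # 37 files at 15 per chunk yield 3 partitions
print(json.dumps(report[0]["files"][:2]))
```

Each partition then becomes one agent's work list in the Map stage.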

## Pipeline: Index → Filter → Map → Reduce


### 1. Index


Discover structure without reading file content.

```bash
python3 ~/.claude/skills/rlm/scripts/dev-rlm.py stats
python3 ~/.claude/skills/rlm/scripts/dev-rlm.py stats --type py
```

### 2. Filter


Narrow candidates programmatically.

```bash
python3 ~/.claude/skills/rlm/scripts/dev-rlm.py grep "TODO|FIXME|HACK" --type py
rg -l "error" --type py
```

### 3. Map (Parallel Agents)


Distribute filtered work across agents. See Strategy Selection below.

### 4. Reduce


Aggregate results from /tmp/.

```bash
jq -s '.' /tmp/rlm_*.json > /tmp/rlm_report.json
jq -s '[.[].findings] | add | group_by(.severity)' /tmp/rlm_*.json
```
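When jq is not available, the same reduce step can be done in Python. A minimal sketch, assuming each partition file holds a `findings` list with a `severity` field, as in the partition report schema used later in this document:

```python
import glob
import json
from collections import defaultdict

def reduce_reports(pattern="/tmp/rlm_*.json"):
    """Merge partition reports and group findings by severity."""
    by_severity = defaultdict(list)
    for path in sorted(glob.glob(pattern)):
        with open(path) as f:
            report = json.load(f)
        for finding in report.get("findings", []):
            by_severity[finding["severity"]].append(finding)
    return dict(by_severity)
```

The main context then reads only the grouped summary, never the raw partition files.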

## Strategy Selection


Assess the task. Pick the strategy that fits. Combine strategies within a single analysis.

### Strategies


| Strategy | When | Agent Type | Agents |
|---|---|---|---|
| Peek | Quick answer, few files relevant | None (main context) | 0 |
| Grep + Read | Pattern in known locations | None (main context) | 0 |
| Fan-out Explore | Question about code behavior/patterns | Explore | 2-5 |
| Partition + Map | Systematic analysis of many files | general-purpose | 3-8 |
| Recursive Decompose | Partitions still complex | general-purpose | 2-4 per level |
| Summarize + Drill | Large result set needs synthesis first | Mixed | 2-6 |

### Selection Logic


1. Run Index (`stats`). How many candidate files?
2. < 5 files: Peek or Grep+Read. Handle in main context. No agents needed.
3. 5-50 files: Fan-out Explore (questions) or Partition+Map (analysis).
4. 50-200 files: Partition+Map with coarse grouping. Consider Recursive Decompose if partitions remain complex.
5. 200+ files: Recursive Decompose. Split into domains at depth 0, let workers decide depth-1 strategy.

Do NOT pick a strategy before running Index. Let the data decide.
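The thresholds above reduce to a small decision function. An illustrative sketch only; the strategy names come from the table, the function itself is not part of the skill:

```python
def pick_strategy(file_count):
    """Map a candidate-file count from Index to a starting strategy."""
    if file_count < 5:
        return "peek-or-grep-read"        # main context, no agents
    if file_count <= 50:
        return "fan-out-or-partition-map" # questions vs. analysis
    if file_count <= 200:
        return "partition-map"            # coarse grouping, maybe decompose
    return "recursive-decompose"          # domains at depth 0

for n in (3, 40, 120, 500):
    print(n, pick_strategy(n))
```

The input is always the file count reported by `stats`, never a guess.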

## Agent Patterns


### Fan-out Explore


Deploy Explore agents with complementary perspectives.

```
Task(
  description="Trace error propagation paths",
  prompt="Search for error handling patterns in this codebase.
  Focus on: try/catch, error types, propagation chains.
  Write summary to /tmp/rlm_errors.md",
  subagent_type="Explore",
  run_in_background=true
)
```

Assign each agent a distinct angle: architecture, patterns, specific modules, tests, dependencies.

### Partition + Map


Split files into groups. Each general-purpose agent processes a partition.

```
Task(
  description="Analyze auth module (partition 1/4)",
  prompt="Analyze these files for security issues:
  [file list from rlm.py chunk output]

  Write findings to /tmp/rlm_p1.json as JSON:
  {\"partition\": 1, \"findings\": [{\"file\": \"\", \"line\": 0, \"issue\": \"\", \"severity\": \"\"}]}",
  subagent_type="general-purpose",
  run_in_background=true
)
```

Partition sources: `rlm.py chunk` output, directory boundaries, file type grouping.
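Before reducing, it can help to sanity-check each partition report against the expected shape. A hedged sketch; the four finding keys come from the JSON schema in the prompt above, but no such validator ships with the skill:

```python
REQUIRED_KEYS = {"file", "line", "issue", "severity"}

def valid_report(report):
    """Check a partition report matches the {partition, findings[]} shape."""
    if not isinstance(report.get("partition"), int):
        return False
    findings = report.get("findings")
    if not isinstance(findings, list):
        return False
    # Every finding must be a dict carrying at least the required keys.
    return all(isinstance(f, dict) and REQUIRED_KEYS <= f.keys() for f in findings)
```

Reports that fail the check can be sent back to their agent rather than corrupting the merged output.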

### Collect Results


```
TaskOutput(task_id=<agent_id>, block=true, timeout=120000)
```

## Recursive Decomposition


When a sub-task is too complex for a single agent, that agent spawns its own sub-agents. Only `general-purpose` agents can recurse (Explore agents cannot spawn agents).

### When to Recurse


An agent should recurse when:

- Its assigned partition has 50+ files and the analysis requires understanding, not just scanning
- It discovers distinct sub-problems (e.g., "this module has 3 independent subsystems")
- The prompt explicitly allows recursion

### Depth Control


| Level | Role | Max Agents | Spawns? |
|---|---|---|---|
| 0 (Main) | Orchestrator | 5 | Yes |
| 1 (Worker) | Domain analyzer | 3 per worker | Yes |
| 2 (Leaf) | Module specialist | 0 | Never |

Hard limits:

- Max recursion depth: 2 (main → worker → leaf)
- Max total agents: 15 across all levels
- Leaf agents MUST NOT spawn sub-agents
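These limits can be enforced mechanically. A sketch of the spawn guard the rules imply (illustrative only; no such helper ships with the skill, and the constants come from the Guardrails section):

```python
MAX_DEPTH = 2          # main -> worker -> leaf
MAX_TOTAL_AGENTS = 15  # across all levels
MAX_CONCURRENT = 5     # any single level

def may_spawn(depth, total_spawned, requested):
    """Return how many sub-agents an agent at `depth` may actually spawn."""
    if depth >= MAX_DEPTH:  # leaves never spawn
        return 0
    budget = MAX_TOTAL_AGENTS - total_spawned
    return max(0, min(requested, MAX_CONCURRENT, budget))
```

An orchestrator would call this before every spawn round and pass the clamped count into the sub-agent prompt.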

### Recursive Agent Prompt Template


Include these instructions when spawning agents that may recurse:

```
You are analyzing [SCOPE]. You may spawn up to [N] sub-agents if needed.

RECURSION RULES:
- Current depth: [D]. Max depth: 2.
- If depth=2, you are a leaf. Do NOT spawn agents.
- Only recurse if your scope has 50+ files or distinct sub-problems.
- Each sub-agent writes to /tmp/rlm_d[D+1]_[ID].json
- After sub-agents complete, merge their results into your output file.
```

### Output Routing


| Depth | Output Path | Merged By |
|---|---|---|
| 2 (leaf) | `/tmp/rlm_d2_*.json` | Depth-1 parent |
| 1 (worker) | `/tmp/rlm_d1_*.json` | Main orchestrator |
| 0 (main) | `/tmp/rlm_report.json` | Main context |

## Guardrails


### Limits


| Metric | Limit |
|---|---|
| Max concurrent agents (any level) | 5 |
| Max total agents (all levels) | 15 |
| Max recursion depth | 2 |
| Max files per leaf agent | 20 |
| Timeout per agent | 120s |
| Max spawn rounds (main orchestrator) | 3 |

### Iteration Control


- Each round of agent spawning should have a clear purpose
- If 2 rounds produce no new information, stop
- Never "try again" — refine the query or change strategy
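The stop rule amounts to: track what each round adds, and halt after two rounds that add nothing. A sketch where `run_round` is a hypothetical callback standing in for one spawn-and-collect round:

```python
def iterate(run_round, max_rounds=3):
    """Run spawn rounds until 2 consecutive rounds add nothing new."""
    seen, stale = set(), 0
    for _ in range(max_rounds):
        new = set(run_round()) - seen
        if not new:
            stale += 1
            if stale == 2:  # two empty rounds: stop, change strategy instead
                break
        else:
            stale = 0
            seen |= new
    return seen
```

The default `max_rounds=3` mirrors the spawn-round limit in the table above.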

### Token Protection


- Agents write to `/tmp/rlm_*`, not to main context
- Main context reads only summaries and aggregated JSON
- Never `cat` agent output files raw into main context

## Constraints


### Never


- `cat *` or load entire codebases into context
- Spawn agents without running Index (`stats`) first
- Skip Filter stage for 50+ file codebases
- Exceed depth or agent limits
- Load rlm.py output raw into main context for large results

### Always


- Use `rlm.py stats` before choosing strategy
- Filter with `rlm.py grep` or `rg` before spawning agents
- Write agent outputs to `/tmp/rlm_*`
- Include recursion depth and limits in recursive agent prompts
- Clean up `/tmp/rlm_*` after delivering results
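The cleanup step in the last bullet needs no more than a glob-and-remove pass. A minimal sketch that deletes only the skill's own scratch files:

```python
import glob
import os

def cleanup(pattern="/tmp/rlm_*"):
    """Delete RLM scratch files after results are delivered."""
    removed = []
    for path in glob.glob(pattern):
        if os.path.isfile(path):  # never recurse into directories
            os.remove(path)
            removed.append(path)
    return removed
```

Run it only after the final report has been read, since it removes the Reduce stage's inputs.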

## Fallback (Without rlm.py)


If rlm.py is unavailable, use native Claude Code tools:

| rlm.py command | Native equivalent |
|---|---|
| stats | `find . -type f \| wc -l` + `tree -L 2 -I 'node_modules\|.git'` |
| grep | Grep tool or `rg -l "pattern" --type py` |
| peek | Grep tool with `-C` context |
| read | Read tool with offset/limit |
| chunk | Glob + manual partitioning |

The pipeline and strategy selection remain the same. Only the tooling changes.

## Integration


- rlm.py for Index/Filter stages
- Explore agents for fan-out investigation
- general-purpose agents for partition+map and recursive decomposition
- rg/grep as rlm.py fallback for Filter
- cli-jq for Reduce stage (merge and filter results)

## Quick Reference


See quick-reference.md for decision tree and command patterns.
## Credits


Based on the RLM paradigm (arXiv:2512.24601). Original skill by BowTiedSwan.