# RLM CLI

Recursive Language Models (RLM) CLI - enables LLMs to handle near-infinite context by recursively decomposing inputs and calling themselves over parts. Supports files, directories, URLs, and stdin.
## Installation

```bash
pip install rlm-cli    # or: pipx install rlm-cli
uvx rlm-cli ask ...    # run without installing
```

Set an API key for your backend (openrouter is the default):

```bash
export OPENROUTER_API_KEY=...   # default backend
export OPENAI_API_KEY=...       # for --backend openai
export ANTHROPIC_API_KEY=...    # for --backend anthropic
```

## Commands
### ask - Query with context
```bash
rlm ask <inputs> -q "question"
```

Inputs (combinable):

| Type | Example | Notes |
|---|---|---|
| Directory | `rlm ask .` | Recursive, respects .gitignore |
| File | `rlm ask src/db.py` | Single file |
| URL | | Auto-converts to markdown |
| stdin | `git diff \| rlm ask -` | |
| Literal | | Treat as raw text |
| Multiple | `rlm ask old.py new.py` | Combine any types |

Options:

| Flag | Description |
|---|---|
| `-q` | Question/prompt (required) |
| `--backend` | Provider: openrouter (default), openai, anthropic |
| | Model override (format: `provider/model`) |
| `--json` | Machine-readable output |
| `--output-format` | Output format (e.g. `json-tree`) |
| `--summary` | Show execution summary with depth statistics |
| `--extensions` | Filter by extension |
| | Glob patterns |
| `--max-iterations` | Limit REPL iterations (default: 30) |
| `--max-depth` | Recursive RLM depth (default: 1 = no recursion) |
| `--max-budget` | Spending limit in USD (requires OpenRouter) |
| `--max-timeout` | Time limit in seconds |
| `--max-tokens` | Total token limit (input + output) |
| `--max-errors` | Consecutive error limit before stopping |
| | Skip auto-indexing |
| `--exa` | Enable Exa web search (requires `EXA_API_KEY`) |
| `--inject-file` | Execute Python code between iterations |
JSON output structure:

```json
{"ok": true, "exit_code": 0, "result": {"response": "..."}, "stats": {...}}
```

JSON-tree output (`--output-format=json-tree`):

Adds an execution tree showing nested RLM calls:

```json
{
  "result": {
    "response": "...",
    "tree": {
      "depth": 0,
      "model": "openai/gpt-4",
      "duration": 2.3,
      "cost": 0.05,
      "iterations": [...],
      "children": [...]
    }
  }
}
```

Summary output (`--summary`):

Shows depth-wise statistics after completion:

- JSON mode: adds a `summary` field to `stats`
- Text mode: prints summary to stderr

```
=== RLM Execution Summary ===
Total depth: 2 | Nodes: 3 | Cost: $0.0054 | Duration: 17.38s
  Depth 0: 1 call(s) ($0.0047, 13.94s)
  Depth 1: 2 call(s) ($0.0007, 3.44s)
```
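When scripting against `--json` or `--output-format=json-tree`, the shapes above can be consumed directly. A minimal sketch (the payload below is illustrative sample data, not real rlm output) that reads the response and sums `cost` over the execution tree:

```python
import json

def total_cost(node):
    """Recursively sum `cost` over a json-tree node and its children."""
    cost = node.get("cost", 0.0)
    for child in node.get("children", []):
        cost += total_cost(child)
    return cost

# Illustrative payload in the documented json-tree shape
payload = json.loads("""
{"ok": true, "exit_code": 0,
 "result": {"response": "...",
            "tree": {"depth": 0, "model": "openai/gpt-4", "duration": 2.3,
                     "cost": 0.05, "iterations": [],
                     "children": [{"depth": 1, "cost": 0.01, "children": []}]}},
 "stats": {}}
""")

if payload["ok"]:
    tree = payload["result"]["tree"]
    print(f"response: {payload['result']['response']}")
    print(f"total cost: ${total_cost(tree):.2f}")   # 0.05 + 0.01
```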
### complete - Query without context
```bash
rlm complete "prompt text"
rlm complete "Generate SQL" --json --backend openai
```

### search - Search indexed files
```bash
rlm search "query" [options]
```

| Flag | Description |
|---|---|
| | Max results (default: 20) |
| | Filter by language |
| `--paths-only` | Output file paths only |
| `--json` | JSON output |

Auto-indexes on first use. Manual index: `rlm index .`
### index - Build search index
```bash
rlm index .             # Index current dir
rlm index ./src --force # Force full reindex
```

### doctor - Check setup
```bash
rlm doctor        # Check config, API keys, deps
rlm doctor --json
```

## Workflows
Git diff review:

```bash
git diff | rlm ask - -q "Review for bugs"
git diff --cached | rlm ask - -q "Ready to commit?"
git diff HEAD~3 | rlm ask - -q "Summarize changes"
```

Codebase analysis:

```bash
rlm ask . -q "Explain architecture"
rlm ask src/ -q "How does auth work?" --extensions .py
```

Search + analyze:

```bash
rlm search "database" --paths-only
rlm ask src/db.py -q "How is connection pooling done?"
```

Compare files:

```bash
rlm ask old.py new.py -q "What changed?"
```
## Configuration
Precedence: CLI flags > env vars > config file > defaults

Config locations: `./rlm.yaml`, `./.rlm.yaml`, `~/.config/rlm/config.yaml`

```yaml
backend: openrouter
model: google/gemini-3-flash-preview
max_iterations: 30
```

Environment variables:

- `RLM_BACKEND` - Default backend
- `RLM_MODEL` - Default model
- `RLM_CONFIG` - Config file path
- `RLM_JSON=1` - Always output JSON
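The precedence chain above can be sketched as a small resolver. This is a hypothetical helper for illustration, not part of rlm-cli; it assumes env vars follow the documented `RLM_<KEY>` naming:

```python
import os

def resolve(key, cli_args, config, default):
    """Resolve one setting: CLI flag > env var (RLM_<KEY>) > config file > default."""
    if cli_args.get(key) is not None:
        return cli_args[key]
    env = os.environ.get(f"RLM_{key.upper()}")
    if env is not None:
        return env
    if key in config:
        return config[key]
    return default

# --backend not passed, RLM_BACKEND unset, config file sets it -> config wins
print(resolve("backend", cli_args={}, config={"backend": "openrouter"}, default="openai"))
```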
## Recursion and Budget Limits
### Recursive RLM (--max-depth)

Enable recursive calls, where child RLMs process sub-tasks via `llm_query()`:

```bash
# 2 levels of recursion
rlm ask . -q "Research thoroughly" --max-depth 2

# With budget cap
rlm ask . -q "Analyze codebase" --max-depth 3 --max-budget 0.50
```
### Budget Control (--max-budget)

Limit spending per completion. Raises `BudgetExceededError` when exceeded:

```bash
# Cap at $1.00
rlm complete "Complex task" --max-budget 1.00

# Very low budget (will likely exceed)
rlm ask . -q "Analyze everything" --max-budget 0.001
```

**Requirements:** OpenRouter backend (returns cost data in responses).
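The budget mechanism can be pictured as a running-cost check that fails fast once the cap is crossed. A toy sketch, not rlm's internals; `BudgetExceededError` is re-declared locally for illustration:

```python
class BudgetExceededError(Exception):
    pass

class BudgetTracker:
    """Accumulate per-call costs and raise once the cap is crossed."""
    def __init__(self, max_budget):
        self.max_budget = max_budget
        self.spent = 0.0

    def record(self, cost):
        self.spent += cost
        if self.spent > self.max_budget:
            raise BudgetExceededError(
                f"spent ${self.spent:.4f} > cap ${self.max_budget:.2f}")

tracker = BudgetTracker(max_budget=0.50)
tracker.record(0.30)       # fine, under budget
try:
    tracker.record(0.30)   # total 0.60 > 0.50
except BudgetExceededError as e:
    print(e)
```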
### Other Limits
Timeout (`--max-timeout`) - Stop after N seconds:

```bash
rlm complete "Complex task" --max-timeout 30
```

Token limit (`--max-tokens`) - Stop after N total tokens:

```bash
rlm ask . -q "Analyze" --max-tokens 10000
```

Error threshold (`--max-errors`) - Stop after N consecutive code errors:

```bash
rlm complete "Write code" --max-errors 3
```

## Stop Conditions
RLM execution stops when any of these occur:

1. **Final answer** - LLM calls `FINAL_VAR("variable_name")` with the NAME of a variable (as a string)
2. **Max iterations** - Exceeds `--max-iterations` (exit code 0, graceful - forces final answer)
3. **Max budget exceeded** - Spending > `--max-budget` (exit code 20, error)
4. **Max timeout exceeded** - Time > `--max-timeout` (exit code 20, error with partial answer)
5. **Max tokens exceeded** - Tokens > `--max-tokens` (exit code 20, error with partial answer)
6. **Max errors exceeded** - Consecutive errors > `--max-errors` (exit code 20, error with partial answer)
7. **User cancellation** - Ctrl+C or SIGUSR1 (exit code 0, returns partial answer as success)
8. **Max depth reached** - Child RLM at depth 0 cannot recurse further

FINAL_VAR usage (common mistake - pass the variable NAME, not the value):

```python
# CORRECT:
result = {"answer": "hello", "score": 42}
FINAL_VAR("result")   # pass the variable NAME as a string

# WRONG:
FINAL_VAR(result)     # passing the dict directly causes AttributeError
```

**Note on max iterations:** This is a soft limit. When exceeded, RLM prompts the LLM one more time to provide a final answer. Modern LLMs typically complete in 1-2 iterations.

**Partial answers:** When timeout, tokens, or errors stop execution, the error includes `partial_answer` if any response was generated before stopping.

**Early exit (Ctrl+C):** Pressing Ctrl+C (or sending SIGUSR1) returns the partial answer as success (exit code 0) with `early_exit: true` in the result.
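The name-vs-value distinction can be made concrete with a toy stand-in for `FINAL_VAR`. This is an illustration of the failure mode only, not rlm's actual implementation; the assumption is that the real lookup treats its argument as a string (so string methods are called on it), which is why a dict triggers `AttributeError`:

```python
def final_var(name, namespace):
    """Toy stand-in: look the variable NAME up in the REPL namespace."""
    # A string supports .strip(); a dict does not, hence the AttributeError
    # when the value is passed instead of the name.
    return namespace[name.strip()]

result = {"answer": "hello", "score": 42}

print(final_var("result", {"result": result}))   # correct: name as string

try:
    final_var(result, {"result": result})        # wrong: dict has no .strip()
except AttributeError as e:
    print(f"AttributeError: {e}")
```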
## Inject File (--inject-file)
Update REPL variables mid-run by modifying an inject file:

```bash
# Create inject file
echo 'focus = "authentication"' > inject.py

# Run with inject file
rlm ask . -q "Analyze based on 'focus'" --inject-file inject.py

# In another terminal, update mid-run
echo 'focus = "authorization"' > inject.py
```

The file is checked before each iteration and executed if modified.
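The check-before-each-iteration behavior can be sketched with an mtime comparison. This is an illustrative sketch of the pattern, not rlm's implementation:

```python
import os
import tempfile

def maybe_inject(path, last_mtime, namespace):
    """Exec the inject file into `namespace` if it changed since the last check."""
    try:
        mtime = os.path.getmtime(path)
    except OSError:
        return last_mtime          # no inject file; nothing to do
    if last_mtime is None or mtime > last_mtime:
        with open(path) as f:
            exec(f.read(), namespace)
    return mtime

# Demo with a temporary inject file
with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
    f.write('focus = "authentication"')
    path = f.name

ns = {}
mtime = maybe_inject(path, None, ns)
print(ns["focus"])   # authentication
os.unlink(path)
```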
## Exit Codes
| Code | Meaning |
|---|---|
| 0 | Success |
| 2 | CLI usage error |
| 10 | Input error (file not found) |
| 11 | Config error (missing API key) |
| 20 | Backend/API error (includes budget exceeded) |
| 30 | Runtime error |
| 40 | Index/search error |
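When wrapping rlm in scripts, the table above maps cleanly to a small dispatch helper. A hypothetical sketch using the documented codes (e.g. on `subprocess.run([...]).returncode`):

```python
EXIT_CODES = {
    0: "success",
    2: "CLI usage error",
    10: "input error (file not found)",
    11: "config error (missing API key)",
    20: "backend/API error (includes budget exceeded)",
    30: "runtime error",
    40: "index/search error",
}

def describe_exit(code):
    """Map an rlm exit code to its documented meaning."""
    return EXIT_CODES.get(code, f"unknown exit code {code}")

# e.g. after: proc = subprocess.run(["rlm", "ask", ".", "-q", "..."])
print(describe_exit(20))
```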
## LLM Search Tools
When `rlm ask` runs on a directory, the LLM gets search tools:

| Tool | Cost | Privacy | Use For |
|---|---|---|---|
| `rg.search` | Free | Local | Exact patterns, function names, imports |
| `tv.search` | Free | Local | Topics, concepts, related files |
| `exa` / `web` | $ | API | Web search (requires `EXA_API_KEY`) |
| `pi.*` | $$$ | API | Hierarchical PDF/document navigation |
### Free Local Tools (auto-loaded)

- `rg.search(pattern, paths, globs)` - ripgrep for exact patterns
- `tv.search(query, limit)` - Tantivy BM25 for concepts
### Exa Web Search (--exa flag, costs money)

⚠️ **Opt-in:** Requires the `--exa` flag and the `EXA_API_KEY` environment variable.

Setup:

```bash
export EXA_API_KEY=...   # Get from https://exa.ai
```

Usage in REPL:

```python
from rlm_cli.tools_search import exa, web

# Basic search
results = exa.search(query="Python async patterns", limit=5)
for r in results:
    print(f"{r['title']}: {r['url']}")

# With highlights (relevant excerpts)
results = exa.search(
    query="error handling best practices",
    limit=3,
    include_highlights=True
)

# Semantic alias
results = web(query="machine learning tutorial", limit=5)

# Find similar pages
results = exa.find_similar(url="https://example.com/article", limit=5)
```
**exa.search() parameters:**

| Param | Default | Description |
|-------|---------|-------------|
| `query` | required | Search query |
| `limit` | 10 | Max results |
| `search_type` | "auto" | "auto", "neural", or "keyword" |
| `include_domains` | None | Only these domains |
| `exclude_domains` | None | Exclude these domains |
| `include_text` | False | Include full page text |
| `include_highlights` | True | Include relevant excerpts |
| `category` | None | "company", "research paper", "news", etc. |

**When to use exa.search() / web():**
- Finding external documentation, tutorials, articles
- Researching topics beyond the local codebase
- Finding similar pages to a reference URL
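If you need stricter filtering than `include_domains` provides, results can also be post-filtered locally. A sketch over the documented result fields (`title`, `url`); the result list below is illustrative data, not real Exa output:

```python
from urllib.parse import urlparse

def filter_domains(results, allowed):
    """Keep results whose URL host is in (or a subdomain of) an allowed domain."""
    kept = []
    for r in results:
        host = urlparse(r["url"]).netloc
        if any(host == d or host.endswith("." + d) for d in allowed):
            kept.append(r)
    return kept

# Illustrative results in the documented shape
results = [
    {"title": "Async docs", "url": "https://docs.python.org/3/library/asyncio.html"},
    {"title": "Blog post", "url": "https://example.com/async"},
]
for r in filter_domains(results, allowed=["python.org"]):
    print(f"{r['title']}: {r['url']}")
```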
### PageIndex (pi.* - Opt-in, Costs Money)
⚠️ **WARNING:** PageIndex sends document content to LLM APIs and costs money.

Only use when:
- User explicitly requests document/PDF analysis
- Document has hierarchical structure (reports, manuals)
- User accepts cost/privacy tradeoffs

Prerequisites:
- `OPENROUTER_API_KEY` (or other backend key) must be set in the environment
- PageIndex submodule must be initialized
- Run within rlm-cli's virtual environment (has required dependencies)

**Setup (REQUIRED before any pi.* operation):**

```python
import sys
sys.path.insert(0, "/path/to/rlm-cli/rlm")        # rlm submodule
sys.path.insert(0, "/path/to/rlm-cli/pageindex")  # pageindex submodule
from rlm.clients import get_client
from rlm_cli.tools_pageindex import pi

# Configure with existing rlm backend
client = get_client(backend="openrouter", backend_kwargs={"model_name": "google/gemini-2.0-flash-001"})
pi.configure(client)
```

**Indexing (costs $$$):**

```python
# Build tree index - THIS COSTS MONEY (no caching, re-indexes each call)
tree = pi.index(path="report.pdf")
# Returns: PITree object with doc_name, nodes, doc_description, raw
```
**Viewing structure (free after indexing):**

```python
# Display table of contents
print(pi.toc(tree))

# Get section by node_id (IDs are "0000", "0001", "0002", etc.)
section = pi.get_section(tree, "0003")
# Returns: PINode with title, node_id, start_index, end_index, summary, children
# Returns: None if not found
if section:
    print(f"{section.title}: pages {section.start_index}-{section.end_index}")
```
**Finding node IDs:**

Node IDs are assigned sequentially ("0000", "0001", ...) in tree traversal order.
To see all node IDs, access the raw tree structure:

```python
import json
print(json.dumps(tree.raw["structure"], indent=2))
# Each node has: title, node_id, start_index, end_index
```
**pi.* API Reference:**
| Method | Cost | Returns | Description |
|--------|------|---------|-------------|
| `pi.configure(client)` | Free | None | Set rlm backend (REQUIRED first) |
| `pi.status()` | Free | dict | Check availability, config, warning |
| `pi.index(path=str)` | $$$ | PITree | Build tree from PDF |
| `pi.toc(tree, max_depth=3)` | Free | str | Formatted table of contents |
| `pi.get_section(tree, node_id)` | Free | PINode or None | Get section by ID |
| `pi.available()` | Free | bool | Check if PageIndex installed |
| `pi.configured()` | Free | bool | Check if client configured |
**PITree attributes:** `doc_name`, `nodes` (list of PINode), `doc_description`, `raw` (dict)
**PINode attributes:** `title`, `node_id`, `start_index`, `end_index`, `summary` (may be None), `children` (may be None)
**Notes:**
- `summary` is only populated if `add_summaries=True` in `pi.index()`
- `children` is None for leaf nodes (sections with no subsections)
- `tree.raw["structure"]` is a flat list; hierarchy is in PINode.children
- PageIndex extracts document structure (TOC), not content. Use page numbers to locate sections in the original PDF.
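Searching the flat `tree.raw["structure"]` list by title can be sketched as a linear scan. The structure list below is illustrative sample data mirroring the documented node fields, not real PageIndex output:

```python
def find_node(structure, title):
    """Scan the flat structure list for a node with a matching title."""
    for node in structure:
        if node["title"] == title:
            return node
    return None

# Illustrative flat structure, as documented: title, node_id, start_index, end_index
structure = [
    {"title": "Executive Summary", "node_id": "0000", "start_index": 1, "end_index": 5},
    {"title": "Financial Overview", "node_id": "0001", "start_index": 6, "end_index": 20},
]

node = find_node(structure, "Financial Overview")
if node:
    print(f"{node['node_id']}: pages {node['start_index']}-{node['end_index']}")
```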
**Example output from pi.toc():**

```
📄 annual_report.pdf
• Executive Summary (p.1-5)
• Financial Overview (p.6-20)
  • Revenue (p.6-10)
  • Expenses (p.11-15)
  • Projections (p.16-20)
• Risk Factors (p.21-35)
```