# Tavily CLI
Search, extract, map, crawl, and research from the terminal with Tavily.
Run `tavily --help` or `tavily <command> --help` for full option details.

## Prerequisites
Must be installed and authenticated.
Check:

```bash
tavily status
```

If not authenticated:

```bash
tavily login
```

If not installed/configured, follow rules/install.md.
For output handling and untrusted content guidelines, follow rules/security.md.
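For scripting, these checks can be wrapped in a small guard. A minimal sketch — `require_tavily` is our helper name, not a CLI feature, and the assumption that `tavily status` exits nonzero when unauthenticated should be verified locally:

```shell
# Hypothetical helper (ours, not part of the CLI): fail fast before a scripted run.
require_tavily() {
  # Bail out when the CLI binary is absent.
  command -v tavily >/dev/null 2>&1 || {
    echo "tavily not installed; see rules/install.md" >&2
    return 1
  }
  # Assumption: `tavily status` exits nonzero when unauthenticated.
  tavily status >/dev/null 2>&1 || {
    echo "not authenticated; run: tavily login" >&2
    return 1
  }
}
```

Call `require_tavily || exit 1` at the top of any script that depends on the CLI.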
## Workflow
Use this escalation pattern:
- Search - no exact URL yet, need discovery.
- Extract - already have URL(s), need content.
- Map - need URL discovery within one site before extracting.
- Crawl - need bulk content from a site section.
- Research - need synthesized output and citations from Tavily research API.
- Research Status - track/poll an existing research request.
| Need | Command | When |
|---|---|---|
| Discover sources on a topic | `tavily search` | No trusted URL yet |
| Get page content from URL(s) | `tavily extract` | URL(s) already known |
| Discover pages inside a domain | `tavily map` | Need URL inventory/filtering |
| Bulk extract many pages | `tavily crawl` | Need multi-page dataset |
| Run guided long-form research | `tavily research` | Need synthesis, not raw pages |
| Poll existing research job | `tavily research-status` | Have `requestId` |

What this CLI does not provide today:
- No dedicated `browser` automation command.
- No dedicated `download` command.
- No dedicated `agent` command (use `research` instead).
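The escalation pattern can be scripted: search first, save the JSON, then extract only the top hit. A hedged sketch — `top_result_url` is a local helper (not a CLI feature), and the `.results[0].url` path follows the jq recipes in Working with Results:

```shell
# Local helper (ours, not part of the CLI): pull the first result URL
# out of a saved `tavily search --json` output file.
top_result_url() {
  jq -r '.results[0].url // empty' "$1"
}

# Usage sketch (uncomment the tavily calls to run for real):
# tavily search "tavily cli docs" --max-results 3 --json -o .tavily/search-escalate.json
# url="$(top_result_url .tavily/search-escalate.json)"
# [ -n "$url" ] && tavily extract "$url" -o .tavily/extract-top.md
```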
## Output and Organization
Unless user requests inline output, write to `.tavily/`:

```bash
mkdir -p .tavily
tavily search "latest AI agent frameworks" --json -o .tavily/search-agent-frameworks.json
tavily extract "https://docs.tavily.com" -o .tavily/docs-tavily.md
```

Keep `.tavily/` in `.gitignore`.

Always quote URLs because `?` and `&` are shell-sensitive:

```bash
tavily extract "https://example.com/docs?page=2&lang=en" -o .tavily/page2.md
```

Read large outputs incrementally:

```bash
wc -l .tavily/result.json
head -n 60 .tavily/result.json
rg -n "keyword|error|failed" .tavily/result.json
```

Suggested naming:

```bash
.tavily/search-{topic}.json
.tavily/map-{site}.json
.tavily/crawl-{site}.json
.tavily/extract-{site}-{slug}.md
.tavily/research-{topic}.json
```
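The `{slug}` part of these names can be derived mechanically from a URL. A small sketch using the same sed transform as the bulk workflow later in this document (`url_slug` is our name, not a CLI command):

```shell
# Local helper: turn a URL into a filesystem-safe slug for .tavily/ filenames.
url_slug() {
  printf '%s\n' "$1" | sed 's#^https://##; s#^http://##; s#[^a-zA-Z0-9._-]#-#g'
}
```

For example, `url_slug "https://docs.example.com/api"` yields `docs.example.com-api`.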
## Commands
### search

Discover results for a query. Useful first step before extraction/crawl.

```bash
# Basic JSON search output
tavily search "best vector databases 2026" --json --pretty -o .tavily/search-vector-db.json

# News-focused search in recent time window
tavily search "AI regulation US" --topic news --time-range month --max-results 8 --json -o .tavily/search-ai-regulation.json

# Domain-constrained search with answer/raw content
tavily search "authentication pattern" \
  --include-domains "docs.tavily.com,developer.mozilla.org" \
  --include-answer \
  --include-raw-content \
  --json -o .tavily/search-auth.json
```

Key options: `--max-results`, `--search-depth`, `--topic`, `--time-range`, `--start-date`, `--end-date`, `--include-domains`, `--exclude-domains`, `--country`, `--include-raw-content`, `--include-images`, `--include-answer`, `-o`, `--json`, `--pretty`.

### extract
Extract content from one or many known URLs.

```bash
# Single URL
tavily extract "https://docs.tavily.com" --format markdown -o .tavily/extract-docs.md

# Multiple URLs in one call
tavily extract "https://docs.tavily.com" "https://docs.tavily.com/api-reference" \
  --extract-depth advanced \
  --include-images \
  --json -o .tavily/extract-docs-multi.json

# Add relevance query and chunk control
tavily extract "https://example.com/long-page" \
  --query "pricing limits and rate limits" \
  --chunks-per-source 5 \
  --timeout 30 \
  --json -o .tavily/extract-pricing.json
```

Key options: `-u, --url`, positional `[urls...]`, `--extract-depth`, `--format`, `--include-images`, `--timeout`, `--query`, `--chunks-per-source`, `-o`, `--json`, `--pretty`.

### map
Discover URLs on a site before extraction/crawl.

```bash
# Full map with limits
tavily map "https://docs.tavily.com" --max-depth 2 --limit 200 --json -o .tavily/map-docs.json

# Filter by paths/domains
tavily map "https://example.com" \
  --select-paths "/docs,/api" \
  --exclude-paths "/blog,/changelog" \
  --json -o .tavily/map-filtered.json
```

Key options: `--max-depth`, `--max-breadth`, `--limit`, `--select-paths`, `--select-domains`, `--exclude-paths`, `--exclude-domains`, `--allow-external`, `--instructions`, `--timeout`, `-o`, `--json`, `--pretty`.

### crawl
Bulk extract many pages from a site section.

```bash
# Crawl docs section only
tavily crawl "https://docs.tavily.com" \
  --select-paths "/docs,/api-reference" \
  --max-depth 2 \
  --limit 80 \
  --json -o .tavily/crawl-docs.json

# Crawl with advanced extract options
tavily crawl "https://example.com" \
  --extract-depth advanced \
  --include-images \
  --chunks-per-source 4 \
  --timeout 45 \
  --json -o .tavily/crawl-example.json
```

Key options: `--max-depth`, `--max-breadth`, `--limit`, `--extract-depth`, `--select-paths`, `--exclude-paths`, `--allow-external`, `--include-images`, `--instructions`, `--format`, `--timeout`, `--chunks-per-source`, `-o`, `--json`, `--pretty`.

### research
Start Tavily research generation.

```bash
# Basic research request
tavily research "Compare RAG chunking strategies for legal documents" \
  --model pro \
  --json -o .tavily/research-rag.json

# Stream output while running
tavily research "Summarize latest agent memory patterns" \
  --stream \
  -o .tavily/research-agent-memory.txt

# Structured output schema
tavily research "List top observability tools" \
  --output-schema '{"type":"object","properties":{"tools":{"type":"array","items":{"type":"string"}}}}' \
  --json -o .tavily/research-tools-structured.json
```

Key options: `--model`, `--citation-format`, `--timeout`, `--stream`, `--output-schema`, `-o`, `--json`, `--pretty`.

### research-status
Poll an existing research request by id.

```bash
tavily research-status "req_123456789" --json --pretty -o .tavily/research-status.json
```

Key options: `-o`, `--json`, `--pretty`.
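For unattended runs, the status can be polled in a loop. A hedged sketch — `poll_research` is a local helper, and the `.status` field with a `completed` value is an assumption based on the jq recipes in Working with Results; confirm against your actual output:

```shell
# Local helper (ours, not part of the CLI): poll a research request
# until it completes or we give up after 30 attempts.
poll_research() {
  id="$1"
  tries=0
  while [ "$tries" -lt 30 ]; do
    # Assumed field name `.status` and terminal value "completed".
    status="$(tavily research-status "$id" --json | jq -r '.status // empty')"
    [ "$status" = "completed" ] && return 0
    tries=$((tries + 1))
    sleep 10
  done
  return 1
}
```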
### auth and setup commands
```bash
# Authenticate interactively (never pass secrets as CLI args)
tavily login

# Check auth/version status
tavily status

# Remove saved credentials
tavily logout

# Pull key into local .env
tavily env --file .env --overwrite

# Install integrations independently
tavily setup skills
tavily setup mcp
```

## Working with Results
Use `jq`, `rg`, and targeted reads for analysis:

```bash
# Search: list title + URL
jq -r '.results[] | "\(.title)\t\(.url)"' .tavily/search-agent-frameworks.json

# Map: list URLs
jq -r '.results[]' .tavily/map-docs.json

# Crawl: collect crawled URLs
jq -r '.results[]?.url // empty' .tavily/crawl-docs.json

# Extract: show failures only
jq -r '.failedResults[]? | "\(.url)\t\(.error)"' .tavily/extract-docs-multi.json

# Research: inspect status and request id
jq -r '.requestId, .status' .tavily/research-rag.json
```
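A quick sanity check before deeper analysis is to count successes and failures in a saved output file. `tavily_counts` is a local helper; the field names follow the recipes above:

```shell
# Local helper: summarize result/failure counts in a saved extract/crawl JSON file.
tavily_counts() {
  jq -r '"ok=\(.results | length) failed=\(.failedResults // [] | length)"' "$1"
}
```

For example, a file with two results and one failure prints `ok=2 failed=1`.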
## Parallelization
Prefer these patterns:
- Use one `extract` call with multiple URLs when task scope is one batch.
- Use shell parallelism only for independent targets and moderate concurrency.

```bash
# Built-in multi URL batch in one request
tavily extract "https://site-a.com" "https://site-b.com" "https://site-c.com" \
  --json -o .tavily/extract-batch.json

# Independent parallel jobs (throttle conservatively)
tavily crawl "https://docs.site-a.com" --limit 40 --json -o .tavily/crawl-a.json &
tavily crawl "https://docs.site-b.com" --limit 40 --json -o .tavily/crawl-b.json &
wait
```

Do not launch large parallel bursts blindly. Respect API quota/rate limits and monitor failures before scaling up.

## Bulk Workflows (Tavily Equivalent)
There is no dedicated `download` command. Use map/crawl/extract pipelines:

```bash
# 1) Map docs URLs
tavily map "https://docs.example.com" --select-paths "/docs" --json -o .tavily/map-docs.json

# 2) Extract mapped URLs in controlled batches
jq -r '.results[]' .tavily/map-docs.json | head -n 30 > .tavily/map-docs-30.txt
while IFS= read -r url; do
  slug="$(echo "$url" | sed 's#^https://##; s#^http://##; s#[^a-zA-Z0-9._-]#-#g')"
  tavily extract "$url" --json -o ".tavily/extract-${slug}.json"
done < .tavily/map-docs-30.txt
```

Alternative bulk path:

```bash
tavily crawl "https://docs.example.com" --select-paths "/docs" --max-depth 2 --limit 100 --json -o .tavily/crawl-docs.json
```

## Failure Handling
- If command fails with auth error: run `tavily status`, then `tavily login`.
- If URL extraction fails: inspect `.failedResults` from JSON output and retry only failed URLs.
- If output is too large: reduce `--limit`/depth and split into multiple focused runs.
- If parsing JSON outputs: ensure `--json` was used or output file uses `.json` extension.
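The `.failedResults` retry advice can be scripted. A hedged sketch — the sample JSON here is only illustrative of the field layout (taken from the jq recipes in Working with Results), not real CLI output:

```shell
mkdir -p .tavily

# Illustrative stand-in for a real `tavily extract ... --json` output file.
cat > .tavily/extract-batch.json <<'EOF'
{"results":[{"url":"https://site-a.com"}],"failedResults":[{"url":"https://site-b.com","error":"timeout"}]}
EOF

# Collect only the failed URLs for a focused retry.
jq -r '.failedResults[]?.url // empty' .tavily/extract-batch.json > .tavily/retry-urls.txt

# Retry each failed URL individually (uncomment the tavily call to run for real).
while IFS= read -r url; do
  echo "retrying: $url"
  # tavily extract "$url" --json -o .tavily/retry-extract.json
done < .tavily/retry-urls.txt
```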