# Firecrawl CLI
Web scraping, search, and browser automation CLI. Returns clean markdown optimized for LLM context windows.
Run `firecrawl --help` or `firecrawl <command> --help` for full option details.

## Prerequisites
Must be installed and authenticated. Check with `firecrawl --status`.

```
🔥 firecrawl cli v1.8.0
● Authenticated via FIRECRAWL_API_KEY
Concurrency: 0/100 jobs (parallel scrape limit)
Credits: 500,000 remaining
```

- Concurrency: Max parallel jobs. Run parallel operations up to this limit.
- Credits: Remaining API credits. Each scrape/crawl consumes credits.

If not ready, see rules/install.md. For output handling guidelines, see rules/security.md.
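In a script, readiness can be checked by matching the status output before doing any work. A minimal sketch that runs against a saved copy of the line above rather than a live `firecrawl --status` call:

```shell
# Saved status line from the example output above
# (in a real script: status=$(firecrawl --status)).
status="● Authenticated via FIRECRAWL_API_KEY"
case "$status" in
  *Authenticated*) echo "ready" ;;
  *)               echo "not authenticated - see rules/install.md" >&2; exit 1 ;;
esac
```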
```bash
firecrawl search "query" --scrape --limit 3
```

## Workflow
Follow this escalation pattern:

- Search - No specific URL yet. Find pages, answer questions, discover sources.
- Scrape - Have a URL. Extract its content directly.
- Map + Scrape - Large site or need a specific subpage. Use `map --search` to find the right URL, then scrape it.
- Crawl - Need bulk content from an entire site section (e.g., all /docs/).
- Browser - Scrape failed because content is behind interaction (pagination, modals, form submissions, multi-step navigation).
| Need | Command | When |
|---|---|---|
| Find pages on a topic | `search` | No specific URL yet |
| Get a page's content | `scrape` | Have a URL, page is static or JS-rendered |
| Find URLs within a site | `map` | Need to locate a specific subpage |
| Bulk extract a site section | `crawl` | Need many pages (e.g., all /docs/) |
| AI-powered data extraction | `agent` | Need structured data from complex sites |
| Interact with a page | `browser` | Content requires clicks, form fills, pagination, or login |
See also: `download` -- a convenience command that combines `map` + `scrape` to save an entire site to local files.

Scrape vs browser:
- Use `scrape` first. It handles static pages and JS-rendered SPAs.
- Use `browser` when you need to interact with a page, such as clicking buttons, filling out forms, navigating through a complex site, or handling infinite scroll, or when `scrape` fails to grab all the content you need.
- Never use `browser` for web searches - use `search` instead.
Avoid redundant fetches:

- `search --scrape` already fetches full page content. Don't re-scrape those URLs.
- Check `.firecrawl/` for existing data before fetching again.
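The existing-data check can be a one-liner. A sketch, with a hypothetical output path:

```shell
# Skip the fetch when the output file already exists and is non-empty.
mkdir -p .firecrawl
out=".firecrawl/docs.example.com-auth.md"   # hypothetical name
if [ -s "$out" ]; then
  echo "cached: $out"          # reuse what is on disk
else
  echo "fetch needed: $out"    # here you would run: firecrawl scrape "<url>" -o "$out"
fi
```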
Example: fetching API docs from a large site

```
search "site:docs.example.com authentication API" → found the docs domain
map https://docs.example.com --search "auth" → found /docs/api/authentication
scrape https://docs.example.com/docs/api/auth... → got the content
```

Example: data behind pagination

```
scrape https://example.com/products → only shows first 10 items, no next-page links
browser "open https://example.com/products" → open in browser
browser "snapshot -i" → find the pagination button
browser "click @e12" → click "Next Page"
browser "scrape" -o .firecrawl/products-p2.md → extract page 2 content
```

Example: login then scrape authenticated content
```
browser launch-session --profile my-app → create a named profile
browser "open https://app.example.com/login" → navigate to login
browser "snapshot -i" → find form fields
browser "fill @e3 'user@example.com'" → fill email
browser "click @e7" → click Login
browser "wait 2" → wait for redirect
browser close → disconnect, state persisted
browser launch-session --profile my-app → reconnect, cookies intact
browser "open https://app.example.com/dashboard" → already logged in
browser "scrape" -o .firecrawl/dashboard.md → extract authenticated content
browser close
```

Example: research task
```
search "firecrawl vs competitors 2024" --scrape -o .firecrawl/search-comparison-scraped.json
→ full content already fetched for each result
grep -n "pricing\|features" .firecrawl/search-comparison-scraped.json
head -200 .firecrawl/search-comparison-scraped.json → read and process what you have
→ notice a relevant URL in the content
scrape https://newsite.com/comparison -o .firecrawl/newsite-comparison.md
→ only scrape this new URL
```
## Output & Organization
Unless the user asks for results returned in context, write results to `.firecrawl/` with `-o`. Add `.firecrawl/` to `.gitignore`. Always quote URLs - the shell interprets `?` and `&` as special characters.

```bash
firecrawl search "react hooks" -o .firecrawl/search-react-hooks.json --json
firecrawl scrape "<url>" -o .firecrawl/page.md
```

Naming conventions:
```
.firecrawl/search-{query}.json
.firecrawl/search-{query}-scraped.json
.firecrawl/{site}-{path}.md
```

Never read entire output files at once. Use `grep`, `head`, or incremental reads:

```bash
wc -l .firecrawl/file.md && head -50 .firecrawl/file.md
grep -n "keyword" .firecrawl/file.md
```

A single format outputs raw content. Multiple formats (e.g., `--format markdown,links`) output JSON.
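The `{site}-{path}` form above can be derived mechanically from a URL. A sketch of one way to do it (the slug rule is just a convention, not something the CLI enforces):

```shell
url="https://docs.example.com/docs/api/authentication"
# Drop the scheme, trim any trailing slash, then flatten / into -.
slug=$(printf '%s' "$url" | sed -e 's|^[a-z]*://||' -e 's|/$||' -e 's|/|-|g')
echo ".firecrawl/$slug.md"   # .firecrawl/docs.example.com-docs-api-authentication.md
```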
## Commands
### search

Web search with optional content scraping. Run `firecrawl search --help` for all options.

Basic search:

```bash
firecrawl search "your query" -o .firecrawl/result.json --json
```

Search and scrape full page content from results:

```bash
firecrawl search "your query" --scrape -o .firecrawl/scraped.json --json
```

News from the past day:

```bash
firecrawl search "your query" --sources news --tbs qdr:d -o .firecrawl/news.json --json
```

Options: `--limit <n>`, `--sources <web,images,news>`, `--categories <github,research,pdf>`, `--tbs <qdr:h|d|w|m|y>`, `--location`, `--country <code>`, `--scrape`, `--scrape-formats`, `-o`
### scrape
Scrape one or more URLs. Multiple URLs are scraped concurrently, and each result is saved to `.firecrawl/`. Run `firecrawl scrape --help` for all options.

Basic markdown extraction:

```bash
firecrawl scrape "<url>" -o .firecrawl/page.md
```

Main content only, no nav/footer:

```bash
firecrawl scrape "<url>" --only-main-content -o .firecrawl/page.md
```

Wait for JS to render, then scrape:

```bash
firecrawl scrape "<url>" --wait-for 3000 -o .firecrawl/page.md
```

Multiple URLs can be passed in one invocation; each result is saved to `.firecrawl/`.

Get markdown and links together:

```bash
firecrawl scrape "<url>" --format markdown,links -o .firecrawl/page.json
```

Options: `-f <markdown,html,rawHtml,links,screenshot,json>`, `-H`, `--only-main-content`, `--wait-for <ms>`, `--include-tags`, `--exclude-tags`, `-o`
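When the URL list lives in a file, the per-URL commands can be generated rather than typed. A sketch with a made-up urls.txt, echoing the commands instead of running them (pipe the output to `sh` to execute):

```shell
printf '%s\n' \
  "https://example.com/a" \
  "https://example.com/b" > /tmp/urls.txt
# Each line becomes its own scrape, output named after the last path segment.
while IFS= read -r url; do
  echo firecrawl scrape "\"$url\"" -o ".firecrawl/$(basename "$url").md"
done < /tmp/urls.txt
```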
### map
Discover URLs on a site. Run `firecrawl map --help` for all options.

Find a specific page on a large site:

```bash
firecrawl map "<url>" --search "authentication" -o .firecrawl/filtered.txt
```

Get all URLs:

```bash
firecrawl map "<url>" --limit 500 --json -o .firecrawl/urls.json
```

Options: `--limit <n>`, `--search <query>`, `--sitemap <include|skip|only>`, `--include-subdomains`, `--json`, `-o`
### crawl
Bulk extract from a website. Run `firecrawl crawl --help` for all options.

Crawl a docs section:

```bash
firecrawl crawl "<url>" --include-paths /docs --limit 50 --wait -o .firecrawl/crawl.json
```

Full crawl with depth limit:

```bash
firecrawl crawl "<url>" --max-depth 3 --wait --progress -o .firecrawl/crawl.json
```

Check status of a running crawl:

```bash
firecrawl crawl <job-id>
```

Options: `--wait`, `--progress`, `--limit <n>`, `--max-depth <n>`, `--include-paths`, `--exclude-paths`, `--delay <ms>`, `--max-concurrency <n>`, `--pretty`, `-o`
### agent
AI-powered autonomous extraction (2-5 minutes). Run `firecrawl agent --help` for all options.

Extract structured data:

```bash
firecrawl agent "extract all pricing tiers" --wait -o .firecrawl/pricing.json
```

With a JSON schema for structured output:

```bash
firecrawl agent "extract products" --schema '{"type":"object","properties":{"name":{"type":"string"},"price":{"type":"number"}}}' --wait -o .firecrawl/products.json
```

Focus on specific pages:

```bash
firecrawl agent "get feature list" --urls "<url>" --wait -o .firecrawl/features.json
```

Options: `--urls`, `--model <spark-1-mini|spark-1-pro>`, `--schema <json>`, `--schema-file`, `--max-credits <n>`, `--wait`, `--pretty`, `-o`
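Inline `--schema` JSON is easy to break inside shell quoting, so validating it with `jq` before a slow agent run can save a round trip. A sketch reusing the schema from the example above:

```shell
schema='{"type":"object","properties":{"name":{"type":"string"},"price":{"type":"number"}}}'
# jq -e exits non-zero on malformed input, so this guards the agent call.
if printf '%s' "$schema" | jq -e . > /dev/null; then
  echo "schema ok"    # safe to pass as: --schema "$schema"
else
  echo "invalid schema JSON" >&2
fi
```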
### browser
Cloud Chromium sessions in Firecrawl's remote sandboxed environment. Run `firecrawl browser --help` and `firecrawl browser "agent-browser --help"` for all options.

Typical browser workflow:

```bash
firecrawl browser "open <url>"
firecrawl browser "snapshot -i"                   # see interactive elements with @ref IDs
firecrawl browser "click @e5"                     # interact with elements
firecrawl browser "fill @e3 'search query'"       # fill form fields
firecrawl browser "scrape" -o .firecrawl/page.md  # extract content
firecrawl browser close
```

The shorthand form auto-launches a session if none exists - no setup required.
**Core agent-browser commands:**
| Command | Description |
| -------------------- | ---------------------------------------- |
| `open <url>` | Navigate to a URL |
| `snapshot -i` | Get interactive elements with `@ref` IDs |
| `screenshot` | Capture a PNG screenshot |
| `click <@ref>` | Click an element by ref |
| `type <@ref> <text>` | Type into an element |
| `fill <@ref> <text>` | Fill a form field (clears first) |
| `scrape` | Extract page content as markdown |
| `scroll <direction>` | Scroll up/down/left/right |
| `wait <seconds>` | Wait for a duration |
| `eval <js>` | Evaluate JavaScript on the page |
Session management: `launch-session --ttl 600`, `list`, `close`
Options: `--ttl <seconds>`, `--ttl-inactivity <seconds>`, `--session <id>`, `--profile <name>`, `--no-save-changes`, `-o`
**Profiles** survive close and can be reconnected by name. Use them when you need to log in first, then come back later to work while already authenticated:
```bash
# Session 1: Login and save state
firecrawl browser launch-session --profile my-app
firecrawl browser "open https://app.example.com/login"
firecrawl browser "snapshot -i"
firecrawl browser "fill @e3 'user@example.com'"
firecrawl browser "click @e7"
firecrawl browser "wait 2"
firecrawl browser close

# Session 2: Come back authenticated
firecrawl browser launch-session --profile my-app
firecrawl browser "open https://app.example.com/dashboard"
firecrawl browser "scrape" -o .firecrawl/dashboard.md
firecrawl browser close
```
Read-only reconnect (no writes to session state):

```bash
firecrawl browser launch-session --profile my-app --no-save-changes
```

Shorthand with profile:

```bash
firecrawl browser --profile my-app "open https://example.com"
```

If you get forbidden errors in the browser, you may need to create a new session, as the old one may have expired.
### credit-usage
```bash
firecrawl credit-usage
firecrawl credit-usage --json --pretty -o .firecrawl/credits.json
```

## Working with Results
These patterns are useful when working with file-based output (the `-o` flag) for complex tasks.

Extract URLs from search:

```bash
jq -r '.data.web[].url' .firecrawl/search.json
```

Get titles and URLs:

```bash
jq -r '.data.web[] | "\(.title): \(.url)"' .firecrawl/search.json
```
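The `.data.web` shape these filters assume can be sanity-checked against a small simulated payload (the sample file below is made up, not real CLI output):

```shell
# Simulate a search result file to exercise the jq filters above.
mkdir -p .firecrawl
cat > .firecrawl/search-sample.json <<'EOF'
{"data":{"web":[{"title":"Docs","url":"https://example.com/docs"},
                {"title":"Blog","url":"https://example.com/blog"}]}}
EOF
jq -r '.data.web[] | "\(.title): \(.url)"' .firecrawl/search-sample.json
# Docs: https://example.com/docs
# Blog: https://example.com/blog
```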
## Parallelization
Run independent operations in parallel. Check `firecrawl --status` for the concurrency limit:

```bash
firecrawl scrape "<url-1>" -o .firecrawl/1.md &
firecrawl scrape "<url-2>" -o .firecrawl/2.md &
firecrawl scrape "<url-3>" -o .firecrawl/3.md &
wait
```

For browser, launch separate sessions for independent tasks and operate them in parallel via `--session <id>`.
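The `&`/`wait` pattern fixes the batch size by hand; `xargs -P` caps the worker count for an arbitrary URL list. A sketch with a made-up list, echoing the commands rather than running them (drop the `echo` to execute, keeping `-P` at or below your concurrency limit):

```shell
printf '%s\n' \
  "https://example.com/1" \
  "https://example.com/2" \
  "https://example.com/3" > /tmp/urls.txt
# At most 3 workers at a time; each URL becomes one scrape command.
xargs -P3 -I{} sh -c 'echo firecrawl scrape "$1" -o ".firecrawl/$(basename "$1").md"' _ {} < /tmp/urls.txt
```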
## Bulk Download

### download
Convenience command that combines `map` + `scrape` to save a site as local files. Maps the site first to discover pages, then scrapes each one into nested directories under `.firecrawl/`. All scrape options work with download. Always pass `-y` to skip the confirmation prompt. Run `firecrawl download --help` for all options.

Interactive wizard (picks format, screenshots, paths for you):
```bash
firecrawl download https://docs.firecrawl.dev
```

With screenshots:

```bash
firecrawl download https://docs.firecrawl.dev --screenshot --limit 20 -y
```

Multiple formats (each saved as its own file per page):

```bash
firecrawl download https://docs.firecrawl.dev --format markdown,links --screenshot --limit 20 -y
```

Creates per page: index.md + links.txt + screenshot.png

Filter to specific sections:

```bash
firecrawl download https://docs.firecrawl.dev --include-paths "/features,/sdks"
```

Skip translations:

```bash
firecrawl download https://docs.firecrawl.dev --exclude-paths "/zh,/ja,/fr,/es,/pt-BR"
```
Full combo:

```bash
firecrawl download https://docs.firecrawl.dev \
  --include-paths "/features,/sdks" \
  --exclude-paths "/zh,/ja" \
  --only-main-content \
  --screenshot \
  -y
```

Download options: `--limit <n>`, `--search <query>`, `--include-paths <paths>`, `--exclude-paths <paths>`, `--allow-subdomains`, `-y`

Scrape options (all work with download): `-f <formats>`, `-H`, `-S`, `--screenshot`, `--full-page-screenshot`, `--only-main-content`, `--include-tags`, `--exclude-tags`, `--wait-for`, `--max-age`, `--country`, `--languages`