firecrawl


Firecrawl CLI


Web scraping, search, and browser automation CLI. Returns clean markdown optimized for LLM context windows. Run `firecrawl --help` or `firecrawl <command> --help` for full option details.

Prerequisites


Must be installed and authenticated. Check with `firecrawl --status`.

```
  🔥 firecrawl cli v1.8.0

  ● Authenticated via FIRECRAWL_API_KEY
  Concurrency: 0/100 jobs (parallel scrape limit)
  Credits: 500,000 remaining
```

- **Concurrency:** Max parallel jobs. Run parallel operations up to this limit.
- **Credits:** Remaining API credits. Each scrape/crawl consumes credits.

If not ready, see rules/install.md. For output handling guidelines, see rules/security.md.

```bash
firecrawl search "query" --scrape --limit 3
```
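A preflight guard can make this check explicit in scripts. This is a sketch, not a documented pattern: `fc_ready` is a hypothetical helper, and it assumes `firecrawl --status` exits non-zero when the CLI is unauthenticated, so verify that against your installed version.

```shell
# fc_ready: hypothetical helper -- reports whether the CLI is usable.
# Assumes `firecrawl --status` exits non-zero when unauthenticated.
fc_ready() {
  command -v firecrawl >/dev/null 2>&1 || { echo "not installed"; return 1; }
  firecrawl --status >/dev/null 2>&1  || { echo "not authenticated"; return 1; }
  echo "ready"
}

fc_ready || echo "see rules/install.md"
```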

Workflow


Follow this escalation pattern:

1. **Search** - No specific URL yet. Find pages, answer questions, discover sources.
2. **Scrape** - Have a URL. Extract its content directly.
3. **Map + Scrape** - Large site or need a specific subpage. Use `map --search` to find the right URL, then scrape it.
4. **Crawl** - Need bulk content from an entire site section (e.g., all /docs/).
5. **Browser** - Scrape failed because content is behind interaction (pagination, modals, form submissions, multi-step navigation).
| Need | Command | When |
| ---- | ------- | ---- |
| Find pages on a topic | `search` | No specific URL yet |
| Get a page's content | `scrape` | Have a URL, page is static or JS-rendered |
| Find URLs within a site | `map` | Need to locate a specific subpage |
| Bulk extract a site section | `crawl` | Need many pages (e.g., all /docs/) |
| AI-powered data extraction | `agent` | Need structured data from complex sites |
| Interact with a page | `browser` | Content requires clicks, form fills, pagination, or login |
See also: `download` -- a convenience command that combines `map` + `scrape` to save an entire site to local files.

Scrape vs browser:

- Use `scrape` first. It handles static pages and JS-rendered SPAs.
- Use `browser` when you need to interact with a page: clicking buttons, filling out forms, navigating a complex site, infinite scroll, or when scrape fails to grab all the content you need.
- Never use browser for web searches - use `search` instead.
Avoid redundant fetches:

- `search --scrape` already fetches full page content. Don't re-scrape those URLs.
- Check `.firecrawl/` for existing data before fetching again.
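The cache check can be wrapped in a small shell helper that skips the fetch when output already exists. A sketch only - `scrape_cached` is a hypothetical wrapper, not a firecrawl command:

```shell
# scrape_cached: hypothetical wrapper -- fetch a URL only when no prior
# output exists, so repeat runs spend no extra credits.
scrape_cached() {
  url="$1"; out="$2"
  mkdir -p .firecrawl
  if [ -s "$out" ]; then
    echo "cache hit: $out"       # reuse what's already on disk
  else
    firecrawl scrape "$url" -o "$out"
  fi
}
```

Usage: `scrape_cached "https://docs.example.com/api" .firecrawl/docs.example.com-api.md` (URL and path are illustrative).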
Example: fetching API docs from a large site

```bash
search "site:docs.example.com authentication API"   →  found the docs domain
map https://docs.example.com --search "auth"        →  found /docs/api/authentication
scrape https://docs.example.com/docs/api/auth...    →  got the content
```

Example: data behind pagination

```bash
scrape https://example.com/products                 →  only shows first 10 items, no next-page links
browser "open https://example.com/products"         →  open in browser
browser "snapshot -i"                               →  find the pagination button
browser "click @e12"                                →  click "Next Page"
browser "scrape" -o .firecrawl/products-p2.md       →  extract page 2 content
```

Example: login then scrape authenticated content

```bash
browser launch-session --profile my-app             →  create a named profile
browser "open https://app.example.com/login"        →  navigate to login
browser "snapshot -i"                               →  find form fields
browser "fill @e3 'user@example.com'"               →  fill email
browser "click @e7"                                 →  click Login
browser "wait 2"                                    →  wait for redirect
browser close                                       →  disconnect, state persisted

browser launch-session --profile my-app             →  reconnect, cookies intact
browser "open https://app.example.com/dashboard"    →  already logged in
browser "scrape" -o .firecrawl/dashboard.md         →  extract authenticated content
browser close
```

Example: research task

```bash
search "firecrawl vs competitors 2024" --scrape -o .firecrawl/search-comparison-scraped.json
                                                    →  full content already fetched for each result
grep -n "pricing\|features" .firecrawl/search-comparison-scraped.json
head -200 .firecrawl/search-comparison-scraped.json →  read and process what you have
                                                    →  notice a relevant URL in the content
scrape https://newsite.com/comparison -o .firecrawl/newsite-comparison.md
                                                    →  only scrape this new URL
```

Output & Organization


Unless the user specifies to return results in context, write them to `.firecrawl/` with `-o`. Add `.firecrawl/` to `.gitignore`. Always quote URLs - the shell interprets `?` and `&` as special characters.

```bash
firecrawl search "react hooks" -o .firecrawl/search-react-hooks.json --json
firecrawl scrape "<url>" -o .firecrawl/page.md
```

Naming conventions:

```
.firecrawl/search-{query}.json
.firecrawl/search-{query}-scraped.json
.firecrawl/{site}-{path}.md
```
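A small helper can derive names in this shape mechanically. `slug` is a hypothetical function, not part of the CLI; adjust the substitutions to taste:

```shell
# slug: turn a URL into a filesystem-friendly {site}-{path} style name.
slug() {
  printf '%s' "$1" | sed -e 's|^[a-z]*://||' -e 's|[/?&=]|-|g'
}

slug "https://docs.example.com/docs/api/auth"
# → docs.example.com-docs-api-auth
```

Then e.g. `firecrawl scrape "$url" -o ".firecrawl/$(slug "$url").md"`.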
Never read entire output files at once. Use `grep`, `head`, or incremental reads:

```bash
wc -l .firecrawl/file.md && head -50 .firecrawl/file.md
grep -n "keyword" .firecrawl/file.md
```

A single format outputs raw content. Multiple formats (e.g., `--format markdown,links`) output JSON.
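When you need the formats as separate files again, `jq` can split the JSON back apart. The `markdown` and `links` keys below are assumptions about the output shape - inspect your actual file (e.g. `head -c 300 .firecrawl/page.json`) before relying on them. A stand-in file is created here so the sketch runs as-is:

```shell
# Stand-in for a real multi-format result (assumed shape, for illustration).
mkdir -p .firecrawl
printf '%s' '{"markdown":"# Title","links":["https://a.example"]}' > .firecrawl/page.json

# Split the assumed top-level keys into per-format files.
jq -r '.markdown' .firecrawl/page.json > .firecrawl/page.md
jq -r '.links[]'  .firecrawl/page.json > .firecrawl/page-links.txt
```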

Commands


search


Web search with optional content scraping. Run `firecrawl search --help` for all options.

```bash
# Basic search
firecrawl search "your query" -o .firecrawl/result.json --json

# Search and scrape full page content from results
firecrawl search "your query" --scrape -o .firecrawl/scraped.json --json

# News from the past day
firecrawl search "your query" --sources news --tbs qdr:d -o .firecrawl/news.json --json
```

Options: `--limit <n>`, `--sources <web,images,news>`, `--categories <github,research,pdf>`, `--tbs <qdr:h|d|w|m|y>`, `--location`, `--country <code>`, `--scrape`, `--scrape-formats`, `-o`

scrape


Scrape one or more URLs. Multiple URLs are scraped concurrently and each result is saved to `.firecrawl/`. Run `firecrawl scrape --help` for all options.

```bash
# Basic markdown extraction
firecrawl scrape "<url>" -o .firecrawl/page.md

# Main content only, no nav/footer
firecrawl scrape "<url>" --only-main-content -o .firecrawl/page.md

# Wait for JS to render, then scrape
firecrawl scrape "<url>" --wait-for 3000 -o .firecrawl/page.md

# Multiple URLs (each saved to .firecrawl/)
firecrawl scrape "<url-1>" "<url-2>"

# Get markdown and links together
firecrawl scrape "<url>" --format markdown,links -o .firecrawl/page.json
```

Options: `-f <markdown,html,rawHtml,links,screenshot,json>`, `-H`, `--only-main-content`, `--wait-for <ms>`, `--include-tags`, `--exclude-tags`, `-o`

map


Discover URLs on a site. Run `firecrawl map --help` for all options.

```bash
# Find a specific page on a large site
firecrawl map "<url>" --search "authentication" -o .firecrawl/filtered.txt

# Get all URLs
firecrawl map "<url>" --limit 500 --json -o .firecrawl/urls.json
```

Options: `--limit <n>`, `--search <query>`, `--sitemap <include|skip|only>`, `--include-subdomains`, `--json`, `-o`

crawl


Bulk extract from a website. Run `firecrawl crawl --help` for all options.

```bash
# Crawl a docs section
firecrawl crawl "<url>" --include-paths /docs --limit 50 --wait -o .firecrawl/crawl.json

# Full crawl with depth limit
firecrawl crawl "<url>" --max-depth 3 --wait --progress -o .firecrawl/crawl.json

# Check status of a running crawl
firecrawl crawl <job-id>
```

Options: `--wait`, `--progress`, `--limit <n>`, `--max-depth <n>`, `--include-paths`, `--exclude-paths`, `--delay <ms>`, `--max-concurrency <n>`, `--pretty`, `-o`

agent


AI-powered autonomous extraction (2-5 minutes). Run `firecrawl agent --help` for all options.

```bash
# Extract structured data
firecrawl agent "extract all pricing tiers" --wait -o .firecrawl/pricing.json

# With a JSON schema for structured output
firecrawl agent "extract products" --schema '{"type":"object","properties":{"name":{"type":"string"},"price":{"type":"number"}}}' --wait -o .firecrawl/products.json

# Focus on specific pages
firecrawl agent "get feature list" --urls "<url>" --wait -o .firecrawl/features.json
```

Options: `--urls`, `--model <spark-1-mini|spark-1-pro>`, `--schema <json>`, `--schema-file`, `--max-credits <n>`, `--wait`, `--pretty`, `-o`

browser


Cloud Chromium sessions in Firecrawl's remote sandboxed environment. Run `firecrawl browser --help` and `firecrawl browser "agent-browser --help"` for all options.

Typical browser workflow

```bash
firecrawl browser "open <url>"
firecrawl browser "snapshot -i"                   # see interactive elements with @ref IDs
firecrawl browser "click @e5"                     # interact with elements
firecrawl browser "fill @e3 'search query'"       # fill form fields
firecrawl browser "scrape" -o .firecrawl/page.md  # extract content
firecrawl browser close
```

Shorthand auto-launches a session if none exists - no setup required.

**Core agent-browser commands:**

| Command              | Description                              |
| -------------------- | ---------------------------------------- |
| `open <url>`         | Navigate to a URL                        |
| `snapshot -i`        | Get interactive elements with `@ref` IDs |
| `screenshot`         | Capture a PNG screenshot                 |
| `click <@ref>`       | Click an element by ref                  |
| `type <@ref> <text>` | Type into an element                     |
| `fill <@ref> <text>` | Fill a form field (clears first)         |
| `scrape`             | Extract page content as markdown         |
| `scroll <direction>` | Scroll up/down/left/right                |
| `wait <seconds>`     | Wait for a duration                      |
| `eval <js>`          | Evaluate JavaScript on the page          |

Session management: `launch-session --ttl 600`, `list`, `close`

Options: `--ttl <seconds>`, `--ttl-inactivity <seconds>`, `--session <id>`, `--profile <name>`, `--no-save-changes`, `-o`

**Profiles** survive close and can be reconnected by name. Use them when you need to login first, then come back later to do work while already authenticated:

```bash
# Session 1: Login and save state
firecrawl browser launch-session --profile my-app
firecrawl browser "open https://app.example.com/login"
firecrawl browser "snapshot -i"
firecrawl browser "fill @e3 'user@example.com'"
firecrawl browser "click @e7"
firecrawl browser "wait 2"
firecrawl browser close

# Session 2: Come back authenticated
firecrawl browser launch-session --profile my-app
firecrawl browser "open https://app.example.com/dashboard"
firecrawl browser "scrape" -o .firecrawl/dashboard.md
firecrawl browser close
```

Read-only reconnect (no writes to session state):

```bash
firecrawl browser launch-session --profile my-app --no-save-changes
```
Shorthand with profile:

```bash
firecrawl browser --profile my-app "open https://example.com"
```
If you get forbidden errors in the browser, you may need to create a new session as the old one may have expired.

credit-usage


```bash
firecrawl credit-usage
firecrawl credit-usage --json --pretty -o .firecrawl/credits.json
```

Working with Results


These patterns are useful when working with file-based output (`-o` flag) for complex tasks:

```bash
# Extract URLs from search
jq -r '.data.web[].url' .firecrawl/search.json

# Get titles and URLs
jq -r '.data.web[] | "\(.title): \(.url)"' .firecrawl/search.json
```

Parallelization


Run independent operations in parallel. Check `firecrawl --status` for the concurrency limit:

```bash
firecrawl scrape "<url-1>" -o .firecrawl/1.md &
firecrawl scrape "<url-2>" -o .firecrawl/2.md &
firecrawl scrape "<url-3>" -o .firecrawl/3.md &
wait
```

For browser, launch separate sessions for independent tasks and operate them in parallel via `--session <id>`.
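When the list of jobs is long, a generic cap keeps you under the concurrency limit without hand-writing batches. `run_capped` is a plain-shell sketch, not a firecrawl feature; feed it one command per line:

```shell
# run_capped: hypothetical helper -- run each command line from stdin in
# the background, with at most $1 in flight at once.
run_capped() {
  max="$1"
  while IFS= read -r cmd; do
    # Block while the number of live background jobs is at the cap.
    while [ "$(jobs -p | wc -l)" -ge "$max" ]; do sleep 1; done
    eval "$cmd" &
  done
  wait
}
```

For example: `printf '%s\n' 'firecrawl scrape "<url-1>" -o .firecrawl/1.md' 'firecrawl scrape "<url-2>" -o .firecrawl/2.md' | run_capped 5`, with the cap chosen from `firecrawl --status`.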

Bulk Download


download


Convenience command that combines `map` + `scrape` to save a site as local files. It maps the site first to discover pages, then scrapes each one into nested directories under `.firecrawl/`. All scrape options work with download. Always pass `-y` to skip the confirmation prompt. Run `firecrawl download --help` for all options.

```bash
# Interactive wizard (picks format, screenshots, paths for you)
firecrawl download https://docs.firecrawl.dev

# With screenshots
firecrawl download https://docs.firecrawl.dev --screenshot --limit 20 -y

# Multiple formats (each saved as its own file per page)
# Creates per page: index.md + links.txt + screenshot.png
firecrawl download https://docs.firecrawl.dev --format markdown,links --screenshot --limit 20 -y

# Filter to specific sections
firecrawl download https://docs.firecrawl.dev --include-paths "/features,/sdks"

# Skip translations
firecrawl download https://docs.firecrawl.dev --exclude-paths "/zh,/ja,/fr,/es,/pt-BR"

# Full combo
firecrawl download https://docs.firecrawl.dev \
  --include-paths "/features,/sdks" \
  --exclude-paths "/zh,/ja" \
  --only-main-content \
  --screenshot \
  -y
```

Download options: `--limit <n>`, `--search <query>`, `--include-paths <paths>`, `--exclude-paths <paths>`, `--allow-subdomains`, `-y`

Scrape options (all work with download): `-f <formats>`, `-H`, `-S`, `--screenshot`, `--full-page-screenshot`, `--only-main-content`, `--include-tags`, `--exclude-tags`, `--wait-for`, `--max-age`, `--country`, `--languages`