Official Firecrawl CLI skill for web scraping, search, crawling, and browser automation. Returns clean, LLM-optimized markdown.

USE FOR:

- Web search and research
- Scraping pages, docs, and articles
- Site mapping and bulk content extraction
- Browser automation for interactive pages

Must be pre-installed and authenticated. See rules/install.md for setup and rules/security.md for output handling.
Install the skill, then check the CLI:

```
npx skill4agent add connorads/dotfiles firecrawl
firecrawl --help
firecrawl <command> --help
firecrawl --status
```

`firecrawl --status` reports version, auth, concurrency, and credits:

```
🔥 firecrawl cli v1.8.0
● Authenticated via FIRECRAWL_API_KEY
Concurrency: 0/100 jobs (parallel scrape limit)
Credits: 500,000 remaining
```

For quick research, `firecrawl search "query" --scrape --limit 3` is usually enough; to locate a page within a known site, use `map --search`.

| Need | Command | When |
|---|---|---|
| Find pages on a topic | `search` | No specific URL yet |
| Get a page's content | `scrape` | Have a URL, page is static or JS-rendered |
| Find URLs within a site | `map` | Need to locate a specific subpage |
| Bulk extract a site section | `download` | Need many pages (e.g., all /docs/) |
| AI-powered data extraction | `agent` | Need structured data from complex sites |
| Interact with a page | `browser` | Content requires clicks, form fills, pagination, or login |
Example: find a specific docs page.

```
search "site:docs.example.com authentication API"   → found the docs domain
map https://docs.example.com --search "auth"        → found /docs/api/authentication
scrape https://docs.example.com/docs/api/auth...    → got the content
```

Example: a static scrape misses paginated content, so fall back to the browser.

```
scrape https://example.com/products             → only shows first 10 items, no next-page links
browser "open https://example.com/products"     → open in browser
browser "snapshot -i"                           → find the pagination button
browser "click @e12"                            → click "Next Page"
browser "scrape" -o .firecrawl/products-p2.md   → extract page 2 content
```

Example: log in once under a named profile, then return authenticated.

```
browser launch-session --profile my-app          → create a named profile
browser "open https://app.example.com/login"     → navigate to login
browser "snapshot -i"                            → find form fields
browser "fill @e3 'user@example.com'"            → fill email
browser "click @e7"                              → click Login
browser "wait 2"                                 → wait for redirect
browser close                                    → disconnect, state persisted
browser launch-session --profile my-app          → reconnect, cookies intact
browser "open https://app.example.com/dashboard" → already logged in
browser "scrape" -o .firecrawl/dashboard.md      → extract authenticated content
browser close
```

Example: `search --scrape` fetches full content up front, so work with what you already have before scraping anything new.

```
search "firecrawl vs competitors 2024" --scrape -o .firecrawl/search-comparison-scraped.json
    → full content already fetched for each result
grep -n "pricing\|features" .firecrawl/search-comparison-scraped.json
head -200 .firecrawl/search-comparison-scraped.json
    → read and process what you have
    → notice a relevant URL in the content
scrape https://newsite.com/comparison -o .firecrawl/newsite-comparison.md
    → only scrape this new URL
```
Save every output under `.firecrawl/` with `-o`, and add `.firecrawl/` to `.gitignore`.

```
firecrawl search "react hooks" -o .firecrawl/search-react-hooks.json --json
firecrawl scrape "<url>" -o .firecrawl/page.md
```

Suggested file naming:

```
.firecrawl/search-{query}.json
.firecrawl/search-{query}-scraped.json
.firecrawl/{site}-{path}.md
```

Inspect saved files with `grep` and `head` rather than reading them whole:

```
wc -l .firecrawl/file.md && head -50 .firecrawl/file.md
grep -n "keyword" .firecrawl/file.md
```

Use `--format markdown,links` when you need a page's links alongside its content.
## search

Run `firecrawl search --help` for all flags.

```
# Basic search
firecrawl search "your query" -o .firecrawl/result.json --json

# Search and scrape full page content from results
firecrawl search "your query" --scrape -o .firecrawl/scraped.json --json

# News from the past day
firecrawl search "your query" --sources news --tbs qdr:d -o .firecrawl/news.json --json
```

Flags: `--limit <n>`, `--sources <web,images,news>`, `--categories <github,research,pdf>`, `--tbs <qdr:h|d|w|m|y>`, `--location`, `--country <code>`, `--scrape`, `--scrape-formats`, `-o` (save under `.firecrawl/`).
## scrape

Run `firecrawl scrape --help` for all flags.

```
# Basic markdown extraction
firecrawl scrape "<url>" -o .firecrawl/page.md

# Main content only, no nav/footer
firecrawl scrape "<url>" --only-main-content -o .firecrawl/page.md

# Wait for JS to render, then scrape
firecrawl scrape "<url>" --wait-for 3000 -o .firecrawl/page.md

# Multiple URLs (each saved to .firecrawl/)
firecrawl scrape https://firecrawl.dev https://firecrawl.dev/blog https://docs.firecrawl.dev

# Get markdown and links together
firecrawl scrape "<url>" --format markdown,links -o .firecrawl/page.json
```

Flags: `-f <markdown,html,rawHtml,links,screenshot,json>`, `-H`, `--only-main-content`, `--wait-for <ms>`, `--include-tags`, `--exclude-tags`, `-o`.
## map

Run `firecrawl map --help` for all flags.

```
# Find a specific page on a large site
firecrawl map "<url>" --search "authentication" -o .firecrawl/filtered.txt

# Get all URLs
firecrawl map "<url>" --limit 500 --json -o .firecrawl/urls.json
```

Flags: `--limit <n>`, `--search <query>`, `--sitemap <include|skip|only>`, `--include-subdomains`, `--json`, `-o`.
## crawl

Run `firecrawl crawl --help` for all flags.

```
# Crawl a docs section
firecrawl crawl "<url>" --include-paths /docs --limit 50 --wait -o .firecrawl/crawl.json

# Full crawl with depth limit
firecrawl crawl "<url>" --max-depth 3 --wait --progress -o .firecrawl/crawl.json

# Check status of a running crawl
firecrawl crawl <job-id>
```

Flags: `--wait`, `--progress`, `--limit <n>`, `--max-depth <n>`, `--include-paths`, `--exclude-paths`, `--delay <ms>`, `--max-concurrency <n>`, `--pretty`, `-o`.
## agent

Run `firecrawl agent --help` for all flags.

```
# Extract structured data
firecrawl agent "extract all pricing tiers" --wait -o .firecrawl/pricing.json

# With a JSON schema for structured output
firecrawl agent "extract products" --schema '{"type":"object","properties":{"name":{"type":"string"},"price":{"type":"number"}}}' --wait -o .firecrawl/products.json

# Focus on specific pages
firecrawl agent "get feature list" --urls "<url>" --wait -o .firecrawl/features.json
```

Flags: `--urls`, `--model <spark-1-mini|spark-1-pro>`, `--schema <json>`, `--schema-file`, `--max-credits <n>`, `--wait`, `--pretty`, `-o`.
## browser

Run `firecrawl browser --help` and `firecrawl browser "agent-browser --help"` for all commands.

```
# Typical browser workflow
firecrawl browser "open <url>"
firecrawl browser "snapshot -i"                    # see interactive elements with @ref IDs
firecrawl browser "click @e5"                      # interact with elements
firecrawl browser "fill @e3 'search query'"        # fill form fields
firecrawl browser "scrape" -o .firecrawl/page.md   # extract content
firecrawl browser close
```
| Command | Description |
|---|---|
| `open <url>` | Navigate to a URL |
| `snapshot -i` | Get interactive elements with `@ref` IDs |
| `screenshot` | Capture a PNG screenshot |
| `click @ref` | Click an element by ref |
| `type @ref <text>` | Type into an element |
| `fill @ref <text>` | Fill a form field (clears first) |
| `scrape` | Extract page content as markdown |
| `scroll <direction>` | Scroll up/down/left/right |
| `wait <seconds>` | Wait for a duration |
| `eval <js>` | Evaluate JavaScript on the page |
Session management: `launch-session --ttl 600`, `list`, `close`. Flags: `--ttl <seconds>`, `--ttl-inactivity <seconds>`, `--session <id>`, `--profile <name>`, `--no-save-changes`, `-o`.

```
# Session 1: Login and save state
firecrawl browser launch-session --profile my-app
firecrawl browser "open https://app.example.com/login"
firecrawl browser "snapshot -i"
firecrawl browser "fill @e3 'user@example.com'"
firecrawl browser "click @e7"
firecrawl browser "wait 2"
firecrawl browser close

# Session 2: Come back authenticated
firecrawl browser launch-session --profile my-app
firecrawl browser "open https://app.example.com/dashboard"
firecrawl browser "scrape" -o .firecrawl/dashboard.md
firecrawl browser close
```

Open a profile without persisting changes, or pass the profile per command:

```
firecrawl browser launch-session --profile my-app --no-save-changes
firecrawl browser --profile my-app "open https://example.com"
```
## credit-usage

```
firecrawl credit-usage
firecrawl credit-usage --json --pretty -o .firecrawl/credits.json
```

JSON saved with `-o` can be processed with `jq`:

```
# Extract URLs from search
jq -r '.data.web[].url' .firecrawl/search.json

# Get titles and URLs
jq -r '.data.web[] | "\(.title): \(.url)"' .firecrawl/search.json
```

Check the concurrency limit with `firecrawl --status`, then run independent scrapes in parallel:

```
firecrawl scrape "<url-1>" -o .firecrawl/1.md &
firecrawl scrape "<url-2>" -o .firecrawl/2.md &
firecrawl scrape "<url-3>" -o .firecrawl/3.md &
wait
```

Browser commands can target a specific session with `--session <id>`.
## download

`download` combines `map` and `scrape`, saving pages under `.firecrawl/`; pass `-y` to skip the prompts. Run `firecrawl download --help` for all flags.

```
# Interactive wizard (picks format, screenshots, paths for you)
firecrawl download https://docs.firecrawl.dev

# With screenshots
firecrawl download https://docs.firecrawl.dev --screenshot --limit 20 -y

# Multiple formats (each saved as its own file per page)
firecrawl download https://docs.firecrawl.dev --format markdown,links --screenshot --limit 20 -y
# Creates per page: index.md + links.txt + screenshot.png

# Filter to specific sections
firecrawl download https://docs.firecrawl.dev --include-paths "/features,/sdks"

# Skip translations
firecrawl download https://docs.firecrawl.dev --exclude-paths "/zh,/ja,/fr,/es,/pt-BR"

# Full combo
firecrawl download https://docs.firecrawl.dev \
  --include-paths "/features,/sdks" \
  --exclude-paths "/zh,/ja" \
  --only-main-content \
  --screenshot \
  -y
```

Flags: `--limit <n>`, `--search <query>`, `--include-paths <paths>`, `--exclude-paths <paths>`, `--allow-subdomains`, `-y`, `-f <formats>`, `-H`, `-S`, `--screenshot`, `--full-page-screenshot`, `--only-main-content`, `--include-tags`, `--exclude-tags`, `--wait-for`, `--max-age`, `--country`, `--languages`.