Bulk extract content from an entire website or site section. Use this skill when the user wants to crawl a site, extract all pages from a docs section, bulk-scrape multiple pages following links, or says "crawl", "get all the pages", "extract everything under /docs", "bulk extract", or needs content from many pages on the same site. Handles depth limits, path filtering, and concurrent extraction.
```shell
npx skill4agent add firecrawl/cli firecrawl-crawl
```

```shell
# Crawl a docs section
firecrawl crawl "<url>" --include-paths /docs --limit 50 --wait -o .firecrawl/crawl.json

# Full crawl with depth limit
firecrawl crawl "<url>" --max-depth 3 --wait --progress -o .firecrawl/crawl.json

# Check status of a running crawl
firecrawl crawl <job-id>
```

| Option | Description |
|---|---|
| `--wait` | Wait for crawl to complete before returning |
| `--progress` | Show progress while waiting |
| `--limit` | Max pages to crawl |
| `--max-depth` | Max link depth to follow |
| `--include-paths` | Only crawl URLs matching these paths |
| `--exclude-paths` | Skip URLs matching these paths |
| `--delay` | Delay between requests |
| `--concurrency` | Max parallel crawl workers |
| `--pretty` | Pretty-print JSON output |
| `-o, --output` | Output file path |
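Once a crawl finishes, the JSON written via `-o` can be post-processed with standard tools. The sketch below lists the URLs captured in a crawl; the output schema it assumes (a top-level `data` array, each page carrying `metadata.sourceURL` and `markdown`) is inferred from Firecrawl's API responses and is not confirmed by this page, so check your own output file first.

```shell
# Sketch: extract the list of crawled URLs from a finished crawl.
# The sample file below mimics the ASSUMED output schema; replace it
# with the real .firecrawl/crawl.json produced by `firecrawl crawl`.
mkdir -p .firecrawl
cat > .firecrawl/crawl.json <<'EOF'
{"status":"completed","data":[
 {"markdown":"# Home","metadata":{"sourceURL":"https://example.com/docs"}},
 {"markdown":"# Install","metadata":{"sourceURL":"https://example.com/docs/install"}}
]}
EOF

# Pull every sourceURL value out of the JSON into a plain list.
grep -o '"sourceURL":"[^"]*"' .firecrawl/crawl.json \
  | sed 's/.*:"\(.*\)"/\1/' > .firecrawl/urls.txt
cat .firecrawl/urls.txt
```

For anything beyond a quick URL list (e.g. writing each page's `markdown` to its own file), a real JSON parser such as `jq` is a better fit than `grep`/`sed`.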
Use `--wait` to block until the crawl completes, `--include-paths` to scope a crawl to one site section, and `firecrawl credit-usage` to check remaining credits before a large crawl.
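If you start a crawl without `--wait`, you can poll it yourself with `firecrawl crawl <job-id>`. The loop below sketches that pattern; since the real CLI needs a live job, a stub function stands in for the status check here, and the exact status strings (`scraping`, `completed`) are assumptions based on Firecrawl's job states.

```shell
# Sketch: poll a crawl job until it reports "completed".
rm -f /tmp/crawl_done

# Stand-in for: firecrawl crawl "$JOB_ID"
# First call reports "scraping", later calls report "completed".
check_status() {
  if [ -f /tmp/crawl_done ]; then
    echo completed
  else
    touch /tmp/crawl_done
    echo scraping
  fi
}

while true; do
  status=$(check_status)
  echo "status: $status"
  [ "$status" = completed ] && break
  sleep 1   # back off between polls; tune for real jobs
done
```

In practice `--wait --progress` covers most cases; manual polling is mainly useful when the crawl is kicked off by one process and consumed by another.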