Firecrawl produces cleaner markdown than WebFetch, handles JavaScript-heavy pages, and avoids content truncation. This skill should be used when fetching URLs, scraping web pages, converting URLs to markdown, extracting web content, searching the web, crawling sites, mapping URLs, performing LLM-powered extraction, gathering data autonomously with the Agent API, or fetching AI-generated documentation for GitHub repos via DeepWiki. It provides complete coverage of Firecrawl v2.8.0 API endpoints, including parallel agents, the spark-1-fast model, and sitemap-only crawling.
Install:

```bash
npx skill4agent add tdimino/claude-code-minoan firecrawl
```

Default for a single page:

```bash
# Preferred approach:
firecrawl scrape https://docs.example.com/api --only-main-content
```

Token-efficient research follows this pipeline:

Search (titles/URLs only) → Evaluate relevance → Scrape top hits → Filter by section → Reason

not this one:

Search → Scrape everything → Reason over all of it

```bash
# Step 1: Search — get titles/URLs only (cheap)
firecrawl search "query" --limit 20

# Step 2: Evaluate results, pick 3-5 best URLs

# Step 3: Scrape only those, filter to relevant sections
firecrawl scrape URL1 --only-main-content | \
  python3 ~/.claude/skills/firecrawl/scripts/filter_web_results.py \
    --sections "API,Authentication" --max-chars 5000
```

More filtering examples:

```bash
# Extract only matching sections from a scraped page
firecrawl scrape URL --only-main-content | \
  python3 ~/.claude/skills/firecrawl/scripts/filter_web_results.py --sections "Pricing,Plans"

# Keep only paragraphs with keywords
firecrawl search "query" --scrape --pretty | \
  python3 ~/.claude/skills/firecrawl/scripts/filter_web_results.py --keywords "pricing,cost" --max-chars 5000

# Extract specific JSON fields from API output
python3 ~/.claude/skills/exa-search/scripts/exa_search.py "query" --json | \
  python3 ~/.claude/skills/firecrawl/scripts/filter_web_results.py --fields "title,url,text" --max-chars 3000

# Combine filters with stats
firecrawl scrape URL --only-main-content | \
  python3 ~/.claude/skills/firecrawl/scripts/filter_web_results.py --sections "API" --keywords "endpoint" --compact --stats
```

`filter_web_results.py` flags: `--sections`, `--keywords`, `--max-chars`, `--max-lines`, `--fields`, `--strip-links`, `--strip-images`, `--compact`, `--stats`.

Related token-saving options: `--only-main-content` on every scrape, `firecrawl map URL --search "topic"` with `--format links` before crawling, `--max-chars`, `exa_contents.py` with `--formats summary`, and the native `web_search_20260209` / `web_fetch_20260209` tools.
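The `--sections` filter above can be approximated in a few lines of Python — a minimal sketch of the idea (case-insensitive heading match, keep until the next heading of the same or higher level), not the actual `filter_web_results.py` implementation:

```python
import re

def filter_sections(markdown: str, sections: list[str]) -> str:
    """Keep only markdown sections whose heading matches one of `sections`.

    A kept section runs from a matching heading down to the next heading
    of the same or higher level. Matching is case-insensitive substring.
    """
    kept: list[str] = []
    keep_level = None  # heading level of the section we're inside, or None
    for line in markdown.splitlines():
        m = re.match(r"^(#{1,6})\s+(.*)", line)
        if m:
            level, title = len(m.group(1)), m.group(2)
            if any(s.lower() in title.lower() for s in sections):
                keep_level = level          # start keeping this section
            elif keep_level is not None and level <= keep_level:
                keep_level = None           # same/higher heading ends it
        if keep_level is not None:
            kept.append(line)
    return "\n".join(kept)

doc = """# Guide
intro text
## API
use the API
### Auth
token here
## Pricing
$10/mo
"""
print(filter_sections(doc, ["API"]))
```

The nesting rule matters: the `### Auth` subsection survives because it sits below the matched `## API` heading, while `## Pricing` closes the kept region.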
The native Claude API web tools require the beta header `anthropic-beta: code-execution-web-tools-2026-02-09`.

`firecrawl` CLI setup:

```bash
npm install -g firecrawl-cli && firecrawl login --api-key $FIRECRAWL_API_KEY
```

| Command | Purpose | Quick Example |
|---|---|---|
| `scrape` | Single page → markdown | `firecrawl scrape URL --only-main-content` |
| `crawl` | Entire site with progress | `firecrawl crawl URL --wait --progress` |
| `map` | Discover all URLs on a site | `firecrawl map URL --search "topic"` |
| `search` | Web search (+ optional scrape) | `firecrawl search "query" --scrape` |
Full flag details: `references/cli-reference.md`. For quick saves, use `fc-save`:

```bash
fc-save URL
# → Saves to ~/Desktop/Screencaps & Chats/Web-Scrapes/docs-example-com-api.md
```

`firecrawl_api.py` — run with `python3 ~/.claude/skills/firecrawl/scripts/firecrawl_api.py <command>`. Requires `FIRECRAWL_API_KEY` and `pip install firecrawl-py requests`.

| Command | Purpose | Quick Example |
|---|---|---|
| `search` | Web search with scraping | |
| `scrape` | Single URL with page actions | |
| `batch` | Multiple URLs concurrently | |
| `crawl` | Website crawling | |
| `map` | URL discovery | |
| `extract` | LLM-powered structured extraction | |
| `agent` | Autonomous extraction (no URLs needed) | `firecrawl_api.py agent "prompt"` |
| `parallel-agents` | Bulk agent queries (v2.8.0+) | |
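Under the hood the script talks to Firecrawl's REST API. A hedged sketch of a direct call follows — the endpoint path and field names are assumptions based on the v2-era API, not verified signatures; `references/python-api-reference.md` has the real ones:

```python
import json
import urllib.request

API_URL = "https://api.firecrawl.dev/v2/scrape"  # assumed endpoint path

def build_scrape_payload(url: str, only_main_content: bool = True) -> dict:
    # Field names mirror the CLI flags; assumed, not taken from the spec.
    return {
        "url": url,
        "formats": ["markdown"],
        "onlyMainContent": only_main_content,
    }

def scrape(url: str, api_key: str) -> dict:
    """POST a scrape request and return the parsed JSON response."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_scrape_payload(url)).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Separating payload construction from transport keeps the request shape testable without a network call or an API key.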
Agent models: `spark-1-fast`, `spark-1-mini`, `spark-1-pro`. Full details: `references/python-api-reference.md`.

DeepWiki — AI-generated docs for GitHub repos:

```bash
~/.claude/skills/firecrawl/scripts/deepwiki.sh <owner/repo> [section] [options]

# Overview
~/.claude/skills/firecrawl/scripts/deepwiki.sh karpathy/nanochat

# Browse sections
~/.claude/skills/firecrawl/scripts/deepwiki.sh langchain-ai/langchain --toc

# Specific section
~/.claude/skills/firecrawl/scripts/deepwiki.sh karpathy/nanochat 4.1-gpt-transformer-implementation

# Full dump for RAG
~/.claude/skills/firecrawl/scripts/deepwiki.sh openai/openai-python --all --save
```

For Twitter/X content, use `jina`:

```bash
jina https://x.com/username/status/123456
```

| Need | Best Tool | Why |
|---|---|---|
| Single page → markdown | `firecrawl scrape` | Cleanest output |
| Search + scrape in one shot | `firecrawl search --scrape` | Combined operation |
| Crawl entire site | `firecrawl crawl` | Link following + progress |
| Autonomous data finding | Agent API | No URLs needed |
| Semantic/neural search | Exa | AI-powered relevance |
| Find research papers | Exa | Academic index |
| Quick research answer | Exa | Citations + synthesis |
| Find similar pages | Exa | Competitive analysis |
| Claude API agent building | Native | Built-in dynamic filtering |
| Twitter/X content | `jina` | Only tool that works |
| GitHub repo docs | `deepwiki.sh` | AI-generated wiki |
| Anti-bot / Cloudflare bypass | | Local Turnstile solver |
| Element-level extraction | | Precision targeting, adaptive tracking |
| No API key scraping | | 100% local, no credentials |
| Site redesign resilience | | SQLite similarity matching |
```bash
firecrawl scrape https://example.com/page --only-main-content
# Or auto-save: fc-save URL
# Or to file: firecrawl scrape URL --only-main-content -o page.md
```

```bash
# Map first, then crawl relevant paths
firecrawl map https://docs.example.com --search "API"
firecrawl crawl https://docs.example.com --include-paths /api,/guides --wait --progress
```

```bash
firecrawl search "machine learning best practices 2026" --scrape --scrape-formats markdown
```

```bash
python3 ~/.claude/skills/firecrawl/scripts/firecrawl_api.py agent \
  "Compare pricing tiers for Firecrawl, Apify, and ScrapingBee"
```

Troubleshooting:

```bash
# Check status and credits
firecrawl --status && firecrawl credit-usage

# Re-authenticate
firecrawl logout && firecrawl login --api-key $FIRECRAWL_API_KEY

# Check API key
echo $FIRECRAWL_API_KEY
```

Other tips: fall back to `jina URL` when a scrape fails; use `--wait-for 3000` on slow, JavaScript-heavy pages; manage long-running jobs with `crawl-status`, `batch-status`, `crawl-cancel`, and `batch-cancel`; disable telemetry with `export FIRECRAWL_NO_TELEMETRY=1`.

| File | Contents |
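Transient failures are common when scraping, so it is worth wrapping calls in a retry with exponential backoff. A generic sketch (not part of the skill's scripts):

```python
import time

def with_retries(fn, attempts: int = 3, base_delay: float = 1.0):
    """Call fn(); on exception, retry with exponential backoff (1x, 2x, 4x...)."""
    for i in range(attempts):
        try:
            return fn()
        except Exception:
            if i == attempts - 1:
                raise  # out of attempts: propagate the last error
            time.sleep(base_delay * (2 ** i))

# Example with a flaky function that succeeds on the third try:
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient")
    return "ok"

print(with_retries(flaky, attempts=3, base_delay=0.01))  # → ok
```

In practice `fn` would be a closure around a `firecrawl_api.py` call or an HTTP request; keeping the wrapper generic means it also covers `jina` fallbacks.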
|---|---|
| `references/cli-reference.md` | Full CLI parameter reference (scrape, crawl, map, search, fc-save, jina, deepwiki) |
| `references/python-api-reference.md` | Full Python API script reference (all commands, SDK examples) |
| | Firecrawl Search API reference |
| | Agent API (spark models, parallel agents, webhooks) |
| | Page actions for dynamic content (click, write, wait, scroll) |
| | Brand identity extraction (colors, fonts, UI) |
```bash
python3 ~/.claude/skills/firecrawl/scripts/test_firecrawl.py --quick        # Quick validation
python3 ~/.claude/skills/firecrawl/scripts/test_firecrawl.py                # Full suite
python3 ~/.claude/skills/firecrawl/scripts/test_firecrawl.py --test scrape  # Specific test
```