# Web Research with x402 APIs
Access Exa (neural search) and Firecrawl (web scraping) through x402-protected endpoints.
## Setup

See rules/getting-started.md for installation and wallet setup.
## Quick Reference

| Task | Endpoint | Price | Best For |
|---|---|---|---|
| Neural search | https://stableenrich.dev/api/exa/search | $0.01 | Semantic web search |
| Find similar | https://stableenrich.dev/api/exa/find-similar | $0.01 | Pages similar to a URL |
| Extract text | https://stableenrich.dev/api/exa/contents | $0.002 | Clean text from URLs |
| Direct answers | https://stableenrich.dev/api/exa/answer | $0.01 | Factual Q&A |
| Scrape page | https://stableenrich.dev/api/firecrawl/scrape | $0.0126 | Single page to markdown |
| Web search | https://stableenrich.dev/api/firecrawl/search | $0.0252 | Search with scraping |
## When to Use What

| Scenario | Tool |
|---|---|
| General web search | WebSearch (free) or Exa ($0.01) |
| Semantic/conceptual search | Exa search |
| Find pages like X | Exa find-similar |
| Get clean text from URL | Exa contents |
| Scrape blocked/JS-heavy site | Firecrawl scrape |
| Search + scrape results | Firecrawl search |
| Quick fact lookup | Exa answer |
See rules/when-to-use.md for detailed guidance.
## Exa Neural Search

Semantic search that understands meaning, not just keywords:

```bash
npx agentcash fetch https://stableenrich.dev/api/exa/search -m POST -b '{
  "query": "startups building AI agents for customer support",
  "numResults": 10,
  "type": "neural"
}'
```
Options:
- `query` - Search query (required)
- `numResults` - Number of results (default: 10, max: 25)
- `type` - `"neural"` (semantic) or `"keyword"` (traditional)
- `includeDomains` - Only search these domains
- `excludeDomains` - Skip these domains
- `startPublishedDate` / `endPublishedDate` - Date range filter
Returns: List of URLs with titles, snippets, and relevance scores.
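Responses can be post-processed with standard tools like jq. A minimal sketch, assuming the response carries a `results` array of objects with `url`, `title`, and `score` fields; verify against a real response before relying on this shape:

```bash
# Sample response in the assumed shape; a real run would pipe
# the output of the `npx agentcash fetch` call instead.
response='{"results": [
  {"url": "https://a.example.com", "title": "Agents for support", "score": 0.92},
  {"url": "https://b.example.com", "title": "AI helpdesk tools", "score": 0.87}
]}'

# One "score url" line per result, best match first.
echo "$response" | jq -r '.results | sort_by(-.score) | .[] | "\(.score) \(.url)"'
```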
## Find Similar Pages

Find pages semantically similar to a reference URL:

```bash
npx agentcash fetch https://stableenrich.dev/api/exa/find-similar -m POST -b '{
  "url": "https://example.com/article-i-like",
  "numResults": 10
}'
```
Great for:
- Finding competitor products
- Discovering related content
- Expanding research sources
## Extract Text Content

Get clean, structured text from URLs:

```bash
npx agentcash fetch https://stableenrich.dev/api/exa/contents -m POST -b '{
  "urls": [
    "https://example.com/article1",
    "https://example.com/article2"
  ]
}'
```
Options:
- `urls` - Array of URLs to extract
- `text` - Include full text (default: true)
- `highlights` - Include key highlights
Cheapest option ($0.002) when you already have URLs and just need the content.
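When batching many URLs, the request body can be generated instead of hand-written. A minimal sketch — `build_contents_body` is a hypothetical helper, and it does not escape quotes or other special characters in URLs:

```bash
# Build a /api/exa/contents request body from any number of URLs.
build_contents_body() {
  local urls="" u
  for u in "$@"; do
    # Append each URL as a quoted JSON string, comma-separated.
    urls="${urls:+$urls, }\"$u\""
  done
  printf '{"urls": [%s]}' "$urls"
}

build_contents_body "https://example.com/article1" "https://example.com/article2"
```

The output can be passed straight to the CLI: `npx agentcash fetch https://stableenrich.dev/api/exa/contents -m POST -b "$(build_contents_body url1 url2)"`.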
## Direct Answers

Get factual answers to questions:

```bash
npx agentcash fetch https://stableenrich.dev/api/exa/answer -m POST -b '{"query": "What is the population of Tokyo?"}'
```
Returns a direct answer with source citations. Best for:
- Factual questions
- Quick lookups
- Verification of claims
## Firecrawl Scrape

Scrape a single page to clean markdown:

```bash
npx agentcash fetch https://stableenrich.dev/api/firecrawl/scrape -m POST -b '{"url": "https://example.com/page-to-scrape"}'
```
Options:
- `url` - Page to scrape (required)
- `formats` - Output formats: `["markdown", "html", "links"]`
- `onlyMainContent` - Skip nav/footer/ads (default: true)
- `waitFor` - Milliseconds to wait for JS to render
Advantages over WebFetch:
- Handles JavaScript-rendered content
- Bypasses common blocking
- Extracts main content only
- LLM-optimized markdown output
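The markdown can be pulled out of the response for downstream use. A sketch assuming the scraped content arrives in a top-level `markdown` field; some deployments nest it under `data`, so both are handled:

```bash
# Sample scrape response in the assumed shape.
response='{"markdown": "# Example Page\n\nMain content only."}'

# Take `.markdown` if present, otherwise fall back to `.data.markdown`.
md=$(echo "$response" | jq -r '.markdown // .data.markdown')
printf '%s\n' "$md" | head -n 1
```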
## Firecrawl Search

Web search with automatic scraping of results:

```bash
npx agentcash fetch https://stableenrich.dev/api/firecrawl/search -m POST -b '{
  "query": "best practices for react server components",
  "limit": 5
}'
```
Options:
- `query` - Search query (required)
- `limit` - Number of results (default: 5)
- `scrapeOptions` - Options passed to the scraper
Returns search results with full scraped content for each.
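A quick way to eyeball what came back — assuming the results arrive as a `data` array with `title`, `url`, and `markdown` fields, which may differ from the actual response shape:

```bash
# Sample response in the assumed shape.
response='{"data": [
  {"title": "RSC guide", "url": "https://a.example.com", "markdown": "Long article text"},
  {"title": "RSC patterns", "url": "https://b.example.com", "markdown": "More text"}
]}'

# One line per result: title, URL, and scraped-content length.
echo "$response" | jq -r '.data[] | "\(.title) | \(.url) | \(.markdown | length) chars"'
```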
## Workflows

### Deep Research

Search broadly first:

```bash
npx agentcash fetch https://stableenrich.dev/api/exa/search -m POST -b '{"query": "AI agents in healthcare 2024", "numResults": 15}'
```

Then expand from the best result:

```bash
npx agentcash fetch https://stableenrich.dev/api/exa/find-similar -m POST -b '{"url": "https://best-article-found.com"}'
```

Finally, extract clean text from the URLs worth keeping:

```bash
npx agentcash fetch https://stableenrich.dev/api/exa/contents -m POST -b '{"urls": ["url1", "url2", "url3"]}'
```
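The hand-off between searching and extracting can be scripted. A sketch assuming the search response is shaped `{"results": [{"url": ...}]}`; `next_contents_cmd` is a hypothetical helper, not part of the CLI:

```bash
# Turn an Exa search response (stdin) into the follow-up contents
# command for the top N URLs. Assumed shape: {"results": [{"url": ...}]}
next_contents_cmd() {
  local n="${1:-3}"
  local urls
  # Slice the first N results and collect their URLs as a JSON array.
  urls=$(jq -c "[.results[:$n] | .[].url]")
  printf "npx agentcash fetch https://stableenrich.dev/api/exa/contents -m POST -b '{\"urls\": %s}'\n" "$urls"
}

echo '{"results": [{"url": "https://a.example.com"}, {"url": "https://b.example.com"}]}' \
  | next_contents_cmd 2
```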
### Blocked Site Scraping

```bash
npx agentcash fetch https://stableenrich.dev/api/firecrawl/scrape -m POST -b '{"url": "https://blocked-site.com/article", "waitFor": 3000}'
```
## Cost Optimization
- Use Exa contents ($0.002) when you already have URLs
- Use WebSearch/WebFetch first (free) and fall back to x402 endpoints
- Batch URL extraction - pass multiple URLs to Exa contents
- Limit results - request only as many as needed
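To keep spend predictable, the per-call prices above can be folded into a small estimator. The endpoint labels below are shorthand of my own, not API identifiers:

```bash
# Estimate the cost of N calls to an endpoint, using the prices
# from the Quick Reference table.
estimate_cost() {
  local price
  case "$1" in
    exa-search|exa-find-similar|exa-answer) price=0.01 ;;
    exa-contents)                           price=0.002 ;;
    firecrawl-scrape)                       price=0.0126 ;;
    firecrawl-search)                       price=0.0252 ;;
    *) echo "unknown endpoint: $1" >&2; return 1 ;;
  esac
  awk -v p="$price" -v n="$2" 'BEGIN { printf "$%.4f\n", p * n }'
}

estimate_cost exa-search 100    # 100 neural searches
estimate_cost exa-contents 100  # vs. extracting 100 known URLs
```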