Guides use of ProjectDiscovery Katana for web crawling and spidering in security testing and recon workflows. Covers installation, standard vs headless mode, scope and rate limits, JSONL output, and piping from httpx or URL lists. Use when the user mentions Katana, projectdiscovery/katana, web crawling, spidering, endpoint discovery, attack surface mapping, or chaining crawlers in automation pipelines.
Install the skill:

```
npx skill4agent add agentic-reserve/blockint-skills katana-web-crawling
```

Confirm the binary with `katana -h`. Build from source (CGO enabled) or use Docker:

```
CGO_ENABLED=1 go install github.com/projectdiscovery/katana/cmd/katana@latest
docker pull projectdiscovery/katana:latest
docker run projectdiscovery/katana:latest -u https://example.com
```

Katana accepts a single URL (`-u https://a.com`), a file of URLs (`-list urls.txt`), or stdin (`echo https://example.com | katana`, or chained as `cat domains | httpx | katana`).

| Mode | When |
|---|---|
| Standard (default) | Fast; uses Go HTTP client; no full JS/DOM render—may miss post-render routes |
| Headless (`-headless`) | Browser context; better for JS-heavy apps; optional `-system-chrome` to use a locally installed Chrome |
Enable endpoint extraction from JavaScript files with `-js-crawl` (`-jc`); add `-jsluice` for deeper jsluice-based parsing of JS.

| Flag | Purpose |
|---|---|
| `-d` | Max crawl depth (default 3) |
| `-c` | Parallel fetchers |
| `-rl` | Max requests per second |
| `-ct` | Cap total crawl time (e.g. `-ct 2m`) |
| `-cs` / `-cos` | In-scope / out-of-scope URL regex |
| `-ns` | Disable default host scope if you need cross-host (use carefully) |
| `-iqp` | Ignore same path with different query strings |
| | Reduce near-duplicate paths |
| `-jsonl` | JSONL output for scripting |
| `-o` | Write to file |
| `-sr` | Store HTTP responses for review (disk use grows quickly) |
| `-proxy` | HTTP/SOCKS5 proxy |
| `-H` | Extra headers (auth, cookies) via repeated `-H 'Header: value'` |
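The `-jsonl` flag above makes downstream scripting straightforward. A minimal Python sketch, assuming each output line nests the crawled URL under `request.endpoint` (some builds emit a top-level `endpoint` instead; check your version's actual schema):

```python
import json

def extract_endpoints(jsonl_text):
    """Collect unique endpoint URLs from katana -jsonl output.

    Assumed field layout: the URL lives under request.endpoint,
    with a top-level "endpoint" as a fallback for other versions.
    """
    seen = []
    for line in jsonl_text.splitlines():
        line = line.strip()
        if not line:
            continue
        record = json.loads(line)
        url = record.get("request", {}).get("endpoint") or record.get("endpoint")
        if url and url not in seen:
            seen.append(url)
    return seen

# Hypothetical sample lines mimicking katana -jsonl output
sample = "\n".join([
    '{"request": {"method": "GET", "endpoint": "https://example.com/"}}',
    '{"request": {"method": "GET", "endpoint": "https://example.com/app.js"}}',
    '{"request": {"method": "GET", "endpoint": "https://example.com/"}}',
])
print(extract_endpoints(sample))
# -> ['https://example.com/', 'https://example.com/app.js']
```

Order-preserving dedupe keeps the crawl's discovery order, which is often useful when triaging results.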
```
katana -h
katana -u https://example.com -d 2 -silent
katana -u https://example.com -jsonl -o endpoints.jsonl
katana -list seeds.txt -d 3 -cs '.*\.example\.com.*' -rl 30 -jsonl
katana -u https://example.com -headless -d 2
cat domains.txt | httpx -silent | katana -jsonl -o crawl.jsonl
```

Troubleshooting: build failures usually mean `CGO_ENABLED=1` was not set; headless problems can often be worked around with `-system-chrome`; `-health-check` (`-hc`) runs a self-diagnostic.
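If a crawl was run without the query-string-ignoring option, similar collapsing can be approximated after the fact. A post-hoc sketch in Python (the grouping key is an assumption; adjust if you also want to ignore fragments or ports):

```python
from urllib.parse import urlsplit

def dedupe_ignore_query(urls):
    """Keep one URL per scheme://host/path, dropping repeats that
    differ only in their query string."""
    seen = set()
    kept = []
    for url in urls:
        parts = urlsplit(url)
        key = (parts.scheme, parts.netloc, parts.path)
        if key not in seen:
            seen.add(key)
            kept.append(url)
    return kept

urls = [
    "https://example.com/item?id=1",
    "https://example.com/item?id=2",
    "https://example.com/about",
]
print(dedupe_ignore_query(urls))
# -> ['https://example.com/item?id=1', 'https://example.com/about']
```

This keeps the first URL seen for each path, so the retained query string is whichever the crawler hit first.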