# Website Audit Skill

Use the `squirrel` CLI tool from squirrelscan to audit websites for SEO, technical, content, performance, and security issues.

squirrel supports macOS, Windows, and Linux. It simulates browsers and search crawlers, analyzes site structure and content against more than 140 rules, and outputs issue lists along with repair suggestions.
## Links
- Official Website: https://squirrelscan.com
- Documentation (including rule explanations): https://docs.squirrelscan.com
Rule documentation URL template: `https://docs.squirrelscan.com/rules/{rule_category}/{rule_id}`

Example: `https://docs.squirrelscan.com/rules/links/external-links`
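The template above can be sketched as a small helper for building rule-documentation links (the function name is ours, not part of the CLI):

```bash
# Build a rule-documentation URL from a rule category and rule id.
# rule_doc_url is an illustrative helper, not a squirrel command.
rule_doc_url() {
  echo "https://docs.squirrelscan.com/rules/$1/$2"
}

rule_doc_url links external-links
# → https://docs.squirrelscan.com/rules/links/external-links
```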
## What This Skill Can Do

Enables AI agents to audit websites across more than 20 categories and 140+ rules, including:
- SEO: Meta tags, title, description, canonical, Open Graph
- Technical: Broken links, redirect chains, page speed, mobile-friendliness
- Performance: Load time, resource usage, caching
- Content: Heading structure, image alt text, content analysis
- Security: Exposed secrets, HTTPS, security headers, mixed content
- Accessibility: Alt text, color contrast, keyboard navigation
- Usability: Form validation, error handling, user flows
- Links: Broken internal and external link detection
- E-E-A-T: Experience, expertise, authority, trustworthiness
- Mobile: Mobile-friendliness, responsiveness, touch elements
- Crawlability: robots.txt, sitemap.xml, etc.
- Schema: Schema.org, structured data, rich snippets
- Legal: Compliance with privacy policies, terms of service, etc.
- Social: Open Graph, Twitter Cards, and schema validation
- URL Structure: Length, hyphens, keywords
- Keywords: Keyword stuffing detection
- Images: Alt text, contrast, size, format
- Local SEO: NAP consistency, geographic metadata
- Videos: VideoObject schema, accessibility
The audit will crawl the site, analyze pages against rules, and generate a report including:
- Overall health score (0–100)
- Breakdown by category (Core SEO, Technical SEO, Content, Security, etc.)
- Specific issues and affected URLs
- Broken link list
- Actionable improvement suggestions
## When to Use

Use this skill in the following scenarios:
- Analyze website health
- Troubleshoot technical SEO issues
- Fix the various issues mentioned above
- Check for broken links
- Verify meta tags and structured data
- Generate site audit reports
- Compare health scores before and after website revamps
- Improve performance, accessibility, SEO, security, etc.
## Prerequisites

This skill depends on the `squirrel` CLI, which must be installed and on your PATH.
### Installation (macOS / Linux)

```bash
curl -fsSL https://squirrelscan.com/install | bash
```
This will:

- Download the latest binary
- Install it to `~/.local/share/squirrel/releases/{version}/`
- Create a symlink at `~/.local/bin/squirrel`
- Initialize configuration at `~/.squirrel/settings.json`

If `squirrel` is not in your PATH, add this to your shell configuration:
```bash
export PATH="$HOME/.local/bin:$PATH"
```
### Windows Installation

PowerShell:

```powershell
irm https://squirrelscan.com/install.ps1 | iex
```
This downloads the binary, installs it, and adds it to your PATH. If you are using CMD, you may need to restart the terminal for the PATH changes to take effect.
### Verify Installation
## Configuration

Running `squirrel init` in your project directory generates a project configuration file.

Each project should have a unique project name (defaults to the name of the audited site) so that multiple audits can be distinguished in the database:

```bash
squirrel init --project-name my-project
```

Or:

```bash
squirrel config set project.name my-project
```

If the current directory has no project configuration file, first run `squirrel init` and specify the project name with `--project-name` (it can also be inferred).

The project name is used for database identification and is stored in the project configuration file.
## Usage

### Overview

There are three subcommands, and all results are written to the local project database:

- `crawl`: Execute or resume crawling
- `analyze`: Analyze crawled results
- `report`: Output reports in a chosen format (llm, text, console, html, etc.)

`squirrel audit` is a wrapper that runs the three steps above in sequence:

```bash
squirrel audit https://example.com --format llm
```

Prefer `--format llm`: a compact, complete output format designed for LLMs.
### Selecting Audit Targets

- If the user provides no URL: infer candidate sites from the current directory, environment variables (e.g., Vercel projects), memory, or references in code.
- If a local dev server can be started in the current directory: audit the local site.
- If multiple auditable sites are found: ask the user to select one.
- If no site can be inferred: ask the user for the URL to audit.

Prefer auditing live sites, as they better reflect real performance and rendering issues. If both a local and a live site are available, prompt the user to choose and recommend the live site. After identifying issues in the live-site audit, fix them in the local code.
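The fallback order above can be sketched as a small helper. Everything here is an assumption for illustration: the `AUDIT_URL` variable, the function name, and port 3000 as the dev-server port are not part of the CLI.

```bash
# Choose an audit target: a user-supplied URL wins; otherwise probe a
# local dev server. AUDIT_URL, pick_target, and port 3000 are all
# assumptions for this sketch.
pick_target() {
  if [ -n "${AUDIT_URL:-}" ]; then
    echo "$AUDIT_URL"                 # user-supplied live URL
  elif curl -fsS -o /dev/null --max-time 2 "http://localhost:3000" 2>/dev/null; then
    echo "http://localhost:3000"      # running local dev server
  fi                                  # otherwise: prints nothing
}
```

If `pick_target` prints nothing, fall back to asking the user for a URL to audit.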
### When Implementing Fixes

- Split large-scale fixes into parallel subtasks and use subagents to accelerate the work.
- After the fixes are complete, run typecheck/format tools (e.g., ruff, biome, tsc) on the changed code if they are available in the environment.
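The "if available in the environment" check can be sketched like this (the helper name is ours and the tool invocations are only examples; use whatever the project provides):

```bash
# Run a command only when its tool is installed; skip quietly otherwise.
# run_if_present is an illustrative helper, not part of squirrel.
run_if_present() {
  if command -v "$1" >/dev/null 2>&1; then
    "$@"
  else
    echo "skipping $1 (not installed)" >&2
  fi
}

run_if_present ruff --version
run_if_present tsc --version
```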
### Basic Workflow

1. Execute the audit (writes to the database and prints to the terminal)
2. Export the report in the desired format

```bash
# 1. Execute audit (default console output)
squirrel audit https://example.com

# 2. Export as LLM format
squirrel report <audit-id> --format llm
```
### Common Options

Crawl more pages:

```bash
squirrel audit https://example.com --max-pages 200
```

Ignore the cache and force a full recrawl:

```bash
squirrel audit https://example.com --refresh
```

Resume an interrupted crawl:

```bash
squirrel audit https://example.com --resume
```

Verbose output for debugging:

```bash
squirrel audit https://example.com --verbose
```
## Command Options

### audit

| Option | Alias | Description | Default |
|---|---|---|---|
| `--format` | | Output format: console, text, json, html, markdown, llm | console |
| `--max-pages` | | Maximum number of pages to crawl (max 500) | 500 |
| `--refresh` | | Ignore cache and fully recrawl | false |
| `--resume` | - | Resume interrupted crawl | false |
| `--verbose` | | Detailed output | false |
| | - | Debug logs | false |
### report

| Option | Alias | Description |
|---|---|---|
| `--format` | | console, text, json, html, markdown, xml, llm |
## Output Formats

- console (default): Colorful, readable output with progress.
- llm: Compact XML/text hybrid designed for LLMs, saving roughly 40% of tokens compared with verbose XML. It includes:
  - Summary: health score and core metrics
  - Issues grouped by category (Core SEO, Technical, Content, Security, etc.)
  - Broken link list (internal + external)
  - Improvement suggestions sorted by priority
## Examples

### 1. Quick Site Audit (LLM Output)

```bash
squirrel audit https://squirrelscan.com --format llm
```

### 2. Deep Audit for Large Sites

```bash
squirrel audit https://myblog.com --max-pages 500 --format llm
```

### 3. Re-audit After a Website Revamp (Ignore Cache)

```bash
squirrel audit https://example.com --refresh --format llm
```

### 4. Two-Step Workflow (Reuse an Existing Audit)

```bash
squirrel audit https://example.com
# Note the printed audit-id, e.g., a1b2c3d4
squirrel report a1b2c3d4 --format llm
```
## Output

After completing the audit and fixes, give the user a summary of all changes made.
## Troubleshooting

### `squirrel`: command not found

- Install: `curl -fsSL https://squirrelscan.com/install | bash`
- Add to PATH: `export PATH="$HOME/.local/bin:$PATH"`
- Verify that `squirrel` now resolves in a new shell
### Permission Errors

```bash
chmod +x ~/.local/bin/squirrel
```
### Crawl Timeouts or Slow Speed

Large sites can take a while to crawl; use `--verbose` to check progress:

```bash
squirrel audit https://example.com --format llm --verbose
```
### Invalid URL

Always include the protocol (`http://` or `https://`):

```bash
# ✗ Incorrect
squirrel audit example.com

# ✓ Correct
squirrel audit https://example.com
```
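When the URL comes from user input, the scheme check can be sketched as a tiny pre-flight helper (the function name is ours, not a squirrel feature):

```bash
# Prepend https:// when the scheme is missing so the CLI gets a valid URL.
# normalize_url is an illustrative helper, not part of squirrel.
normalize_url() {
  case "$1" in
    http://*|https://*) echo "$1" ;;
    *) echo "https://$1" ;;
  esac
}

normalize_url example.com          # → https://example.com
normalize_url https://example.com  # → https://example.com (unchanged)
```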
## Workflow

1. Crawl: Discover and fetch pages from the base URL
2. Analyze: Run audit rules on each page
3. External Links: Check the availability of external links
4. Report: Generate an LLM-optimized report

Audit results are saved in the local database and can later be exported in different formats using `squirrel report`.
## Additional Resources