# Chrome DevTools Trace Audit
Analyze a Chrome DevTools Performance trace and produce a comprehensive performance audit report.
## Usage

`/trace-audit <path-to-trace.json>`
The argument is the absolute path to a Chrome DevTools trace JSON file (exported from the Performance panel).
## Workflow
Follow these steps in order. Use parallel tool calls wherever noted.
### Step 1 — Validate the trace file
Read the first 100 lines of the file using the Read tool. Confirm it is a valid Chrome DevTools trace by checking for:
- A top-level object with a `traceEvents` array, or a bare JSON array starting with `[`
- Event objects with `name`, `ph`, `ts`, and `pid` fields
- Presence of metadata events such as `process_name` or `TracingStartedInBrowser`
If validation fails, tell the user the file doesn't appear to be a Chrome DevTools trace and stop.
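Step 1's checks could be sketched in Python roughly as follows. This is an illustrative equivalent of the Read-tool workflow, not the command's actual implementation; the probe size and the head-of-file string matching (rather than parsing the full JSON) are assumptions made for the sketch:

```python
def validate_trace(path, probe_bytes=1 << 20):
    """Heuristically check that `path` looks like a Chrome DevTools trace.

    Reads only the head of the file, since traces can be hundreds of MB.
    """
    with open(path, "rb") as f:
        head = f.read(probe_bytes).decode("utf-8", errors="replace").lstrip()
    if head.startswith("{"):
        # Object form: {"traceEvents": [...], "metadata": {...}}
        return '"traceEvents"' in head
    if head.startswith("["):
        # Bare-array form: [{"name": ..., "ph": ..., "ts": ...}, ...]
        return all(key in head for key in ('"name"', '"ph"', '"ts"'))
    return False
```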
### Step 2 — Extract metadata
Use Grep on the trace file to extract (run these in parallel):
- Site URL — grep for `TracingStartedInBrowser` or `navigationStart` or `CommitLoad` and look for a URL in the event's `args` payload
- Process names — grep for `process_name` or `thread_name` metadata events to identify renderer, browser, and GPU processes
- Trace time range — grep for the first and last `ts` values to compute trace duration (timestamps are in microseconds)
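Equivalent logic for the metadata pass, sketched in Python over already-parsed events (the `process_name` metadata shape matches Chrome's documented trace format; the rest is an illustrative sketch, not the command's actual Grep-based implementation):

```python
def trace_metadata(events):
    """Extract process names and overall duration from parsed trace events.

    Process names come from `process_name` metadata events (`ph` == "M");
    duration is the span between the first and last timestamps, which
    Chrome records in microseconds.
    """
    processes = {
        e["pid"]: e["args"]["name"]
        for e in events
        if e.get("name") == "process_name" and "args" in e
    }
    stamps = [e["ts"] for e in events if e.get("ts", 0) > 0]
    duration_ms = (max(stamps) - min(stamps)) / 1000 if stamps else 0.0
    return processes, duration_ms
```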
### Step 3 — Run detection passes
Refer to the accompanying patterns reference for the full set of patterns and thresholds. Run all detection categories in parallel using Grep. For each category:
- Use the specified grep pattern on the trace file
- Collect matching lines with surrounding context where helpful (e.g. grep's `-A`/`-B` or `-C` options)
- Count matches and extract durations/values from the matched JSON
The detection categories are (all `dur` values in microseconds):
- Long Tasks (`RunTask` events with dur > 50000)
- Layout Thrashing (interleaved `UpdateLayoutTree` → `Layout` pairs)
- Forced Reflows (`Layout` events with a script stack trace in `args.beginData`)
- rAF Ticker Loops (`FireAnimationFrame` frequency)
- Style Recalc Storms (`UpdateLayoutTree` events with dur > 5000)
- Paint Storms (`Paint` events with dur > 3000)
- GC Pressure (`MajorGC` / `MinorGC` events)
- CLS (`LayoutShift` cumulative score)
- INP (`EventTiming` max duration)
- Network Errors (`ResourceReceiveResponse` events with statusCode >= 400)
- Redundant Fetches (same URL fetched multiple times)
- Script Eval (`EvaluateScript` / `v8.compile` events with dur > 50000)
- Long Animation Frames (`LongAnimationFrame` events)
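Each detection pass reduces to the same shape: filter events by `name`, then by a `dur` threshold. A minimal sketch for the Long Tasks category (the in-memory filter stands in for the Grep-based pass; 50,000 µs matches the 50 ms long-task definition):

```python
def find_long_tasks(events, threshold_us=50_000):
    """Flag `RunTask` events longer than 50 ms (trace `dur` is in µs).

    The other categories follow the same filter shape with different
    event names and thresholds.
    """
    return [
        e for e in events
        if e.get("name") == "RunTask" and e.get("dur", 0) > threshold_us
    ]
```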
### Step 4 — Aggregate findings
For each detection category:
- Compute total count of flagged events
- Extract the worst offender (max duration or highest score)
- Classify severity: Critical (red) or Warning (yellow) based on the thresholds in the patterns reference
- Skip categories with zero findings
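The aggregation per category might look like this (the Critical/Warning cutoff parameters here are hypothetical placeholders, not the reference's real thresholds):

```python
def summarize(category, flagged, critical_us, warning_us):
    """Aggregate one category's flagged events into a report row.

    NOTE: severity cutoffs are illustrative placeholders; the real
    values live in the patterns reference.
    """
    if not flagged:
        return None  # zero-finding categories are skipped entirely
    worst = max(flagged, key=lambda e: e.get("dur", 0))
    dur = worst.get("dur", 0)
    severity = "Critical" if dur >= critical_us else "Warning"
    return {
        "category": category,
        "count": len(flagged),
        "worst_dur_ms": dur / 1000,
        "severity": severity,
    }
```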
### Step 5 — Identify timeline hotspots
Group flagged events by timestamp into time windows (e.g., 500ms buckets). Identify windows where multiple issue categories overlap — these are hotspot ranges that represent the most problematic sections of the trace.
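The bucketing above can be sketched as follows (500 ms buckets over microsecond timestamps; the minimum overlap of two categories is an assumed cutoff for what counts as a hotspot):

```python
from collections import defaultdict

def hotspots(flagged_by_category, bucket_us=500_000, min_overlap=2):
    """Bucket flagged events into 500 ms windows and keep the windows
    where at least `min_overlap` distinct issue categories overlap."""
    windows = defaultdict(set)
    for category, events in flagged_by_category.items():
        for e in events:
            windows[e["ts"] // bucket_us].add(category)
    return {
        bucket * bucket_us: sorted(cats)  # window start ts (µs) -> categories
        for bucket, cats in sorted(windows.items())
        if len(cats) >= min_overlap
    }
```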
### Step 6 — Generate report
Output the report using the structure defined in the report template reference. The report should be:
- Actionable — every issue links to a concrete fix
- Scannable — use tables, severity badges, and clear headings
- Complete — cover all categories, even if just to say "no issues found"
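As one way to keep the output scannable, aggregated findings can be rendered as a markdown table with severity badges (the emoji badges and column layout here are illustrative choices, not the template's mandated structure):

```python
def render_rows(summaries):
    """Render aggregated findings as markdown table rows with severity badges."""
    badge = {"Critical": "🔴", "Warning": "🟡"}
    lines = [
        "| Severity | Category | Count | Worst (ms) |",
        "|---|---|---|---|",
    ]
    for s in summaries:
        lines.append(
            f"| {badge[s['severity']]} {s['severity']} | {s['category']} "
            f"| {s['count']} | {s['worst_dur_ms']:.1f} |"
        )
    return "\n".join(lines)
```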