# Analyze application logs
Read and analyze structured wide-event logs from the local log directory to debug errors, investigate performance issues, and understand application behavior.
## When to use
- User asks to debug an error, investigate a bug, or understand why something failed
- User asks about request patterns, slow endpoints, or error rates
- User asks "what happened" or "what's going on" with their application
- User asks to analyze logs, check recent errors, or review application behavior
- User mentions a specific error message or status code they're seeing
## Finding the logs
Logs are written by evlog's file system drain as `.jsonl` files, organized by date.
Format detection: the drain supports two modes:
- NDJSON (the default): one compact JSON object per line. Parse line by line.
- Pretty: multi-line indented JSON per event. Parse by reading the entire file and splitting on the boundaries between top-level objects (e.g. `JSON.parse('[' + content.replace(/\}\n\{/g, '},{') + ']')`), or use a streaming JSON parser.

Check the first few bytes of the file to detect the format: compact NDJSON has the first key immediately after the opening brace, while pretty-printed output has a newline and indentation after it.
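The detect-and-parse steps above can be sketched as follows. This is a minimal sketch: it assumes every event is a top-level JSON object, and the pretty-mode splitting mirrors the regex replacement shown above; the real drain output may differ.

```typescript
// Minimal sketch: detect NDJSON vs. pretty-printed drain output and parse
// either mode. Assumes every event is a top-level JSON object.
function parseLogFile(content: string): unknown[] {
  const trimmed = content.trimStart()
  if (trimmed === '') return []
  // Pretty-printed output puts a newline right after the opening brace;
  // compact NDJSON has the first key immediately after it.
  const isPretty = /^\{\s*\n/.test(trimmed)
  if (!isPretty) {
    // NDJSON: one compact JSON object per line.
    return trimmed
      .split('\n')
      .filter((line) => line.trim() !== '')
      .map((line) => JSON.parse(line))
  }
  // Pretty: split on boundaries between top-level objects, then parse as one array.
  return JSON.parse('[' + trimmed.replace(/\}\n\{/g, '},{') + ']')
}
```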
Search order: check these locations relative to the project root, using glob patterns such as:

```
.evlog/logs/*.jsonl
*/.evlog/logs/*.jsonl
apps/*/.evlog/logs/*.jsonl
```

The first pattern is the default location; the others cover app directories in monorepos.
Files are named by date. Start with the most recent file.
## If no logs are found
The file system drain may not be enabled. Guide the user to set it up:
```typescript
import { createFsDrain } from 'evlog/fs'

// Nuxt / Nitro: server/plugins/evlog-drain.ts
export default defineNitroPlugin((nitroApp) => {
  nitroApp.hooks.hook('evlog:drain', createFsDrain())
})

// Hono / Express / Elysia: pass in middleware options
app.use(evlog({ drain: createFsDrain() }))

// Fastify: pass in plugin options
await app.register(evlog, { drain: createFsDrain() })

// NestJS: pass in module options
EvlogModule.forRoot({ drain: createFsDrain() })

// Standalone: pass to initLogger
initLogger({ drain: createFsDrain() })
```
After setup, the user needs to trigger some requests to generate logs, then re-analyze.
## Log format
Each line is a self-contained JSON object (wide event). Key fields:
| Field | Type | Description |
|---|---|---|
| | | ISO 8601 timestamp |
| | | Log level |
| | | Service name |
| | | , , etc. |
| | | HTTP method |
| | | Request path |
| | | HTTP response status code |
| | | Request duration (string with units) |
| | | Unique request identifier |
| | | Structured error details (message, type, stack trace, etc.) |
| | | Human-readable explanation of what went wrong |
| | | Suggested fix for the error |
| | | `"client"` for browser logs, absent for server logs |
| | | Parsed browser/OS/device info |
All other fields are application-specific context added by the application (user info, payment details, and so on).
## How to analyze
### Step 1: Read the most recent log file

Read the latest log file. Each line is one JSON event; parse each line independently.
### Step 2: Identify the relevant events

Filter based on the user's question:
- Errors: look for `level === "error"` or a structured `error` field
- Specific endpoint: match on `path`
- Slow requests: parse `duration` and filter high values
- Specific user/action: match on application-specific fields
- Client-side issues: filter by `source === "client"`
- Time range: compare timestamps
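The filters above can be sketched as small predicates. The field names (`level`, `error`, `path`, `duration`, `source`) follow this document, and millisecond duration units are an assumption; the real schema may differ.

```typescript
// Minimal sketch of the event filters. Field names follow this document;
// the real event schema may differ.
type WideEvent = Record<string, unknown>

// Durations are strings with units (assumed milliseconds, e.g. "123ms");
// extract the numeric part for comparisons.
function durationMs(event: WideEvent): number | null {
  if (typeof event.duration !== 'string') return null
  const match = event.duration.match(/^([\d.]+)/)
  return match ? parseFloat(match[1]) : null
}

const isError = (e: WideEvent) => e.level === 'error' || e.error !== undefined
const isClient = (e: WideEvent) => e.source === 'client'
const isSlow = (e: WideEvent, thresholdMs: number) => {
  const ms = durationMs(e)
  return ms !== null && ms > thresholdMs
}
```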
### Step 3: Analyze and explain

For each relevant event:
- What happened: summarize the method, path, status, and duration
- Why it failed (errors): read the error message, type, and stack trace
- How to fix: check the event for a suggested fix
- Context: examine application-specific fields for business context (user info, payment details, etc.)
- Patterns: look for recurring errors, degrading performance, or correlated failures
## Analysis patterns
### Find all errors

- Filter: `level === "error"`
- Group by: `error.message` or `path`
- Look for: recurring patterns, common failure modes
### Find slow requests

- Filter: parse the duration string, compare against a threshold (e.g. 1000ms)
- Sort by: duration descending
- Look for: specific endpoints, time-of-day patterns
### Trace a specific request

- Filter: `requestId === "the-request-id"`
- Result: a single wide event with all context for that request
### Error rate by endpoint

- Group events by: `path`
- Count: total events vs. error events per path
- Look for: endpoints with high error ratios
### Client vs. server errors

- Split by: `source === "client"` vs. no `source` field
- Compare: error patterns between client and server
- Look for: client errors without corresponding server errors (network issues)
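The two aggregation patterns above can be sketched as follows. Field names (`path`, `level`, `source`) are taken from this document; the real schema may differ.

```typescript
// Minimal sketch of the aggregation patterns. Field names follow this
// document; the real event schema may differ.
type WideEvent = Record<string, unknown>

// Error rate by endpoint: count total vs. error events per path.
function errorRateByPath(events: WideEvent[]): Map<string, { total: number; errors: number }> {
  const stats = new Map<string, { total: number; errors: number }>()
  for (const e of events) {
    const path = typeof e.path === 'string' ? e.path : '(unknown)'
    const entry = stats.get(path) ?? { total: 0, errors: 0 }
    entry.total += 1
    if (e.level === 'error') entry.errors += 1
    stats.set(path, entry)
  }
  return stats
}

// Client events carry source === "client"; server events have no source field.
function splitBySource(events: WideEvent[]) {
  return {
    client: events.filter((e) => e.source === 'client'),
    server: events.filter((e) => e.source === undefined),
  }
}
```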
## Important notes
- Each line is a complete, self-contained event. Unlike traditional logs, you don't need to correlate multiple lines — one line has all the context for one request.
- The structured error, explanation, and suggested-fix fields are evlog-specific. When present, they provide the most actionable information.
- Duration values are strings with units (e.g. 1000ms); parse the numeric part for comparisons.
- Events with `source === "client"` originated from browser-side logging and were sent to the server via the transport endpoint.
- Log files are gitignored automatically; they exist only on the local machine or server where the app runs.