Analyzes and optimizes SQL/NoSQL queries for performance. Use when reviewing query performance, optimizing slow queries, analyzing EXPLAIN output, suggesting indexes, identifying N+1 problems, recommending query rewrites, or improving database access patterns. Supports PostgreSQL, MySQL, SQLite, MongoDB, Redis, DynamoDB, and Elasticsearch.
```bash
npx skill4agent add galihcitta/dotclaudeskills optimizing-queries
```

Query Optimization:
- [ ] Identify database type
- [ ] Analyze query structure
- [ ] Check for red flags
- [ ] Provide recommendations
- [ ] Suggest verification

Red flags: `SELECT *`, `WHERE function(column)`, `LIKE '%pattern'`, `OR`, large `OFFSET`, `NOT IN`, missing `LIMIT`, `$where` (MongoDB), `KEYS *` (Redis)

```sql
SELECT * FROM orders WHERE customer_email = 'user@example.com';
```
## Issue [HIGH]
Full table scan on `orders` - no index on `customer_email`
## Impact
Query time grows linearly with table size; causes lock contention
## Solution
```sql
CREATE INDEX idx_orders_customer_email ON orders(customer_email);
```
## Verification
```sql
EXPLAIN ANALYZE SELECT * FROM orders WHERE customer_email = 'user@example.com';
-- Should show "Index Scan" instead of "Seq Scan"
```

```python
users = db.query("SELECT * FROM users")
for user in users:
    orders = db.query(f"SELECT * FROM orders WHERE user_id = {user.id}")
```
## Issue [HIGH]
N+1 query pattern - executes 1 + N queries instead of 1
## Impact
100 users = 101 queries; network latency multiplied
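The same collapse can also be done in application code by batching the child query with `IN`. A sketch, assuming the same hypothetical `db.query(sql, params)` helper used in the example above (not a real library API):

```python
from collections import defaultdict

def users_with_orders(db):
    # 2 queries total, regardless of user count.
    users = db.query("SELECT * FROM users")
    ids = [u.id for u in users]
    placeholders = ", ".join("?" * len(ids))
    orders = db.query(
        f"SELECT * FROM orders WHERE user_id IN ({placeholders})", ids)
    # Group child rows in memory instead of issuing one query per user.
    by_user = defaultdict(list)
    for o in orders:
        by_user[o.user_id].append(o)
    return [(u, by_user[u.id]) for u in users]
```

This avoids the row-duplication a `LEFT JOIN` produces when users have many orders, at the cost of a second round trip.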
## Solution
```sql
SELECT u.*, o.* FROM users u
LEFT JOIN orders o ON u.id = o.user_id;
```
## Verification
Check query count before/after; should reduce from N+1 to 1.

```sql
SELECT * FROM config WHERE key = 'app_version';
```
## No Issues Detected
This query on a small lookup table (typically <100 rows) is acceptable.
Adding an index would add write overhead without meaningful read benefit.
## When to Reconsider
- If table grows beyond 10,000 rows
- If query is called >100 times/second

```sql
SELECT * FROM large_table WHERE status = ?;
```
## Issue [MEDIUM]
Preliminary analysis without schema information
## Potential Issues
1. `SELECT *` - may fetch unnecessary columns
2. `status` column - if low cardinality (e.g., 5 values), index may not help
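Point 2 can be checked by measuring the column's selectivity before recommending an index. A minimal sketch using Python's built-in `sqlite3` (table and column names are taken from the example above; for PostgreSQL or MySQL, swap in the appropriate driver):

```python
import sqlite3

def status_selectivity(conn, table="large_table", column="status"):
    """Distinct-to-total ratio; a very low value (e.g. < 0.01) means an
    index on the column will rarely narrow the scan much."""
    cur = conn.cursor()
    # Identifiers are interpolated directly: fine for a local diagnostic,
    # never for untrusted input.
    (distinct,) = cur.execute(
        f"SELECT COUNT(DISTINCT {column}) FROM {table}").fetchone()
    (total,) = cur.execute(f"SELECT COUNT(*) FROM {table}").fetchone()
    return distinct / total if total else 0.0
```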
## Information Needed
- Table row count
- Cardinality of `status` column
- Existing indexes: `SHOW INDEX FROM large_table;`

```javascript
db.orders.find({ customer_email: "user@example.com" })
```
## Issue [HIGH]
COLLSCAN on `orders` - no index on `customer_email`
## Impact
Query scans entire collection; O(n) performance
## Solution
```javascript
db.orders.createIndex({ customer_email: 1 })
```
## Verification
```javascript
db.orders.find({ customer_email: "..." }).explain("executionStats")
// Should show "stage": "IXSCAN" instead of "COLLSCAN"
```

## Issue [HIGH/MEDIUM/LOW]
[Specific problem identified]
## Impact
[Performance/resource cost]
## Solution
[Concrete fix with code]
## Verification
[EXPLAIN command or metric to confirm]

| Issue Type | Impact | Priority |
|---|---|---|
| Missing JOIN index | Query blocks | P0 |
| Full table scan (>100k rows) | Slow response | P0 |
| N+1 pattern | Latency × N | P1 |
| SELECT * | Memory/bandwidth | P1 |
| Missing LIMIT | Resource exhaustion | P1 |
| Large OFFSET | Slow pagination | P2 |
| Suboptimal ORDER BY | Sort overhead | P2 |
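Several of these red flags can be caught by a plain text scan. Below is a minimal sketch of that idea; the patterns and priority mapping mirror the table above but are illustrative only, not the actual rules of `scripts/analyze_query.py`:

```python
import re

# Illustrative patterns; priorities follow the table above.
RED_FLAGS = [
    (r"SELECT\s+\*", "SELECT *", "P1"),
    (r"LIKE\s+'%", "leading-wildcard LIKE (full scan)", "P0"),
    (r"\bOFFSET\s+\d{3,}\b", "large OFFSET", "P2"),
]

def scan(sql: str) -> list[tuple[str, str]]:
    """Return (description, priority) pairs for red flags found in sql."""
    findings = [(desc, prio) for pat, desc, prio in RED_FLAGS
                if re.search(pat, sql, re.IGNORECASE)]
    # Missing LIMIT is a negative check: flag SELECTs with no LIMIT at all.
    if (re.search(r"^\s*SELECT\b", sql, re.IGNORECASE)
            and not re.search(r"\bLIMIT\b", sql, re.IGNORECASE)):
        findings.append(("missing LIMIT", "P1"))
    return findings
```

Structural issues (missing JOIN indexes, N+1 patterns) need schema and application context that a regex pass cannot see, which is what the scripts below are for.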
- `references/sql-patterns.md`: Index, Query Rewrite, Join, Aggregation
- `references/nosql-patterns.md`: MongoDB, Redis, DynamoDB, Elasticsearch
- `references/explain-analysis.md`: PostgreSQL, MySQL, MongoDB
- `references/common-scenarios.md`: Slow pagination, N+1, Dashboard, Bulk
- `references/orm-patterns.md`: Django, SQLAlchemy, ActiveRecord, Prisma

```bash
python scripts/analyze_query.py "SELECT * FROM orders WHERE ..."
python scripts/analyze_query.py --file query.sql --json

python scripts/suggest_index.py "SELECT ... FROM orders WHERE status = 'pending'"

python scripts/parse_explain.py --pg explain_output.txt
python scripts/parse_explain.py --mysql explain_output.txt

python scripts/diff_explain.py --pg before.txt after.txt
python scripts/diff_explain.py --mysql before.txt after.txt --json

python scripts/parse_slow_log.py --pg /var/log/postgresql.log --top 20
python scripts/parse_slow_log.py --mysql slow-query.log --json

python scripts/analyze_schema.py schema.sql

python scripts/detect_orm_issues.py app.py
python scripts/detect_orm_issues.py --dir ./models

python scripts/check_migration.py --pg migration.sql
python scripts/check_migration.py --mysql migration.sql
```

Before/after comparison workflow (`analyze_query.py`, `suggest_index.py`, `diff_explain.py`):

```bash
# Save before state
psql -c "EXPLAIN (ANALYZE, BUFFERS) SELECT ..." > before.txt
# Apply optimization (add index, rewrite query, etc.)
# Save after state
psql -c "EXPLAIN (ANALYZE, BUFFERS) SELECT ..." > after.txt
# Compare
python scripts/diff_explain.py --pg before.txt after.txt
```

`detect_orm_issues.py --dir ./`, `analyze_schema.py`, `check_migration.py`

```sql
-- PostgreSQL
EXPLAIN (ANALYZE, BUFFERS) SELECT ...;
-- MySQL
EXPLAIN ANALYZE SELECT ...;
-- SQLite
EXPLAIN QUERY PLAN SELECT ...;
```

```javascript
// MongoDB
db.collection.find({...}).explain("executionStats")
```

```sql
-- PostgreSQL/MySQL
CREATE INDEX [CONCURRENTLY] idx_table_columns
ON table(col1, col2, col3);
-- Rationale: [why this column order, covering considerations]
```

```javascript
// MongoDB (ESR: Equality, Sort, Range)
db.collection.createIndex({ equality_field: 1, sort_field: -1, range_field: 1 })
```

"To give precise recommendations, I need the table schema. Could you share:
- Column data types
- Existing indexes
- Approximate row counts

Meanwhile, here's what I can infer from the query..."
"Run this command and share the output:

```sql
EXPLAIN (ANALYZE, BUFFERS) <their query>;
```

This will show actual vs estimated rows and scan types."
"Which database are you using? (PostgreSQL, MySQL, SQLite, MongoDB, etc.) The optimization strategies differ significantly between them."
- `analyze_query.py`
- `parse_explain.py`
- `FileNotFoundError`
- `json.JSONDecodeError`
- `references/sql-patterns.md`