Generate usage insight reports based on local Cursor Agent session records; only invoke when the user explicitly requests phrases like 'generate session analysis report' or 'create an Agent usage insight'.
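Before summarization, the scanner (`scan.ts`, invoked below) has to break long session transcripts into pieces small enough for the per-chunk summarization prompt. The skill's actual chunking logic is not shown here; the following is a minimal sketch under assumed behavior, with a hypothetical function name and chunk size:

```typescript
// Sketch only: splitting a long session transcript into chunks before
// the per-chunk summarization prompt. `chunkTranscript` and the 8000-char
// limit are illustrative assumptions, not the skill's actual implementation.

/** Split a transcript into chunks of at most `maxChars`, breaking on line boundaries. */
function chunkTranscript(text: string, maxChars = 8000): string[] {
  const chunks: string[] = [];
  let current = "";
  for (const line of text.split("\n")) {
    // +1 accounts for the newline re-added when the line is appended.
    if (current.length + line.length + 1 > maxChars && current.length > 0) {
      chunks.push(current);
      current = "";
    }
    current += (current ? "\n" : "") + line;
  }
  if (current) chunks.push(current);
  return chunks;
}
```

Note that a single line longer than `maxChars` still becomes its own oversized chunk; a real scanner would need a fallback split for that case.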
Install the skill and run the scanner:

    npx skill4agent add blankpen/skills cursor-insights
    npx bun run ./scripts/scan.ts

Session records are read as `.md` files from `.agent-insights/conversations` (under `%USERPROFILE%` on Windows, `~` otherwise).

Please summarize this part of the Agent session records, focusing on:
1. What the user requested
2. What the Agent did (which tools were used, which files were modified)
3. Frictions or problems encountered
4. Final outcome
Keep it concise, 3-5 sentences. Retain specific details such as file names, error messages, and user feedback.
Session record snippet:

1. **goal_categories**: Only count **explicit requests made by the user**.
- Do not count codebase exploration proactively conducted by the Agent or work decided on its own
- Only count when the user makes explicit requests such as "Can you...?", "Please...", "I need...", "Let's...", etc.
2. **user_satisfaction_counts**: Only based on **explicit feedback expressed by the user**.
- "Great!" "Nice!" "Perfect!" → happy
- "Thank you" "Looks good" "It works now" → satisfied
- "Okay, now let's..." (continue without complaint) → likely_satisfied
- "That's wrong" "Try again" → dissatisfied
- "It's broken" "I give up" → frustrated
3. **friction_counts**: Mark specific problems by type.
- misunderstood_request: Agent misunderstood the request
- wrong_approach: Correct goal but wrong solution/approach
- buggy_code: Generated code fails to run or behaves incorrectly
- user_rejected_action: User rejected or aborted a tool call
- excessive_changes: Over-design or excessive scope of changes
4. If the session is extremely short or only for warm-up, mark the goal category as **warmup_minimal**.
Session content:
<Session record inserted here>
Only return a valid JSON object that conforms to the following schema:
{
"underlying_goal": "The underlying goal the user wants to achieve",
"goal_categories": {"category_name": count, ...},
"outcome": "fully_achieved | mostly_achieved |
partially_achieved | not_achieved |
unclear_from_transcript",
"user_satisfaction_counts": {"level": count, ...},
"agent_helpfulness": "unhelpful | slightly_helpful | moderately_helpful | very_helpful | essential",
"session_type": "single_task | multi_task | iterative_refinement | exploration | quick_question",
"friction_counts": {"friction_type": count, ...},
"friction_detail": "One sentence describing the friction point, or empty",
"primary_success": "none | fast_accurate_search | correct_code_edits | good_explanations | proactive_help | multi_file_changes | good_debugging",
"brief_summary": "One sentence: what the user wanted and whether it was achieved"
}

Goal categories:

| Category | Description |
|---|---|
| debug_investigate | Debugging/Investigation |
| implement_feature | Feature Implementation |
| fix_bug | Bug Fixing |
| write_script_tool | Script/Tool Writing |
| refactor_code | Code Refactoring |
| configure_system | System Configuration |
| create_pr_commit | PR/Commit Creation |
| analyze_data | Data Analysis |
| understand_codebase | Codebase Understanding |
| write_tests | Test Writing |
| write_docs | Documentation Writing |
| deploy_infra | Deployment/Infrastructure |
| warmup_minimal | Warm-up (Minimal Session) |
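Since the schema requires `goal_categories` keys drawn from the table above, a caller may want to reject model output that invents category names. A small validation sketch; the constant and function names are hypothetical:

```typescript
// Sketch: sanity-checking a model-produced per-session result against the
// goal-category table above. The validation rule is illustrative, not the
// skill's actual check.
const GOAL_CATEGORIES = new Set([
  "debug_investigate", "implement_feature", "fix_bug", "write_script_tool",
  "refactor_code", "configure_system", "create_pr_commit", "analyze_data",
  "understand_codebase", "write_tests", "write_docs", "deploy_infra",
  "warmup_minimal",
]);

/** Return any goal_categories keys the model invented that are not in the table. */
function invalidGoalKeys(goalCategories: Record<string, number>): string[] {
  return Object.keys(goalCategories).filter((k) => !GOAL_CATEGORIES.has(k));
}
```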

Session types:

| Type | Description |
|---|---|
| single_task | Single focused task |
| multi_task | Multiple tasks in one session |
| iterative_refinement | Iterative optimization |
| exploration | Codebase exploration/understanding |
| quick_question | Short Q&A |

Success categories:

| Category | Description |
|---|---|
| none | No significant success |
| fast_accurate_search | Fast and accurate code search |
| correct_code_edits | Accurate code modifications |
| good_explanations | Clear explanations |
| proactive_help | Proactive help beyond requirements |
| multi_file_changes | Successful coordination of multi-file edits |
| good_debugging | Effective debugging |
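Per-session count maps such as `goal_categories`, `user_satisfaction_counts`, and `friction_counts` eventually get rolled up into aggregate statistics, including top-8 lists like `top_tools` and `top_goals`. A minimal merge sketch, assuming this roll-up shape; function names are hypothetical:

```typescript
// Sketch of the roll-up step: merging per-session count maps into aggregate
// totals, and extracting the top-N keys (e.g. the "Top 8" lists).
// Names are illustrative, not the skill's actual implementation.

/** Sum a list of per-session count maps into one total count map. */
function mergeCounts(perSession: Array<Record<string, number>>): Record<string, number> {
  const total: Record<string, number> = {};
  for (const counts of perSession) {
    for (const [key, n] of Object.entries(counts)) {
      total[key] = (total[key] ?? 0) + n;
    }
  }
  return total;
}

/** Keys of `counts` sorted by descending count, truncated to `n`. */
function topN(counts: Record<string, number>, n = 8): string[] {
  return Object.entries(counts)
    .sort((a, b) => b[1] - a[1])
    .slice(0, n)
    .map(([key]) => key);
}
```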

Aggregated statistics structure:

{
"sessions": "<Total number of sessions>",
"analyzed": "<Number of analyzed sessions>",
"date_range": { "start": "...", "end": "..." },
"messages": "<Total number of messages>",
"hours": "<Total duration (hours)>",
"commits": "<Number of git commits>",
"top_tools": ["Top 8 most used tools"],
"top_goals": ["Top 8 goal categories"],
"outcomes": { "Outcome distribution" },
"satisfaction": { "Satisfaction distribution" },
"friction": { "Friction type statistics" },
"success": { "Success category statistics" },
"languages": { "Language usage statistics" }
}

Analyze the above Agent usage data and summarize 4–5 project domains.
Only return valid JSON; skip internal CC operations.
{
"areas": [
{
"name": "Domain name",
"session_count": N,
"description": "2-3 sentences describing the work content and how the Agent is used."
}
]
}
Each domain includes: name, session_count, description (2–3 sentences describing work content and Agent usage).

Analyze the above Agent usage data and summarize the interaction style between the user and the Agent.
Only return valid JSON:
{
"style": "Brief description of the style (2-3 sentences)",
"strengths": ["2-3 things done well"],
"patterns": ["2-3 notable work patterns"]
}

Analyze the above Agent usage data and identify what is working well.
Only return valid JSON, including 2–3 "major achievements" with specific references to actual sessions:
{
"big_wins": [
{
"title": "Short title (4-6 words)",
"description": "2-3 sentences describing an impressive achievement"
}
]
}
Each includes title (4–6 words), description (2–3 sentences).

Analyze the above Agent usage data and summarize friction patterns.
Only return valid JSON, including 2–3 friction points, honest and constructive:
{
"friction_points": [
{
"category": "Category name",
"frequency": "rare | occasional | frequent",
"description": "2-3 sentences describing the pattern"
}
]
}
Each includes category, frequency (rare | occasional | frequent), description (2–3 sentences).

Analyze the above Agent usage data and generate actionable suggestions.
Only return valid JSON, with 2–3 each for features_to_try and usage_patterns, tailored to actual usage patterns:
{
"features_to_try": [
{
"feature": "Feature name",
"benefit": "What help it provides",
"example": "Specific example from usage records"
}
],
"usage_patterns": [
{
"pattern": "Pattern name",
"benefit": "Why it helps",
"example": "How to apply it"
}
]
}
2–3 items each; suggestions must be specific and actionable.

Analyze the above Agent usage data and extract opportunities to try in the next 3–6 months.
Only return valid JSON, including 3 opportunities, which can involve autonomous workflows, parallel agents, iterative controlled testing, etc.:
{
"intro": "1 sentence about the evolution of AI-assisted development",
"opportunities": [
{
"title": "Short title (4-8 words)",
"whats_possible": "2-3 sentences about the grand vision of autonomous workflows",
"how_to_try": "1-2 sentences mentioning relevant tools",
"copyable_prompt": "Detailed prompt that can be used directly"
}
]
}
Each includes intro (1 sentence), opportunities (title, whats_possible, how_to_try, copyable_prompt). Feel free to think boldly.

Analyze the above Agent usage data and find a memorable moment from the session summaries (human, funny, or unexpected, not statistical figures).
Only return valid JSON:
{
"headline": "A memorable qualitative moment from the records—not statistics. It should be human, funny or unexpected.",
"detail": "Brief description of when/where this moment happened"
}
Select truly funny or surprising content from the session summaries.

You are writing the "At a Glance" executive summary for an Agent user's usage insight report.
Goal: Help the user understand their usage and how to better use the Agent as models evolve.
Write in the following 4 sections:
1. **What's Working**: What are the characteristics of the user's interaction style with the Agent, and what impactful things have they done? Can include 1–2 details but focus on high-level overview (the user may not remember specific sessions). Avoid empty flattery and do not list tool calls.
2. **What's Hindering You**: Divide into two categories: (a) Agent side: misunderstanding, wrong approach, bugs; (b) user side: insufficient context, environment issues, etc. Extract cross-project commonalities where possible; be honest but constructive.
3. **Quick Improvements to Try**: Select Agent features or workflow tips that can be tried immediately from the examples below. (Avoid less attractive suggestions like "Let the Agent confirm before acting" or "Write more context".)
4. **Ambitious Workflows for More Powerful Models**: What can the user prepare in advance as model capabilities improve in the next 3–6 months? Which currently difficult workflows will become possible? Draw inspiration from the corresponding section below.
2–3 sentences per section; keep it brief. Do not quote specific numbers or category names from the sessions. Use a coaching tone.
Only return valid JSON:
{
"whats_working": "(Refer to the above instructions)",
"whats_hindering": "(Refer to the above instructions)",
"quick_wins": "(Refer to the above instructions)",
"ambitious_workflows": "(Refer to the above instructions)"
}
Session data:
<Aggregated statistical JSON>
## Project Domains (User's Work Content)
<project_areas results>
## Major Achievements (Impressive Achievements)
<what_works results>
## Friction Categories (What Went Wrong)
<friction_analysis results>
## Features to Try
<suggestions.features_to_try results>
## Usage Patterns to Adopt
<suggestions.usage_patterns results>
## Future Outlook (Ambitious Workflows for More Powerful Models)
<on_the_horizon results>

Report paths: `./temp/report_temp.html` (temporary render), `.agent-insights/reports/agent-insights-report-YYYY-MM-DD.html` (final, under `%USERPROFILE%` or `~`).
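The final report filename embeds the date in `YYYY-MM-DD` form. A sketch of deriving that path; only the naming convention is taken from the document, the helper itself is illustrative:

```typescript
// Sketch: building the date-stamped report path. The YYYY-MM-DD convention
// comes from the document; `reportPath` is a hypothetical helper.
function reportPath(date: Date, dir = ".agent-insights/reports"): string {
  const stamp = date.toISOString().slice(0, 10); // "YYYY-MM-DD" (UTC)
  return `${dir}/agent-insights-report-${stamp}.html`;
}
```

Note that `toISOString` yields the UTC date, which can differ from the local date near midnight; a real implementation might prefer local-time formatting.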