# prompt-review Skill
Analyzes the user's past AI agent conversation histories, estimates their technical understanding level, prompting patterns, and degree of AI dependency, and generates a report. The report is written in Japanese to `reports/prompt-review-YYYY-MM-DD.md`.
## Argument Processing

Parse and process the parameters according to the following rules:
- Numeric only → days filter (e.g. `30` → data from the past 30 days)
- String only → project-name filter (partial match)
- String + number → project-name + days filter (e.g. `yonshogen 30`)
- No arguments → all projects, data from the past 7 days (default)
## Step 1: Data Collection (Script Execution)
Run the preprocessing script `scripts/collect.py` to collect the data.
This script automatically detects logs from Claude Code, GitHub Copilot Chat, Cline, Roo Code, Windsurf, and Antigravity, and writes the filtered JSON to standard output.
### Assemble script options from the arguments

Parse the arguments and run the following command in Bash:

```bash
python ~/.claude/skills/prompt-review/scripts/collect.py [OPTIONS] > /tmp/prompt-review-data.json
```
- No arguments → no options (default: past 7 days)
- Numeric only (e.g. `30`) → `--days 30`
- or → (all time period)
- String only (e.g. `yonshogen`) → `--project yonshogen`
- String + number (e.g. `yonshogen 30`) → `--project yonshogen --days 30`
Important: do not resolve the script path relative to this skill file. Use the path `.claude/skills/prompt-review/scripts/collect.py` of the project where the skill is stored, specified relative to the current working directory.
### Read the output

After the script runs, read `/tmp/prompt-review-data.json` with the Read tool.
Output JSON structure:
```json
{
  "summary": {
    "total_messages": 2616,
    "detected_tools": ["Claude Code", "GitHub Copilot Chat"],
    "filter_days": null,
    "filter_project": null
  },
  "sources": [
    {
      "tool": "Claude Code",
      "status": "検出",
      "messages": [
        {"text": "プロンプト本文", "timestamp": "2025-09-29 03:16", "project": "yonshogen"}
      ],
      "period": "2025-09-29 03:16 〜 2026-03-12 04:58"
    }
  ],
  "project_stats": {
    "farbrain": {"count": 668, "tools": ["Claude Code"]},
    "yonshogen": {"count": 215, "tools": ["Claude Code"]}
  },
  "secret_warnings": [
    {
      "tool": "Claude Code",
      "project": "some-project",
      "timestamp": "2025-10-01 12:00",
      "type": "OpenAI API Key",
      "masked_value": "sk-abc12***xyz9",
      "prompt_excerpt": "APIキーはsk-abc123..."
    }
  ]
}
```
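As a sanity check, the `project_stats` section can be recomputed from `sources[].messages`. The following is an illustrative sketch assuming the JSON structure shown above:

```python
from collections import defaultdict

def project_stats(sources: list[dict]) -> dict[str, dict]:
    """Recompute per-project message counts and tool lists from
    sources[].messages, mirroring the project_stats section above."""
    stats: dict[str, dict] = defaultdict(lambda: {"count": 0, "tools": []})
    for src in sources:
        for msg in src.get("messages", []):
            entry = stats[msg["project"]]
            entry["count"] += 1
            if src["tool"] not in entry["tools"]:
                entry["tools"].append(src["tool"])
    return dict(stats)

# Tiny sample in the shape of the sources array above
sample = [{
    "tool": "Claude Code",
    "messages": [
        {"text": "プロンプト本文", "project": "yonshogen"},
        {"text": "another prompt", "project": "yonshogen"},
    ],
}]
```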
## Step 2: Analysis
After reading `/tmp/prompt-review-data.json` with Read, analyze the user prompts in the `messages` arrays from the following perspectives. Be sure to include specific evidence (quotes of actual prompt fragments) for each perspective.
### Preprocessing: Create project-specific summaries and exclude short responses
First, use `project_stats` to get an overview of the message counts and tools used per project. Read the prompt content for each project and summarize in one line the work being done in that project. This information is output in the "2. Project-specific Summary" section of the report.
Next, exclude short affirmative responses from analysis.
### Preprocessing: Ignore short affirmative responses
Claude Code often asks users for confirmation like "Shall I do ~?", and users' short affirmative replies to these are not valuable for analyzing prompting ability or technical proficiency. Exclude the following types of messages from analysis:
- Simple affirmations: 「y」「yes」「はい」「うん」「ok」「sure」「yep」「yeah」
- Execution instructions: 「進めて」「やって」「do it」「doit」「go」「go ahead」「proceed」
- Approvals: 「それで」「それでいい」「それでお願いします」「お願いします」「いいよ」「いいです」「大丈夫」
- Gratitude only: 「ありがとう」「ありがとうございます」「thanks」「thx」
Judgment criteria: Messages that are short (roughly 20 characters or less) and match the above patterns. However, even short messages that include specific technical instructions (e.g. "30pxがいいです", "asyncで") should not be excluded.
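The exclusion rule above can be approximated with a simple membership-plus-length check. This is a minimal sketch; the real judgment should still weigh whether a short message carries a concrete technical instruction:

```python
# Patterns listed above; lower() normalizes English, and trailing
# punctuation is stripped before matching.
SHORT_ACKS = {
    "y", "yes", "はい", "うん", "ok", "sure", "yep", "yeah",
    "進めて", "やって", "do it", "doit", "go", "go ahead", "proceed",
    "それで", "それでいい", "それでお願いします", "お願いします",
    "いいよ", "いいです", "大丈夫",
    "ありがとう", "ありがとうございます", "thanks", "thx",
}

def is_short_ack(message: str) -> bool:
    """True if the message is a short affirmative to exclude from analysis."""
    text = message.strip().lower().rstrip("!！。.")
    return len(text) <= 20 and text in SHORT_ACKS
```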
### 2a. Technical Proficiency Map
Extract technical concepts mentioned in prompts, and classify into 3 levels:
- Proficient: Confident instructions, accurate terminology use, presentation of specific implementation policies
- Basic understanding: Knows the concept but delegates details to AI
- Learning/Ambiguous: Question format, misunderstandings present, lots of trial and error
Classification signals:
- Specific instructions in imperative form → Proficient
- "Please do ~" + concept name only → Basic understanding
- "What is
?" " is not working" "How should I do it?" → Learning
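A rough illustration of these signals as a keyword heuristic (illustrative only; the actual classification is done by reading each prompt in context, and the cue lists are assumptions):

```python
import re

# Question marks and question phrases suggest the learning level
LEARNING_CUES = ("?", "？", "どうすれば", "動かない", "what is", "how should")

def rough_signal(prompt: str) -> str:
    """Map one prompt to a proficiency signal: learning / basic / proficient."""
    p = prompt.lower()
    if any(cue in p for cue in LEARNING_CUES):
        return "learning"
    # Polite delegation without specifics suggests basic understanding
    if re.search(r"(してください|お願いします|please)", p):
        return "basic"
    # Otherwise treat a direct imperative as a proficient signal
    return "proficient"
```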
### 2b. Prompting Pattern Analysis
- Effective patterns: Specific constraint specification, step-by-step instructions, sufficient context provision, clear specification of expected output format
- Improvable patterns: Ambiguous instructions (e.g. "Make it nice"), insufficient context, confusion of purpose and means
- Characteristic habits: Frequency of short approvals (「y」「doit」 etc.), use of Japanese/English
### 2c. AI Dependency Analysis
Per project/tool:
- Cases where user decides policy and delegates implementation to AI (proactive)
- Cases where user delegates even policy decision to AI (dependent)
- Pattern of relying on AI for debugging/error resolution
- Pattern of frequently requesting status checks from AI (e.g. "Check the status")
### 2d. Growth Trajectory
Time series analysis:
- Changes in prompt quality/specificity
- Newly adopted technical concepts
- Recurring issue patterns
### 2e. Cross-project and cross-tool trends
- Strong and weak areas
- Differences in behavior by project type
- How AI tools are used differently (if multiple tools are detected)
### 2f. Secret/Credential Warning
If the `secret_warnings` array is not empty, output a warning section at the beginning of the report (right after the data-source summary). List all secrets detected by the script, such as API keys, tokens, passwords, and connection strings, and recommend that the user rotate (reissue/invalidate) the affected keys.
Never write secret values in plain text in the report. Use the masked value (`masked_value`) returned by the script as-is.
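For context, here is a sketch of masking in the style of the `masked_value` example above (`sk-abc12***xyz9`). The authoritative masking is done by `collect.py`, whose output should be used verbatim:

```python
def mask_secret(value: str) -> str:
    """Keep a short prefix and suffix and hide the middle.

    Illustrative only; the real scheme lives in collect.py.
    """
    if len(value) <= 12:
        return "***"  # too short to reveal any part safely
    return value[:8] + "***" + value[-4:]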
## Step 3: Report Generation
Generate the report in Japanese according to the template at `references/report-template.md`.
### Output Rules
- If the `reports/` directory does not exist, create it in Bash (e.g. `mkdir -p reports`)
- Use the Write tool to output to `reports/prompt-review-YYYY-MM-DD.md` (YYYY-MM-DD is the execution date)
- After generating the report, tell the user the file path
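The output-path rule above can be sketched as follows (Python used for illustration; the skill itself creates the directory in Bash and writes with the Write tool):

```python
from datetime import date
from pathlib import Path

# reports/prompt-review-YYYY-MM-DD.md, creating reports/ if missing
report_path = Path("reports") / f"prompt-review-{date.today():%Y-%m-%d}.md"
report_path.parent.mkdir(parents=True, exist_ok=True)
```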
### Writing Notes
- The report must be output in Japanese without exception
- Clearly state speculations as "It is estimated that ~" "There is a possibility of ~"
- When quoting an original prompt, keep excerpts short and pay attention to privacy:
  - Mask personal-name segments in file paths
  - Redact project-specific confidential information
- For parts that cannot be analyzed, honestly state "Could not analyze due to insufficient data"
- Clearly indicate which tool the information comes from in parentheses (e.g. "(Claude Code)" "(Copilot Chat)")
- Sections without evidence may be omitted
## Reference Resources
- `scripts/collect.py` — Data collection and preprocessing script (Python)
- `references/data-sources.md` — Details of the log storage locations and formats for each AI tool
- `references/report-template.md` — Report structure template