Provides comprehensive CLAUDE.md file management capabilities including auditing, quality assessment, and targeted improvements. Use when user asks to check, audit, update, improve, fix, maintain, or validate CLAUDE.md files. Also triggers for "project memory optimization", "CLAUDE.md quality check", "documentation review", or when CLAUDE.md needs to be created from scratch. This skill scans all CLAUDE.md files, evaluates quality against standardized criteria, outputs detailed quality reports with scores and recommendations, then makes targeted updates with user approval.
```
npx skill4agent add giuseppe-trisciuoglio/developer-kit-claude-code claude-md-management
```

```bash
find . -name "CLAUDE.md" -o -name ".claude.md" -o -name ".claude.local.md" 2>/dev/null | head -50
```

| Type | Location | Purpose |
|---|---|---|
| Project root | `./CLAUDE.md` | Primary project context (checked into git, shared with team) |
| Local overrides | `./.claude.local.md` | Personal/local settings (gitignored, not shared) |
| Global defaults | `~/.claude/CLAUDE.md` | User-wide defaults across all projects |
| Package-specific | `packages/*/CLAUDE.md` | Module-level context in monorepos |
| Subdirectory | Any nested location | Feature/domain-specific context |
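The discovery step can be sketched in Python; the filenames mirror the table above, and the function name `find_memory_files` is illustrative, not part of any API:

```python
from pathlib import Path

# Filenames treated as project memory (mirrors the discovery table above)
MEMORY_NAMES = {"CLAUDE.md", ".claude.md", ".claude.local.md"}

def find_memory_files(root: str = ".", limit: int = 50) -> list[Path]:
    """Collect candidate CLAUDE.md files under a project root, plus the
    conventional global default (~/.claude/CLAUDE.md) if it exists."""
    found = [p for p in sorted(Path(root).rglob("*")) if p.name in MEMORY_NAMES]
    global_default = Path.home() / ".claude" / "CLAUDE.md"  # user-wide defaults
    if global_default.is_file():
        found.append(global_default)
    return found[:limit]
```

This covers nested package- and subdirectory-level files in one pass; the global default lives outside the repo, so it is checked separately.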
| Criterion | Weight | What to Check |
|---|---|---|
| Commands/workflows | 20 pts | Are build/test/deploy commands present and working? |
| Architecture clarity | 20 pts | Can Claude understand the codebase structure? |
| Non-obvious patterns | 15 pts | Are gotchas and quirks documented? |
| Conciseness | 15 pts | Is content dense without filler? |
| Currency | 15 pts | Does it reflect current codebase state? |
| Actionability | 15 pts | Are instructions executable and copy-paste ready? |
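A minimal scorer for this rubric might look like the following sketch. The letter-grade cutoffs are an assumption (chosen so that the 65/100 example later in this document lands at grade C); they are not specified by the rubric itself:

```python
# Rubric weights from the table above (points sum to 100)
WEIGHTS = {
    "commands_workflows": 20,
    "architecture_clarity": 20,
    "non_obvious_patterns": 15,
    "conciseness": 15,
    "currency": 15,
    "actionability": 15,
}

# Assumed grade bands -- adjust to taste
GRADE_BANDS = [(90, "A"), (75, "B"), (60, "C"), (45, "D")]

def score_file(scores: dict[str, int]) -> tuple[int, str]:
    """Sum per-criterion scores (each capped at its weight) and map the
    total to a letter grade."""
    total = sum(min(scores.get(k, 0), w) for k, w in WEIGHTS.items())
    grade = next((g for cutoff, g in GRADE_BANDS if total >= cutoff), "F")
    return total, grade
```

Capping each score at its weight keeps a single inflated criterion from distorting the total, which supports the "score consistently" practice below.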
## CLAUDE.md Quality Report
### Summary
- Files found: X
- Average score: X/100
- Files needing update: X
### File-by-File Assessment
#### 1. ./CLAUDE.md (Project Root)
**Score: XX/100 (Grade: X)**
| Criterion | Score | Notes |
|-----------|-------|-------|
| Commands/workflows | X/20 | ... |
| Architecture clarity | X/20 | ... |
| Non-obvious patterns | X/15 | ... |
| Conciseness | X/15 | ... |
| Currency | X/15 | ... |
| Actionability | X/15 | ... |
**Issues:**
- [List specific problems]
**Recommended additions:**
- [List what should be added]

### Update: ./CLAUDE.md
**Why:** [Brief explanation of why this helps]
```diff
+ ## Quick Start
+
+ ```bash
+ npm install
+ npm run dev
+ ```
```
### Phase 5: Apply Updates
After user approval, use the Edit tool to apply changes, preserving the existing content structure.
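Applying an approved addition while preserving structure can be sketched as a guarded, idempotent text edit; the helper name `add_section` and its arguments are placeholders for illustration:

```python
from pathlib import Path

def add_section(path: str, header: str, body: str) -> bool:
    """Append a new section to a CLAUDE.md file, but only if a section
    with the same header is not already present (idempotent edit that
    leaves existing content untouched)."""
    p = Path(path)
    text = p.read_text(encoding="utf-8") if p.exists() else ""
    if header in text:
        return False  # already documented; do not duplicate or reorder
    section = f"\n{header}\n{body}\n"
    p.write_text(text.rstrip("\n") + "\n" + section, encoding="utf-8")
    return True
```

Checking for the header first keeps repeated runs from duplicating sections, which matches the "preserve existing structure" practice below.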
## Examples
### Example 1: Quality Assessment
**User request:** "Check if my CLAUDE.md is good"
**Response:**
```markdown
## CLAUDE.md Quality Report
### Summary
- Files found: 1
- Average score: 65/100
- Files needing update: 1
### File-by-File Assessment
#### 1. ./CLAUDE.md
**Score: 65/100 (Grade: C)**
| Criterion | Score | Notes |
|-----------|-------|-------|
| Commands/workflows | 10/20 | Only npm install documented |
| Architecture clarity | 15/20 | Basic tree, missing module relationships |
| Non-obvious patterns | 5/15 | No gotchas documented |
| Conciseness | 15/15 | Concise and dense |
| Currency | 10/15 | Some outdated paths |
| Actionability | 10/15 | Some vague instructions |
**Issues:**
- Missing test command
- Missing build command
- No documentation of auth flow complexity
- References to old src/structure
**Recommended additions:**
- Add Commands section with test/build
- Document auth module gotcha
- Update architecture paths
```

### Example 2: Creating a CLAUDE.md from Scratch
**Response:**
I'll create a CLAUDE.md for your React project. Let me first analyze the codebase structure.
[Scans package.json, src/, and key config files]
## CLAUDE.md Quality Report
### Summary
- Files found: 0
- Creating new CLAUDE.md at project root
### Proposed CLAUDE.md Content:
# Project Name
## Commands
| Command | Description |
|---------|-------------|
| `npm install` | Install dependencies |
| `npm run dev` | Start dev server (Vite on port 5173) |
| `npm run build` | Production build |
| `npm run test` | Run Vitest tests |
| `npm run lint` | ESLint check |
## Architecture
## Gotchas
- Vite requires files in `public/` for static assets
- State updates in `useEffect` need proper dependency arrays
### Example 3: Update Proposal
**User request:** "Update my CLAUDE.md with what you learned"
**Response:**
```markdown
### Update: ./CLAUDE.md
**Why:** Discovered that tests require database setup which wasn't documented.
```diff
## Testing
+ **Prerequisites:** Start test database first with `docker-compose up -d test-db`
+ `npm run test` - Run all tests
+ `npm run test:unit` - Unit tests only (no DB required)

## Environment
- `API_BASE_URL` - API endpoint (default: localhost:3000)
+ `API_BASE_URL` - API endpoint (required for production, defaults to http://localhost:3000 in dev)
```
## Best Practices
1. **Always read references before acting**: Quality criteria, templates, and update guidelines contain essential details
2. **Report before editing**: Never modify CLAUDE.md without first presenting a quality report
3. **Preserve existing structure**: When updating, maintain the existing organization and style
4. **Be project-specific**: Only add information specific to this codebase, not generic advice
5. **Verify commands work**: Before suggesting commands, mentally or actually verify they execute correctly
6. **Use progressive disclosure**: Keep SKILL.md lean, reference detailed rubrics in separate files
7. **Score consistently**: Apply the same scoring standards across all files for fair comparison
## Constraints and Warnings
1. **Never modify without approval**: Always get user confirmation before editing CLAUDE.md files
2. **Don't remove content without asking**: If suggesting deletions, explicitly mark them and get approval
3. **Respect `.claude.local.md`**: These are personal settings; never suggest modifying them in shared docs
4. **Avoid generic advice**: Do not add "write good code" type content - focus on project-specific patterns
5. **Keep diffs concise**: Show only the actual changes, not entire file contents
6. **Verify file paths**: Ensure all referenced files exist before documenting them
7. **Score objectively**: Use the rubric consistently; don't inflate scores for incomplete documentation