Generate an LLM-optimized project profile for any git repository. Outputs docs/{project-name}.md covering architecture, core abstractions, usage guide, design decisions, and recommendations. Trigger: "/project-profiler", "profile this project", "為專案建側寫"
Install the skill:

```bash
npx skill4agent add yelban/orz99-skills project-profiler
```

**Project name detection** — use the first match found:

- `package.json` → `name`
- `pyproject.toml` → `[project] name`
- `Cargo.toml` → `[package] name`
- `go.mod` → module path

**Scan the repository:**

```bash
uv run {SKILL_DIR}/scripts/scan-project.py {TARGET_DIR} --format summary
```

Use `--format json` when the full structured output is needed.

**Git metadata:**

```bash
# Recent commits
git -C {TARGET_DIR} log --oneline -20

# Contributors
git -C {TARGET_DIR} log --format="%aN" | sort -u | head -20

# Version tags
git -C {TARGET_DIR} tag --sort=-v:refname | head -5

# First commit date
git -C {TARGET_DIR} log --format="%aI" --reverse | head -1
```

If `docs/CODEBASE_MAP.md` exists, read it as additional context. Use the scanner's `total_tokens` to select an execution mode:

| Total Tokens | Mode | Strategy |
|---|---|---|
| ≤ 80k | Direct | Skip subagents. Opus reads all files directly and performs all analysis in a single context. |
| 80k – 200k | 2 agents | Agent AB (Core + Architecture + Design), Agent C (Usage + Patterns + Deployment) |
| 200k – 400k | 3 agents | Agent A (Core + Design), Agent B (Architecture + Patterns), Agent C (Usage + Deployment) |
| > 400k | 3 agents | Agent A, Agent B, Agent C — each ≤150k tokens, with overflow files assigned to lightest agent |
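The mode table above can be sketched as a small selection function. `total_tokens` comes from the scanner output; the function names and the greedy overflow helper are illustrative, not part of the skill's scripts:

```python
def select_mode(total_tokens: int) -> tuple[str, list[str]]:
    """Map the scanner's total_tokens to an execution mode and agent roster."""
    if total_tokens <= 80_000:
        return "direct", []                      # Opus reads all files itself
    if total_tokens <= 200_000:
        return "2 agents", ["AB", "C"]           # AB = Core + Architecture + Design
    if total_tokens <= 400_000:
        return "3 agents", ["A", "B", "C"]
    return "3 agents (capped)", ["A", "B", "C"]  # each agent capped at 150k tokens


def assign_overflow(agent_loads: dict[str, int], file_tokens: int) -> str:
    """Assign an overflow file to whichever agent currently carries the least."""
    lightest = min(agent_loads, key=agent_loads.get)
    agent_loads[lightest] += file_tokens
    return lightest
```

In the > 400k case, `assign_overflow` is one way to realize "overflow files assigned to lightest agent": iterate the remaining files and hand each to the agent with the smallest running total.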
Store detection results in `detected_sections`.

**Popularity & maturity** — run in parallel with the Phase 2 subagent launches (or with Phase 3 in direct mode).

Get the GitHub repo from the remote (or read `.git/config` directly):

```bash
git -C {TARGET_DIR} remote get-url origin
```

Extract `owner/repo` and, if `gh` is available, fetch stats (otherwise record N/A):

```bash
gh api repos/{owner}/{repo} --jq '{stars: .stargazers_count, forks: .forks_count, open_issues: .open_issues_count}'
```

Download counts:

- `package.json` → WebFetch `https://api.npmjs.org/downloads/point/last-month/{package_name}`
- `pyproject.toml` → WebFetch `https://pypistats.org/api/packages/{package_name}/recent`
- otherwise → N/A

Maturity scoring:

| Criteria | Maturity |
|---|---|
| < 3 months, < 3 releases, 1-2 contributors | experimental |
| 3-12 months, 3-10 releases, 2-5 contributors | growing |
| 1-3 years, 10-50 releases, 5-20 contributors | stable |
| > 3 years, > 50 releases, > 20 contributors | mature |
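A sketch of the scoring above, with thresholds taken directly from the table (the function name is illustrative). The table leaves mixed cases undefined, so this version falls back to the lower tier when signals straddle rows — one reasonable interpretation, not the skill's mandated behavior:

```python
def maturity(age_months: float, releases: int, contributors: int) -> str:
    """Classify project maturity from git history, per the criteria table."""
    if age_months > 36 and releases > 50 and contributors > 20:
        return "mature"
    if age_months >= 12 and releases >= 10 and contributors >= 5:
        return "stable"
    if age_months >= 3 and releases >= 3 and contributors >= 2:
        return "growing"
    return "experimental"
```

Age comes from the first-commit date, releases from the tag count, and contributors from the deduplicated author list gathered in the git metadata step.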
Direct mode (≤80k tokens): SKIP this entire phase. Proceed to Phase 3. Opus reads files directly during synthesis.
Launch subagents via the Task tool.

**Task prompt for Agent A** — `subagent_type: "general-purpose"`, `model: "sonnet"`:
## Mission
Identify the most architecturally significant abstractions AND key design decisions in this codebase.
## Files to Read
{LIST_OF_ASSIGNED_FILES}
Also read: README.md, CHANGELOG.md (if they exist and are not already assigned)
## Output Format
### Part 1: Core Abstractions
Report the TOP 10-15 most architecturally significant abstractions, ranked by fan-in (how many other files reference them). If the project has fewer than 15 meaningful abstractions, report all.
For EACH abstraction:
#### {Name}
- **Purpose**: {≤15 words}
- **Defined in**: `{file_path}:ClassName` or `{file_path}:function_name`
- **Type**: {class / interface / type / trait / struct / protocol}
- **Public methods/fields**: {exact_count}
- **Adapters/implementations**: {count} — {names with file paths}
- **Imported by**: {count} files
- **Key pattern**: {factory / singleton / strategy / observer / none}
### Part 2: Design Decisions
For EACH decision (identify 3-5):
#### {Decision Title}
- **Problem**: {what needed solving}
- **Choice made**: {what was chosen}
- **Evidence**: `{file_path}:ClassName` or `{file_path}:function_name` — {relevant code pattern}
- **Alternatives NOT chosen**: {what else could have been done}
- **Why not**: {concrete reason — performance / complexity / ecosystem / team preference}
- **Tradeoff**: {what is gained} vs. {what is lost}
### Part 3: Architecture Risks
For EACH risk (identify 2-4):
- **Risk**: {specific description}
- **Location**: `{file_path}:SymbolName`
- **Impact**: {what breaks if this goes wrong}
- **Mitigation**: {how to fix or reduce risk}
### Part 4: Recommendations
For EACH recommendation (identify 2-4):
- **Current state**: `{file_path}` — {what exists now}
- **Problem**: {specific issue — not "could be better"}
- **Fix**: {concrete action — not "consider refactoring"}
- **Effect**: {measurable outcome}
## Rules
- Every number must come from actual code (count imports, count methods)
- No subjective language (no "well-designed", "elegant", "robust", "clean", "優雅", "完美", "強大")
- Every claim needs a `file:SymbolName` reference (NOT line numbers — they break on next commit)
- Each decision must have a "why NOT the alternative" answer
- Report the TOTAL count of abstractions found

**Task prompt for Agent B** — `subagent_type: "general-purpose"`, `model: "sonnet"`:
## Mission
Map the system topology, layer boundaries, data flow paths, AND code quality patterns.
## Files to Read
{LIST_OF_ASSIGNED_FILES}
## Output Format
### Part 1: Topology
- **Architecture style**: {monolith / microservices / serverless / library / CLI tool / plugin system}
- **Entry points**: {list with file paths}
- **Layer count**: {N}
### Part 2: Layers (table)
| Layer | Modules | Files | Responsibility |
|-------|---------|-------|---------------|
### Part 3: Data Flow Paths
For each major user-facing operation:
1. **{Operation name}**: {step1_module} → {step2_module} → ... → {result}
- Evidence: `{file:SymbolName}` for each step
### Part 4: Mermaid Diagram Elements
Provide raw data for Mermaid diagrams:
- Nodes: {module_name} — {file_path}
- Edges: {from} → {to} — {relationship_type: imports/calls/extends}
### Part 5: Module Dependencies (structured)
For each module:
- **{module_name}** (`{path}`): imports [{dep1}, {dep2}, ...]
### Part 6: Boundary Violations
List any cases where a lower layer imports from a higher layer.
### Part 7: Code Quality Patterns
- **Error handling**: {strategy and consistency — e.g., "try/catch at controller layer, custom AppError class"}
- **Logging**: {framework and coverage — e.g., "winston, structured JSON, covers all API routes"}
- **Testing**: {framework, coverage level, patterns — e.g., "vitest, 47 test files, unit + integration"}
- **Type safety**: {strict / partial / none — e.g., "strict TypeScript with no `any` casts"}
## Rules
- Every number must come from actual code
- No subjective language (no "well-designed", "elegant", "robust", "clean", "優雅", "完美", "強大")
- Every claim needs a `file:SymbolName` reference (NOT line numbers)
- Focus on HOW data moves, not WHAT the code does

**Task prompt for Agent C** — `subagent_type: "general-purpose"`, `model: "sonnet"`:
## Mission
Document all consumption interfaces, deployment modes, security surface, and AI agent integration points.
## Files to Read
{LIST_OF_ASSIGNED_FILES}
## Output Format
### Part 1: Consumption Interfaces
For each interface found:
- **Type**: {Python SDK / TS SDK / REST API / MCP / CLI / Vercel AI SDK / Library import}
- **Entry point**: `{file_path}:ClassName` or `{file_path}:function_name`
- **Public surface**: {N} exported functions/classes/endpoints
- **Example usage**: {minimal code snippet from docs/examples or inferred from exports}
### Part 2: Configuration
| Source | Path | Key Settings |
|--------|------|-------------|
### Part 3: Deployment Modes
| Mode | Evidence | Prerequisites |
|------|----------|--------------|
### Part 4: AI Agent Integration
- **MCP tools**: {count and names, if any}
- **Function calling schemas**: {count, if any}
- **Tool definitions**: {count, if any}
- **SDK integration**: {Vercel AI SDK / LangChain / LlamaIndex / custom}
### Part 5: Security Surface
- **API key handling**: {how and where}
- **Auth mechanism**: {type and file}
- **CORS config**: {if applicable}
- **Data at rest**: {encrypted / plaintext / N/A}
- **PII handling**: {anonymized / logged / none detected}
### Part 6: Performance & Cost Indicators
| Metric | Value | Source |
|--------|-------|--------|
| {LLM calls per request} | {N} | `{file:SymbolName}` |
| {Cache strategy} | {type} | `{file:SymbolName}` |
| {Rate limiting} | {config} | `{file:SymbolName}` |
## Rules
- Every number must come from actual code
- No subjective language (no "well-designed", "elegant", "robust", "clean", "優雅", "完美", "強大")
- Every claim needs a `file:SymbolName` reference (NOT line numbers)
- Include BOTH documented and undocumented interfaces

**Conditional sections.** Merge scanner signals with agent findings (e.g., a `Promise.all` pattern reported by Agent B) into `detected_sections`, following `references/section-detection-rules.md`. Example:

- [x] Storage Layer — scanner detected: prisma in dependencies
- [ ] Embedding Pipeline — not detected
- [x] Infrastructure Layer — scanner detected: Dockerfile present
- [ ] Knowledge Graph — not detected
- [ ] Scalability — not detected
- [x] Concurrency — Agent B reported: `Promise.all` pattern in src/worker.ts

**Synthesis.** Build Mermaid diagrams from Agent B's data: `graph TB` for the architecture diagram and `sequenceDiagram` for the data flow paths. Keep module dependency entries in Agent B's `- **{module_name}** (` + path + `)` format. Assemble the profile per `references/output-template.md`, mapping each section to its sources:

| Section | Primary Source | Secondary Source |
|---|---|---|
| 1. Project Identity | Scanner metadata + Phase 1 | Git metadata |
| 2. Architecture | Agent B (Parts 1-6) | Agent A (abstractions per layer) |
| 3. Core Abstractions | Agent A (Part 1) | Agent B (layer context) |
| 4. Conditional | Phase 3 detection + relevant agents | — |
| 5. Usage Guide | Agent C (Parts 1-4) | Scanner entry_points |
| 6. Performance & Cost | Agent C (Part 6) + Agent B | — |
| 7. Security & Privacy | Agent C (Part 5) | — |
| 8. Design Decisions | Agent A (Part 2) | Agent B (architecture context) |
| 8.5 Code Quality & Patterns | Agent B (Part 7) | Agent A (supporting observations) |
| 9. Recommendations | Agent A (Part 4) | Agents B/C (supporting evidence) |
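Agent B's Part 4 node/edge data can be rendered into a Mermaid `graph TB` block with a small helper. The tuple shapes below are illustrative assumptions about how the agent's lists would be parsed, not the agent's literal output format:

```python
def to_mermaid(nodes: list[tuple[str, str]],
               edges: list[tuple[str, str, str]]) -> str:
    """Render (name, file_path) nodes and (from, to, relation) edges as Mermaid."""
    lines = ["graph TB"]
    for name, path in nodes:
        # Label each node with its module name and defining file
        lines.append(f'    {name}["{name}<br/>{path}"]')
    for src, dst, rel in edges:
        # relation is one of: imports / calls / extends
        lines.append(f"    {src} -->|{rel}| {dst}")
    return "\n".join(lines)


diagram = to_mermaid(
    nodes=[("api", "src/api.ts"), ("db", "src/db.ts")],
    edges=[("api", "db", "imports")],
)
```

The same node/edge data drives the `sequenceDiagram` for data flow paths; only the rendering differs.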
Write the profile to `docs/{project-name}.md`, then verify it against `references/quality-checklist.md`. Banned subjective words (rewrite any sentence that uses them):

```
well-designed, elegant, elegantly, robust, clean, impressive,
state-of-the-art, cutting-edge, best-in-class, beautifully,
carefully crafted, thoughtfully, well-thought-out, well-architected,
nicely, cleverly, sophisticated, powerful, seamless, seamlessly,
intuitive, intuitively
優雅、完美、強大、直觀、無縫、精心、巧妙、出色、卓越、先進、高效、靈活、穩健、簡潔
```

Also confirm that every claim carries a `file:SymbolName` reference — a bare `file_path` without a symbol is not sufficient.

Report back to the user:

```
Profile generated: docs/{project-name}.md
- {total_files} files scanned ({total_tokens} tokens)
- {N} core abstractions identified
- {N} design decisions documented
- {N} recommendations
- Conditional sections: {list of included sections or "none"}
```
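The banned-word check can be automated with a short scan. The word list here is abbreviated (use the full list from the quality checklist); the helper itself is an illustrative sketch, not part of the skill's scripts:

```python
import re

# Abbreviated; the quality checklist defines the full English and Chinese lists
BANNED = ["well-designed", "elegant", "robust", "clean", "seamless",
          "優雅", "完美", "強大"]


def find_banned(profile_text: str) -> list[str]:
    """Return banned subjective words that appear in the generated profile."""
    hits = []
    for word in BANNED:
        if re.search(r"[\u4e00-\u9fff]", word):
            # \b word boundaries don't apply to CJK; use substring match
            if word in profile_text:
                hits.append(word)
        elif re.search(rf"\b{re.escape(word)}\b", profile_text, re.IGNORECASE):
            hits.append(word)
    return hits
```

If `find_banned` returns anything, rewrite the flagged sentences with concrete, evidence-backed phrasing before delivering the profile.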