Purpose
Act as a Senior Software Architect + Tech Lead to analyze code modules and produce structured technical reports that explain internal behavior, module communication, architectural patterns, and system relationships — with Mermaid diagrams.
CRITICAL RULES
- Never assume context that doesn't exist. Only report what the code explicitly shows.
- Never invent dependencies. If a dependency isn't visible in imports, configs, or code, don't add it.
- If information is missing, say so explicitly. Document unknowns as unknowns, not guesses.
- Never copy full source code into the report. Explain how the code works — don't reproduce it.
When to Use This Skill
- Onboarding: New team members need to understand how a module works
- Technical audit: Reviewing module responsibilities, dependencies, and communication patterns
- Refactoring preparation: Understanding the current state before making architectural changes
- Living documentation: Generating reusable technical docs from actual code
- Code review context: Understanding the bigger picture around a set of changes
- Incident analysis: Tracing how a module interacts with others to debug systemic issues
Capabilities
Code Analysis
- Internal module behavior and execution flow
- Function/class responsibility mapping
- State management and error handling patterns
- Dependency identification (internal and external)
Architecture Assessment
- Architectural pattern detection (MVC, Clean Architecture, Hexagonal, etc.)
- Module boundary and responsibility analysis
- Coupling and cohesion evaluation
- Design principle adherence (SOLID, DRY, etc.)
Communication Mapping
- Inter-module communication (sync/async)
- API surface analysis (what a module exposes and consumes)
- Event-driven patterns (pub/sub, event emitters, message queues)
- Shared state and data flow analysis
Technical Documentation
- Structured markdown reports
- Mermaid diagrams (flowcharts, sequence, class, C4)
- Executive summaries for non-technical stakeholders
- Detailed technical breakdowns for engineers
Input Expected
The user provides:
| Input | Required | Description |
|---|---|---|
| Module/file path | Yes | Path to the code to analyze |
| Code fragments | Optional | Partial or complete code snippets if not accessible via filesystem |
| Language/framework | Optional | If not detectable from code (e.g., "NestJS", "Next.js", "FastAPI") |
| Additional context | Optional | Business context, known constraints, specific questions |
| Analysis depth | Optional | v1 (explanation), v2 (+ diagrams), v3 (+ refactor recommendations) |
Example prompts:
- "Analyze the module at "
- "Explain how works and how it connects to other modules"
- "Do a v3 analysis of /src/services/notification-service.ts"
Configuration Resolution
Before starting any workflow step, resolve the {output_dir} path — the local staging directory where all output documents are stored.
- Infer the project name from the current directory name or git repository name
- Set {output_dir} = .agents/staging/code-analyzer/{project-name}/
- Create the directory if it doesn't exist
- Present the resolved path to the user before proceeding
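The resolution steps above can be sketched in code. The helper name and the rule that the project name is simply the last path component are illustrative assumptions, not part of the skill contract:

```python
from pathlib import Path

def resolve_output_dir(project_root: str) -> str:
    # Hypothetical helper: infer the project name from the root directory name,
    # then build and create the staging path used by this skill
    project_name = Path(project_root).name
    output_dir = Path(f".agents/staging/code-analyzer/{project_name}")
    output_dir.mkdir(parents=True, exist_ok=True)  # create it if it doesn't exist
    return f"{output_dir}/"
```

For example, a project rooted at /home/dev/my-app resolves to .agents/staging/code-analyzer/my-app/.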
IMPORTANT: Every {output_dir} reference in this skill depends on this resolution.
Obsidian Output Standard
All documents generated by this skill MUST follow these Obsidian output rules:
- Frontmatter: Every file includes the universal frontmatter schema (title, date, updated, project, type, status, version, tags, changelog, related)
- Types: Use the document type values defined in the templates for REPORT.md and REFACTOR.md
- Wiki-links: When both REPORT.md and REFACTOR.md exist, cross-reference them with [[REPORT]] / [[REFACTOR]] wiki-links
- Referencias: Every document ends with a Referencias section listing related analysis documents
- Metrics: Use the | Metric | Before | After | Delta | Status | table format for code quality metrics, complexity scores, and coverage data
- IDs: Use D- prefixed identifiers for debt items in refactor plans
- Bidirectional: If REFACTOR.md references REPORT.md, REPORT.md must reference REFACTOR.md
See assets/templates/ for complete frontmatter schemas and document structures.
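As an illustration, the universal frontmatter schema might render like this minimal sketch (field values are hypothetical placeholders; see assets/templates/ for the authoritative schema):

```yaml
---
title: order-service module analysis
date: 2026-02-17
updated: 2026-02-17
project: my-app
type: module-analysis   # placeholder: use the type defined in the template
status: draft
version: "1.0"
tags: [code-analysis, architecture]
changelog:
  - "1.0: initial analysis"
related:
  - "[[REFACTOR]]"
---
```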
Workflow
Step 1: Discovery
Read and explore the target module/file to understand its structure.
Actions:
- Read the target path — identify all files, directories, and entry points
- Detect the language and framework from file extensions, imports, and config files
- Identify the module boundary (what's inside vs. outside the module)
- List all files that belong to the module
Output: Internal understanding of the module's file structure and technology stack.
Step 2: Deep Analysis
Analyze the code to understand internal behavior.
Actions:
- Identify the module's main responsibilities — what does it do?
- Map key functions/classes and their roles
- Trace the primary execution flow — entry point to output
- Analyze state management — how data flows and transforms
- Analyze error handling — how failures are managed
- List internal dependencies (other modules in the same project)
- List external dependencies (third-party libraries, APIs, services)
Output: Deep understanding of behavior, responsibilities, and dependencies.
Step 3: Communication Mapping
Understand how the module talks to the rest of the system.
Actions:
- Identify what the module consumes (imports, API calls, events listened to)
- Identify what the module exposes (exports, API endpoints, events emitted)
- Classify communication types: synchronous (function calls, HTTP) vs. asynchronous (events, queues, WebSockets)
- Identify shared state (global stores, shared databases, caches)
Output: Clear map of module boundaries and communication channels.
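A communication map like this can later be rendered as a Mermaid sequence diagram. A hypothetical sketch (module and event names are illustrative, not prescribed):

```mermaid
sequenceDiagram
    participant API as api-gateway
    participant ORD as order-service
    participant BUS as event bus
    API->>ORD: POST /orders (synchronous HTTP)
    ORD-->>API: 201 Created
    ORD--)BUS: order.created (asynchronous event)
```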
Step 4: Report Generation
Produce the structured technical report with all findings.
Actions:
- Write the report following the Output Structure (see below)
- Generate Mermaid diagrams for visual understanding
- Save the report to {output_dir}/technical/module-analysis/{module-name}/
- Add a Referencias section at the end of the report (link to REFACTOR.md if v3, and link to any other analysis documents for the same module)
Output: Complete markdown report with diagrams.
Step 5: Refactor Recommendations (v3 only)
If the user requests a v3 analysis, add improvement suggestions.
Actions:
- Identify code smells and architectural issues
- Suggest specific, actionable improvements
- Rate each recommendation by impact and effort
- Prioritize recommendations
- Add a Referencias section linking back to REPORT.md and any related analysis documents
Output: Actionable refactoring roadmap appended to the report.
Output Location
All reports are saved to a central technical documentation directory:
{output_dir}/technical/module-analysis/
└── {module-name}/
├── REPORT.md # Main technical report
└── REFACTOR.md # Refactoring recommendations (v3 only)
Naming convention: Use the module's folder name in kebab-case.
- /src/modules/OrderService → {output_dir}/technical/module-analysis/order-service/
- → {output_dir}/technical/module-analysis/payments/
- /src/services/notification-service.ts → {output_dir}/technical/module-analysis/notification-service/
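The kebab-case conversion above can be sketched as a small helper (an illustrative sketch only; the function name and rules are assumptions, not part of the skill):

```python
import re

def to_kebab_case(name: str) -> str:
    # Insert a hyphen at lowercase/digit -> uppercase boundaries (PascalCase/camelCase),
    # normalize underscores to hyphens, then lowercase everything
    hyphenated = re.sub(r"(?<=[a-z0-9])(?=[A-Z])", "-", name)
    return hyphenated.replace("_", "-").lower()

# to_kebab_case("OrderService") -> "order-service"
```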
Output Structure
See assets/templates/ for complete document structures:
- REPORT.md — Technical analysis report template with Executive Summary, Technical Analysis, Module Communication, Technical Diagrams, Metrics, and Referencias sections
- REFACTOR.md — Refactoring recommendations template (v3 only) with Code Smells, Recommendations, Priority Matrix, Implementation Plan, Impact Analysis, Testing Strategy, and Referencias sections
Key Sections Overview
REPORT.md includes:
- Executive Summary (module overview, purpose, criticality, technology)
- Technical Analysis (responsibilities, key functions, execution flow, state management, error handling, dependencies)
- Module Communication (consumes, exposes, communication types, shared state)
- Technical Diagrams (Mermaid diagrams based on complexity)
- Metrics (code quality metrics using standard format)
- Referencias (bidirectional links to related documents)
REFACTOR.md (v3 only) includes:
- Code Smells (issues with severity ratings)
- Recommendations (actionable improvements with priority, impact, effort)
- Priority Matrix (visual representation of recommendations)
- Implementation Plan (phased refactoring roadmap)
- Impact Analysis (affected components, risk assessment, expected benefits)
- Testing Strategy (validation approach)
- Referencias (link back to REPORT.md)
Analysis Depth Levels
| Level | Name | Includes | Use When |
|---|---|---|---|
| v1 | Explanation | Executive Summary + Technical Analysis + Communication | Quick understanding of a module |
| v2 | Explanation + Diagrams | Everything in v1 + Mermaid Diagrams | Documentation or onboarding (default) |
| v3 | Full Analysis | Everything in v2 + Refactoring Recommendations | Pre-refactoring audit or technical review |
Default: If the user doesn't specify a level, use v2.
Critical Patterns
Pattern 1: Read Before You Write
Always read the actual code before generating any analysis. Never produce a report based on file names, folder structure, or assumptions alone. If a file can't be read, document it as "inaccessible" rather than guessing its contents.
Pattern 2: Explain, Don't Copy
The report explains how code works — it does not reproduce it. Use short inline snippets (1-3 lines) only when necessary to illustrate a specific pattern or behavior. Never paste full functions, classes, or files.
Bad: Pasting a 50-line function into the report
Good: "The function validates the input, calls the payment gateway, and emits an event on success."
Pattern 3: Explicit Unknowns
When information is not available or cannot be determined from the code:
Bad: Making assumptions about what a module probably does
Good: "The module imports an event bus, but the event handler implementations are not visible in this scope. The specific events consumed could not be determined."
Pattern 4: Dependency Honesty
Only list dependencies that are explicitly visible in the code (imports, require statements, config files, dependency injection). If a dependency is suspected but not confirmed, mark it as "suspected" with reasoning.
Pattern 5: Context-Appropriate Diagrams
See assets/helpers/diagram-guidelines.md for detailed Mermaid diagram selection criteria, syntax examples, and best practices. Match diagram complexity to module complexity (simple = flowchart only, medium = flowchart + sequence, complex = flowchart + sequence + class/C4).
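For instance, a simple module might need only a flowchart. A hypothetical sketch (module names are illustrative):

```mermaid
flowchart LR
    Caller[upstream module] -->|sync call| Mod[target module]
    Mod -->|reads/writes| DB[(database)]
    Mod -.->|emits event| Bus[(message bus)]
```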
Pattern 6: Technology-Agnostic Analysis
The analysis framework works for any language or framework. Adapt terminology to match the technology:
| Concept | JavaScript/TypeScript | Python | Go | Java |
|---|---|---|---|---|
| Module | Module/Package | Module/Package | Package | Package |
| Entry point | / export | | | |
| Interface | Type/Interface | Protocol/ABC | Interface | Interface |
| Dependency injection | Constructor/Provider | params | Struct fields | |
Best Practices
Before Analysis
- Confirm the target path exists — verify the module path before starting
- Identify the project type — monorepo, single app, microservice, library
- Check for existing documentation — READMEs, JSDoc, docstrings, OpenAPI specs
- Ask for context if needed — don't guess business requirements
During Analysis
- Start from entry points — find the main export, router, or handler first
- Trace the happy path first — understand the normal flow before edge cases
- Map dependencies as you go — build the dependency graph incrementally
- Note patterns as you see them — architectural patterns emerge from reading, not guessing
- Check test files — tests reveal intended behavior and edge cases
After Analysis
- Review the report for accuracy — every statement must be backed by code you read
- Verify diagram correctness — ensure diagrams match the textual analysis
- Check for missing sections — all required output sections must be present
- Save to the correct location — {output_dir}/technical/module-analysis/{module-name}/
Integration with Other Skills
With universal-planner
Use code-analyzer during the Analysis Phase (Step 1) of universal-planner to understand the current state of modules that will be affected by the planned work.
With the sprint-execution skill (EXECUTE mode)
Before executing a sprint that modifies a module, run code-analyzer to document the "before" state for comparison.
Limitations
- Requires file access: Cannot analyze code that isn't readable via the filesystem. If the user provides code fragments, analysis is limited to what's visible
- No runtime analysis: Analyzes static code only — cannot detect runtime behavior, performance characteristics, or dynamic dispatch patterns
- Single module focus: Analyzes one module at a time. Cross-module analysis requires separate runs and manual correlation
- No automated testing: Does not execute tests or verify that the code works — only analyzes structure and patterns
- Framework detection: May not recognize custom or obscure frameworks. The user can provide framework context to compensate
Post-Production Delivery
After generating the technical report (and refactoring recommendations if v3), offer the user delivery options:
- Sync to Obsidian vault — use the corresponding skill in SYNC mode to move the report to the vault
- Move to custom path — user specifies a destination and files are moved there
- Keep in staging — leave files in .agents/staging/code-analyzer/ for later use
Ask the user which option they prefer.
Troubleshooting
| Issue | Solution |
|---|---|
| Module path doesn't exist | Verify path with user, check for typos, case sensitivity, or moved files |
| Can't determine framework | Ask user to specify, or check project config files |
| Module too large | Break into sub-modules, analyze separately, create top-level summary |
| Dependencies unclear | Mark as "suspected" with reasoning, check DI containers and config files |
| Report seems incomplete | Verify all files read, check for dynamic imports or config-driven behavior |
Example Output
See assets/templates/REPORT.md and assets/templates/REFACTOR.md for complete examples including Executive Summary, Communication Maps, Mermaid diagrams, and all other sections.
Version History
- 2.2 (2026-02-17): Staging pattern migration — deterministic .agents/staging/ output, {output_dir} rename, post-production delivery
- 2.0 (2026-02-11): Obsidian-native output — rich frontmatter, wiki-links, bidirectional references, and metric tables
- 1.0 (2026-01-29): Initial release with v1/v2/v3 analysis depths, Mermaid diagrams, and structured report output
Future Enhancements
Multi-module analysis, dependency graph visualization, automated change detection, test coverage integration, and export to Confluence/Notion.