Generate visual concept maps, flowcharts, architecture diagrams, and relationship diagrams from structured notes or technical content using Mermaid syntax. Use when the user has lecture notes, study materials, or technical documentation and wants visual diagrams to aid understanding. Produces multiple diagram types: concept hierarchy maps, process flowcharts, architecture diagrams, comparison matrices, timeline diagrams, and mind maps. Trigger phrases: 'create diagrams from notes', 'visualize concepts', 'concept map', 'make flowcharts', 'diagram this', 'visual notes'.
```shell
npx skill4agent add prakharmnnit/skills-and-personas concept-cartographer
```

**Concept hierarchy map:**

```mermaid
graph TD
    A[Neural Networks] --> B[Architecture]
    A --> C[Training]
    A --> D[Activation Functions]
    B --> B1[Input Layer]
    B --> B2[Hidden Layers]
    B --> B3[Output Layer]
    C --> C1[Forward Pass]
    C --> C2[Loss Calculation]
    C --> C3[Backpropagation]
    C --> C4[Weight Update]
    D --> D1[Sigmoid]
    D --> D2[ReLU]
```

**Process flowchart (training loop):**

```mermaid
flowchart LR
    A[Input Data] --> B[Forward Pass]
    B --> C[Calculate Loss]
    C --> D{Loss acceptable?}
    D -->|No| E[Backpropagation]
    E --> F[Update Weights]
    F --> B
    D -->|Yes| G[Model Ready]
```

**Architecture diagram (network layers):**

```mermaid
graph LR
    subgraph "Input Layer"
        I1[x1] & I2[x2]
    end
    subgraph "Hidden Layer"
        H1[h1] & H2[h2] & H3[h3]
    end
    subgraph "Output"
        O1[y]
    end
    I1 & I2 --> H1 & H2 & H3
    H1 & H2 & H3 --> O1
```

**Comparison map (activation functions):**

```mermaid
graph TD
    A[Activation Functions] --> B[Sigmoid]
    A --> C[ReLU]
    B --> B1["Range: 0 to 1"]
    B --> B2["Use: Output layer"]
    B --> B3["Problem: Vanishing gradient"]
    C --> C1["Range: 0 to infinity"]
    C --> C2["Use: Hidden layers"]
    C --> C3["Problem: Dead neurons"]
```

**Sequence diagram (one training step):**

```mermaid
sequenceDiagram
    participant D as Data
    participant N as Network
    participant L as Loss Function
    participant O as Optimizer
    D->>N: Forward pass
    N->>L: Predictions
    L->>L: Calculate error
    L->>N: Gradients (backprop)
    N->>O: Current weights + gradients
    O->>N: Updated weights
```

**State diagram (model lifecycle):**

```mermaid
stateDiagram-v2
    [*] --> Untrained
    Untrained --> Training: Start training
    Training --> Evaluating: Each epoch
    Evaluating --> Training: Loss too high
    Evaluating --> Trained: Loss acceptable
    Trained --> Deployed: Deploy
    Deployed --> Training: Retrain
```

**Domain-specific diagram priorities:**

| Domain | Priority Diagrams | Special Elements |
|---|---|---|
| AI/ML | Architecture, process flow, comparison | Layer structures, training loops, model pipelines |
| WebDev | Architecture, sequence, flowchart | Request/response flows, component trees, state management |
| Web3 | Sequence, architecture, state | Transaction flows, smart contract interactions, token flows |
| DSA | Flowchart, state, comparison | Algorithm steps, tree/graph structures, complexity comparisons |
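The domain table above can be sketched as a simple lookup. This is a minimal illustration only — the names, keys, and fallback are assumptions, not the skill's actual implementation; the fallback reflects that the concept hierarchy map is always included.

```python
# Hypothetical sketch: choose priority diagram types by domain.
# Keys, names, and the fallback are assumptions for illustration.
PRIORITY_DIAGRAMS = {
    "ai_ml": ["architecture", "process_flow", "comparison"],
    "webdev": ["architecture", "sequence", "flowchart"],
    "web3": ["sequence", "architecture", "state"],
    "dsa": ["flowchart", "state", "comparison"],
}

def pick_diagrams(domain: str, fallback=("concept_hierarchy",)):
    """Return priority diagram types for a domain, falling back to the overview map."""
    return PRIORITY_DIAGRAMS.get(domain.lower(), list(fallback))

print(pick_diagrams("AI_ML"))    # architecture-first for ML content
print(pick_diagrams("unknown"))  # falls back to the always-included overview map
```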
**Output template:**

# Visual Concept Maps: [Topic]
## Overview Map
[Concept hierarchy - always include this one]
## [Diagram Type 2 title]
[Most relevant additional diagram]
## [Diagram Type 3 title]
[Second most relevant]
## Key Relationships Summary
- [Concept A] depends on [Concept B] because...
- [Concept C] is an alternative to [Concept D] when...
- [Process X] feeds into [Process Y] via...

## Concept Coverage
- Concepts in diagrams: [N] / [N] from inventory
- Concepts not diagrammed: [list] (with reason: "too granular" or "no visual relationship")

**Prerequisite map (learning order):**

```mermaid
graph LR
    A[Linear Algebra] --> B[Neural Network Basics]
    A --> C[Gradient Descent]
    B --> D[Backpropagation]
    C --> D
    D --> E[Training Loop]
    E --> F[PyTorch Implementation]
```

**Difficulty vs. importance quadrant:**

```mermaid
quadrantChart
    title Concept Difficulty vs Importance
    x-axis Low Difficulty --> High Difficulty
    y-axis Low Importance --> High Importance
    Neuron anatomy: [0.3, 0.7]
    Backpropagation: [0.8, 0.9]
    Activation functions: [0.5, 0.8]
    Learning rate tuning: [0.6, 0.7]
```

**Before/after understanding map:**

```mermaid
graph LR
    subgraph Before
        B1["Neural network = black box"]
        B2["Training = magic"]
    end
    subgraph After
        A1["Neural network = layers of math functions"]
        A2["Training = iterative error minimization"]
    end
    B1 -.->|"this lecture"| A1
    B2 -.->|"this lecture"| A2
```
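Assuming the skill's output wraps each diagram in a standard markdown ```` ```mermaid ```` fence, a downstream step can extract the diagram sources — for example to render them or to count diagrams for the Concept Coverage summary. A minimal sketch (the function name and regex are assumptions, not part of the skill):

```python
import re

# Hypothetical post-processing sketch (not part of the skill itself):
# pull mermaid fenced blocks out of generated markdown.
FENCE = chr(96) * 3  # the ``` delimiter, written indirectly so this sketch nests cleanly

def extract_mermaid_blocks(markdown: str) -> list[str]:
    """Return the source of every mermaid fenced block in the document."""
    pattern = re.compile(FENCE + r"mermaid\n(.*?)" + FENCE, re.DOTALL)
    return [m.strip() for m in pattern.findall(markdown)]

# Tiny sample document built programmatically to avoid literal nested fences.
doc = "\n".join([
    "# Visual Concept Maps: Neural Networks",
    "",
    FENCE + "mermaid",
    "graph TD",
    "    A[Neural Networks] --> B[Architecture]",
    FENCE,
    "",
    FENCE + "mermaid",
    "flowchart LR",
    "    A[Input Data] --> B[Forward Pass]",
    FENCE,
])

blocks = extract_mermaid_blocks(doc)
print(len(blocks))                # 2
print(blocks[0].splitlines()[0])  # graph TD
```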