# easysdd-learning
Every time you work on a feature or fix an issue, you leave behind spec files. However, spec documents record "what was done" and "how it was done"; they do not record "what pitfalls were encountered" or "what better practices were discovered". Teams without knowledge retention keep solving the same problem repeatedly: a problem that takes real research to solve the first time takes only a few minutes the next time, provided it's documented. easysdd-learning is designed to add a "learning card" for each non-trivial piece of engineering practice.
Two tracks:
- Pitfall Track: Record the problems encountered, root causes, and solutions to avoid falling into the same pitfall next time
- Knowledge Track: Record discovered best practices, workflow improvements, and reusable patterns
Both tracks are written to the archive directory (shared with the other archive sub-skills; see Section 1 "Archived Documents" in `easysdd/reference/shared-conventions.md` for the classification rules), in a unified format that future AI agents and humans can retrieve. Documents produced by this skill carry `doc_type: learning` in the frontmatter, and filenames follow the pattern `YYYY-MM-DD-learning-{slug}.md` (date first, then the fixed type segment `learning`), which serves as this skill's identifier in the shared directory.
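As a sketch, the filename for a new card can be composed from the archive date and a topic slug; the slug below is a made-up example, not a real card:

```shell
# Compose the archive filename from today's date and a topic slug.
# The slug "flaky-ci-cache" is hypothetical, purely for illustration.
slug="flaky-ci-cache"
fname="$(date +%F)-learning-${slug}.md"   # %F expands to YYYY-MM-DD
echo "$fname"
```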
## When to Trigger
Trigger if any of the following conditions are met:
| Scenario | Description |
|---|---|
| Completing a feature workflow | `easysdd-feature-acceptance` proactively asks "Would you like to record the learnings from this work?" per `easysdd/reference/shared-conventions.md` |
| Completing an issue workflow | The issue workflow proactively asks "Would you like to record this pitfall?" per `easysdd/reference/shared-conventions.md` |
| User initiates actively | Phrases like "record this", "document knowledge", "learning", "document learnings", etc. |
| Solved a one-time difficult problem | An engineering problem that took significant time to solve but is not part of a feature / issue workflow |
Use a single, casual sentence when recommending proactively. If the user says "no", drop it immediately; repeated prompts may make the user feel the AI is overstepping.
## What to Write for Each Track
Pitfall Track applies to: Debugged bugs, bypassed configuration traps, environment issues, integration failures... all experiences where "things should have worked but didn't".
Knowledge Track applies to: Discovered best practices, workflow improvements, architectural insights, reusable design patterns... all learnings that "should be the default approach going forward".
The frontmatter fields, body templates, and complete examples for both tracks live in a reference file in the same directory. This skill's documentation retains only the judgment and process rules.
## Workflow Phases
### Phase 1: Identify Source (Automatic)
Extract from the current conversation context:
- Source Type: Feature workflow / Issue workflow / Independent problem
- Associated Artifacts: Feature directory path / Issue directory path (if available, for reference in the document's "Source" field)
- Initial Track Classification: Pitfall or Knowledge. Judgment criteria — "fixed something that was broken" = Pitfall; "discovered a better approach" = Knowledge. Write two separate documents if both apply.
If the source is unclear, ask the user one clarifying question; do not guess.
### Phase 1.5: Check for Duplicates and Intent Diversion (Mandatory)
Follow Section 6, Items 5 / 6 in `easysdd/reference/shared-conventions.md`:
- If the user mentions "modify / update / supplement a specific learning" or explicitly points to an old document → take the update-existing-entry path directly
- Otherwise, use the "Search Tool" below to search for the current topic / component. If similar old documents are found, present the candidates to the user and let them choose: Update / Supersede / Truly different topic

Update Path: Read the old document → align with the user on which sections to modify (common scenarios include adding newly encountered pitfalls, or filling in a root cause that was "not found at the time") → draft the diff → write back to the original file and update its frontmatter accordingly; do not create a new file.
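The duplicate check can be sketched as a small shell step. The `component~=ci-cache` filter value and the branch messages here are hypothetical illustrations, not prescribed values:

```shell
# Count existing learning cards touching the same component (hypothetical filter);
# stderr is silenced so a missing tool simply yields zero hits in this sketch.
hits="$(python easysdd/tools/search-yaml.py --dir easysdd/compound \
          --filter doc_type=learning --filter component~=ci-cache 2>/dev/null | wc -l)"
if [ "${hits:-0}" -gt 0 ]; then
  echo "candidates found: ask the user to choose Update / Supersede / different topic"
else
  echo "no similar card found: proceed to Phase 2"
fi
```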
### Phase 2: Refine Key Points (Dialogue with User)
Ask one question at a time; do not hand the user a large form to fill out.
For Pitfall Track, ask:
- "What was the initial phenomenon you observed?"
- "Which solutions did you try that didn't work?" (Encourage users to write this even if they think it's "nothing" — failed attempts are the most valuable information for future team members; knowing which paths don't work saves a lot of time)
- "How did you finally identify the real root cause?"
- "Could this be detected earlier next time? How?"
For Knowledge Track, ask:
- "In what scenarios is this pattern you discovered most valuable?"
- "What problems would arise if this approach is not followed?"
- "Are there any counterexamples where this doesn't apply?"
Skip a question if the user says "nothing" or "skip" — it's better to have fewer sections than to fill the document with empty words.
### Phase 3: Confirm Content (AI Drafts, User Reviews)
- AI drafts the complete learning document (including YAML frontmatter + all body sections)
- Show the full draft to the user for review at once
- Write to the file after user confirmation; adjust according to user feedback if there are modifications
### Phase 4: Archive
- New Document Path: Write to the archive directory with a filename following `YYYY-MM-DD-learning-{slug}.md` (use the archive date, not the date the problem occurred), and include the required fields at the top of the frontmatter
- Update Path: Write back to the original file located during Phase 1.5 and update its frontmatter
- Supersede Path: Handle the old and new documents according to Section 6, Item 5 in `easysdd/reference/shared-conventions.md`
- Report the complete file path after writing
### Phase 5: Discoverability Check
After writing, check whether the project's AI entry files (such as AGENTS.md) contain instructions directing the AI to the archive directory. If not, ask the user whether to add a line; do not modify the file without permission, only prompt and let the user decide. Changes to entry files like AGENTS.md affect the team's guidance for AI, so the user should make the final call.
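As a sketch, the check itself can be a simple grep. The entry-file name and the archive path below follow this document's examples, but treat both as assumptions for your repo:

```shell
# Does the entry file already tell AI about the archive directory?
# "AGENTS.md" and "easysdd/compound" are assumptions taken from this doc's examples.
entry="AGENTS.md"
if [ -f "$entry" ] && grep -q "easysdd/compound" "$entry"; then
  status="discoverable"
else
  status="needs-pointer"   # prompt the user; never edit the entry file yourself
fi
echo "$status"
```

Either way, the actual edit to the entry file stays with the user.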
## Search Tool
Complete syntax and examples can be found in `easysdd/reference/tools.md`. This section lists only the typical queries specific to learning documents.
```bash
# Filter pitfalls by track
python easysdd/tools/search-yaml.py --dir easysdd/compound --filter doc_type=learning --filter track=pitfall --filter severity=high

# Search for related learnings by component
python easysdd/tools/search-yaml.py --dir easysdd/compound --filter doc_type=learning --filter component~={component-name}

# Check for duplicates after archiving
python easysdd/tools/search-yaml.py --dir easysdd/compound --filter doc_type=learning --filter tags~={primary-tag} --json
```
## Guardrails
Shared guardrails for archival workflows (add-only, quality over quantity, do not write on behalf of users, discoverability, check for duplicates after archiving) can be found in Section 6 of
easysdd/reference/shared-conventions.md
. Rules specific to this skill:
- Do not mix with spec — learning documents are not spec documents and must not be placed in spec directories; likewise, spec documents must not be placed in the learning archive directory
- Only recognize own doc_type — read and write only documents with `doc_type: learning`; do not interact with documents of other doc_type values in the directory