easysdd-feature-implement
At this point, the user has already signed off on the plan. Your task is to turn the plan into code. That sounds straightforward, but the real difficulty is not writing the code itself; it is deciding what to do when implementation runs into situations the plan does not cover. Forcing ahead makes the plan a mere formality, while stopping to discuss feels troublesome. The entire set of rules below is designed to make "stopping" the default action.
See Section 0 of easysdd/reference/shared-conventions.md for shared paths and naming conventions. By this stage, the feature directory has already been created by brainstorm or design.
Three Principles for Writing Code
The specific rules will be covered in "Startup Checks" and "Core Constraints During Implementation" below. This section first explains three overarching principles — they determine your default orientation when writing code. The specific rules are the applications of these three principles in common scenarios.
1. Default to writing the least code possible
Only write exactly what is explicitly required in the current step. Do not casually add configurable items, abstraction layers, parameter switches, or defensive fallbacks that "might be needed later". A judgment criterion: After writing a piece of code, if you think "Should I add X to make it complete?", first ask yourself whether X is perceivable by the user in the current step — if not, don't add it.
After finishing the entire implementation, if you find that 200 lines could be clearly expressed in 50 — rewrite it. Extra code is not neutral; it becomes a burden for future maintainers who have to first understand it, doubt it, and worry about missing some invariant.
2. Only modify what needs to be modified; do not "improve" adjacent code casually
When opening a file to modify a certain function, only modify that function. If other functions in the same file have ugly styles, strange naming, or outdated comments — do not touch them unless they directly conflict with your current changes. The style of newly written code should match the existing style in the current file, even if you don't usually write this way.
"Casual improvements" mixed into a PR prevent users from quickly seeing exactly what was changed and why. A clean feature PR can be diluted into a "mess of comprehensive changes" by style adjustments, variable renaming, and adjacent function rewrites, multiplying the review cost several times. If you really find something worth modifying, record it as a subsequent issue in the format of "Casual Discovery" as described below.
Dead code is handled the same way: if your current change turns an import, variable, or function into dead code, delete it. Dead code that predates your change stays in place and gets recorded as a casual discovery.
3. Do not make decisions on matters not specified in the design
If you encounter corners not covered by the design halfway through writing code (a new boundary condition, an error path not specified how to handle, an out-of-scope file that needs to be modified) — the default action is to stop and discuss with the user based on the design, not to choose a reasonable approach on your own and continue writing.
"Stop when you have the urge to create a patch branch" and "Terminology Guard" below are two typical applications of this principle; but the scope is broader — not limited to patch branches and terminology, any moment when you "make a choice that the design didn't explicitly state" triggers this rule.
Startup Checks
Go through these checks before starting to code:
1. Is the plan file sufficient to support implementation?
Open the plan file and first check the frontmatter:
- The file header has YAML frontmatter with the required fields
- The feature-identifier field matches the current feature directory
- The required list field is not empty and has at least 2 items
Then check the section content — the check items for standard design and fastforward design are different:
Standard design (section numbers 0/1/2/3/4):
- Section 0 (Terminology Conventions) has content
- Section 2 (Interface Contract) has specific code pointers
- The implementation plan in Section 3 (Implementation Tips) specifies exact paths and functions
- The implementation sequence in Section 3 has clear steps and exit signals
- The test design in Section 3 covers each feature point, with test constraints / verification methods / use case skeletons for each
Fastforward design (section numbers 0/1/2/3):
- Section 0 (Requirement Summary) includes "Explicitly Not Doing"
- Section 1 (Design Plan) has change points (file path + function / type name)
- Each item in Section 2 (Acceptance Criteria) is verifiable (operation steps + expected results)
- The implementation steps in Section 3 have clear steps and exit signals
If any item fails, stop and tell the user to first go back to the design phase to complete it. The reason: missing items in the plan have to be filled in on-site during implementation, and on-site filling means the user never reviewed those decisions, which bypasses the checkpoint.
2. Does {slug}-checklist.yaml exist and is it usable?
See easysdd/reference/shared-conventions.md for the lifecycle of {slug}-checklist.yaml. In this phase, only the steps section is consumed and advanced:
- The file exists, and its feature-identifier field matches the current feature directory
- The steps list is not empty, and each item's status is the initial value (some may already be done if resuming from a previous interruption, which is normal)
- If it does not exist, stop and ask the user to go back to the previous phase and generate it
3. Read the full context
Must-read before starting:
- Full content of the plan document
- Requirement source (user description + brainstorm note, if any)
- All existing code files mentioned in the contract examples of Section 2 of standard design / change points of Section 1 of fastforward design — only read the relevant functions, no need to read the entire file
4. Confirm with the user which step to start from
Usually Step 1, but if resuming from a previous interruption, check which steps are already done in {slug}-checklist.yaml and start from the first step that is not.
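As a concrete reference, the checks in steps 2 and 4 assume a checklist file shaped roughly like the sketch below. Only the `action`, `exit_signal`, and `status` fields (and the `done` value) are confirmed by this document; the `feature` field, the `pending` value, and the step contents are illustrative assumptions, since the authoritative schema lives in easysdd/reference/shared-conventions.md.

```yaml
# Hypothetical sketch of {slug}-checklist.yaml. Only action / exit_signal /
# status (and the value "done") are confirmed by this document; everything
# else is an illustrative assumption.
feature: example-feature        # assumed field: should match the feature directory
steps:
  - action: "Add the new type to src/types.ts"
    exit_signal: "tsc passes with the new type exported"
    status: pending             # assumed initial value; flipped to done per step
  - action: "Wire the new type into the request handler"
    exit_signal: "handler unit test passes"
    status: pending
```

Each step carries its own exit_signal precisely so that, when resuming, "where to start" can be read off the status column instead of reconstructed from memory.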
Core Constraints During Implementation
These are not arbitrary bans; each has a specific cost behind it. Only by understanding the reasons can you avoid rigid execution.
Strictly follow the step sequence in {slug}-checklist.yaml
Execute the steps list in order; do not merge or skip steps. Immediately flip each step's status to done after completing it.
The most common violation is "casually doing the next step as well" — why is this not allowed? Because splitting actions into steps in the plan has a purpose: each step corresponds to an independently verifiable exit signal. Combining two steps means that when a problem occurs, you don't know which step introduced it, and you can't roll back to a clean intermediate state.
Do not make changes outside the plan
If you find points worth refactoring while reading code (refer to the "Identify During Implementation" section of the relevant reference document), and they are not within the scope of this feature, record them as subsequent issues and do not modify them casually.
Recording format:
```markdown
> Casual Discovery: {File:Line Number} {Brief description of the problem}. Not within the scope of this iteration, recorded for subsequent issue.
```
Why is this so strict? Casually modified code is not in the plan, so it cannot be verified during acceptance, and future maintainers reading git blame cannot tell which changes belong to this feature and which were incidental. Mix a handful of "casual" changes into the PR and it becomes unclear what was actually changed.
Terminology Guard
Only applicable to standard design: All newly written type names, function names, and variable names must be cross-checked against Section 0 (Terminology Conventions) of the plan document. New concepts not in the document are not allowed. If you feel the need to introduce a new concept, first stop to modify Section 0 of the plan document, grep for conflicts, confirm with the user, and then continue writing code.
The cost behind this rule is specific: Terminology conflicts mean that in the future, the same concept will have two names in the code, or two different concepts will share the same name — the latter is particularly fatal, as it will make search completely ineffective.
Fastforward design does not have a formal terminology table, but the same principle applies: when you need to create a new concept name (type / function / key variable), grep the current code for the same or similar names. If a conflict is found, stop to rename it or discuss with the user based on the plan.
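The grep check above can be as simple as the sketch below. The concept name RetryPolicy and the directory layout are invented for illustration; substitute the real candidate name and your actual source root.

```shell
# Hypothetical sketch: check a candidate concept name for conflicts before
# introducing it. "RetryPolicy" and the directory are illustrative only.
mkdir -p /tmp/easysdd-demo/src
echo 'class RetryPolicy {}' > /tmp/easysdd-demo/src/retry.ts

if grep -rqn "RetryPolicy" /tmp/easysdd-demo/src; then
  echo "conflict: name already in use -- stop and discuss with the user"
else
  echo "no conflict: safe to add to Section 0 and continue"
fi
```

Running grep before writing, rather than after, is the cheap version of the terminology table: it costs one command and prevents the "two concepts, one name" failure mode described above.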
Stop when you have the urge to create a patch branch
If, halfway through writing, you feel the urge to write code like `if (special case) { special handling }`, stop.
In a new feature, a patch branch almost always means the plan does not cover this situation. Pushing ahead produces "special logic added just to make the code run", and future maintainers will have no idea why the branch exists. The correct approach is to discuss with the user against the plan: add the situation to the design, cut it, or explicitly mark it as a legacy issue.
Code Quality Reflection Check
In addition to the process constraints above ("No skipping steps / No out-of-plan changes / Terminology Guard / No patch branches"), there is a set of reflection checks for code quality; see Section 7 of easysdd/reference/shared-conventions.md.
Core idea: it is not "must split if more than N lines" but "stop and ask yourself when you hit situation X". Each item corresponds to a pit that AI falls into by default: appending to an already long file, adding methods to an already heavy class, letting a function do more and more without splitting, writing `if (special user handling)` patch branches, copy-pasting, adding a fourth parameter, piling code into a universal util. Stop the moment one of these triggers while writing.
If the conclusion of a reflection check is "need to split / need a new file / need to rename / need to extract a shared layer", and that action is beyond the scope of the existing steps in {slug}-checklist.yaml, discuss with the user before deciding; do not split quietly and keep writing.
Submit a Unified Report After Completion
After completing all steps, submit a report using the fixed template below, then stop and wait for user review.
Why a fixed template? Vague reports (e.g., "Basically done", "Should be okay") shift all verification responsibility to the user. The fixed template forces you to clearly state which files were modified, whether any out-of-plan content was touched, and whether new concepts were introduced. With this report, the user can directly check the relevant points without re-reading the entire git diff.
```markdown
## Implementation Completion Report
### Modified Files
{Run git status and paste the actual output}
### Modified Functions / Types (Grouped by Step)
**Step N: {Step Name}**
- file:line Function Name Change Type (Add / Modify / Delete)
- ...
### Did you touch any files outside the plan?
{Yes / No. If yes, explain why and whether the plan document has been updated synchronously}
### Did you introduce new concepts / abstractions not in the plan document?
{Yes / No. If yes, explain that the plan document has been updated (Section 0 of standard design for terminology conventions; Section 1 of fastforward design for design plan) and grep has been done to prevent conflicts}
### Self-Check of Code Quality Reflection
{Compare with Section 7 of shared-conventions. Did any reflection signals trigger during this implementation (appending to large files / adding methods to large classes / functions longer than one screen / special branches / copy-paste / multi-parameter functions / piling into util)? If triggered, explain how it was handled (stopped to split / discussed with user and added to steps / confirmed natural aggregation, no need to split); if none triggered, write "No triggers"}
### Check of Implementation Sequence Exit Signals
{Compare with the steps in {slug}-checklist.yaml; list action + exit_signal + status (should all be done)}
### Self-Check of Test Constraints
**Standard Design**:
{Compare with the test design in Section 3 of the plan. Does the current implementation meet the test constraints for each feature point? How is each guaranteed (type system / unit test / integration test / runtime assert)?}
**Fastforward Design**:
{Compare with the acceptance criteria in Section 2 of the plan; check each item for compliance}
```
After submitting the report, stop and wait for review. If the user provides modification suggestions, make the changes and submit a brief confirmation again, repeating until the user explicitly approves to enter the acceptance phase.
How to Implement Test Cases
The key use case skeletons in the test design of Section 3 of the plan are the input for implementing tests, not decoration — write complete test cases based on the skeletons.
Note a common misunderstanding: Test pass ≠ Test constraints satisfied. A test pass only means the test cases you wrote passed, but it does not mean each test constraint is covered by a test case. Therefore, during the report, you must confirm item by item that each test constraint for each feature point is covered by a test case.
If a test constraint is guaranteed by the type system (e.g., TypeScript type signature directly excludes a certain call), explain in the report "This type signature has been implemented, guaranteed at compile time".
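For intuition, here is a minimal sketch of that idea in TypeScript. The names ExportFormat and exportReport are invented for illustration; the point is that a union type in the signature discharges the constraint at compile time, so the report can cite the type signature instead of a runtime test case.

```typescript
// Hypothetical sketch: a test constraint ("format must be csv or json")
// guaranteed by the type system instead of a runtime test case.
// ExportFormat and exportReport are illustrative names, not from any plan.
type ExportFormat = "csv" | "json";

function exportReport(format: ExportFormat): string {
  // The union type makes invalid formats unrepresentable at this call site.
  return `report.${format}`;
}

// exportReport("xml");  // rejected by tsc: "xml" is not assignable to ExportFormat.
// In the completion report, this constraint is listed as "guaranteed at compile time".
console.log(exportReport("csv"));
```

Note that the compile-time guarantee covers only the call-site constraint; any behavioral constraints on the function body still need ordinary test cases.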
Exit Conditions
After exiting, tell the user: "All steps are completed, and the plan document has been updated synchronously. The next step is Phase 3: Acceptance Closure. You can trigger the easysdd-feature-acceptance skill."
Do not casually start writing the acceptance report yourself — the acceptance phase requires an independent checklist rhythm. Entering early will invalidate the gatekeeping role of acceptance.
Common Pitfalls
- Submitting a completion report after only writing part of the code — the report is only submitted once after all steps are completed
- Writing "Modified relevant files" in the report instead of listing specific file:line
- Casually modifying code outside the plan
- Introducing new types / concepts without updating the plan document (modify Section 0 of standard design for terminology conventions; modify Section 1 of fastforward design for design plan)
- Adding `if (user is X) { special handling }` patch branches without stopping to check against the plan
- Entering the acceptance phase on your own before the user's review is approved
- Not implementing any of the use case skeletons in the test design, or not verifying each test constraint one by one