# Implementation Workflow

## Core Principles
- Execute tasks by dependency - Pick any task with no unresolved dependencies and execute it
- Complete each task fully - Implementation → Verification → Self-review → External review as one unit per task
- Complete all tasks without stopping - NEVER stop mid-workflow; continue until all tasks are finished
## Reference

This workflow uses files from the `.tasks/{YYYY-MM-DD}-{nn}-{slug}/` directory created by impl-plan:
- plan.md - Human-readable plan with task descriptions, file paths, and acceptance criteria
- plan.json - Machine-readable task list for tracking progress
- memory.md - Learnings recorded during task execution (created by this workflow)
See plan-json-schema.md for:
- Schema definition of plan.json
- yq commands to query next executable task and mark tasks complete
## Documentation Language

All documents under `.tasks/` must be written in English.
## Workflow

### Phase 1: Task Planning
Load the implementation plan and register tasks as todos:
- Load the implementation plan from plan.md and plan.json
- Register all tasks as todos (all start as "pending")
- Ensure review is included after implementation phase
#### Loading the Plan

Read plan.md and plan.json from the `.tasks/{YYYY-MM-DD}-{nn}-{slug}/` directory.
#### Register Todos

Register and track todos from the task array in plan.json using your environment's task management mechanism (e.g., todo tool, task list, or equivalent):
- 1 task = 1 todo (strict 1:1 mapping)
- DO NOT combine multiple tasks into a single todo
- Use the task prefix as the todo ID (e.g., B1, F1)
- Use the task title as the todo content (e.g., "B1: Create User Model")
- If a task is already marked completed in plan.json, register it as "completed" (for resuming interrupted work)
- If a task is marked in progress in plan.json, reset it to pending and register it as "pending" (the previous execution was interrupted; re-execute from scratch)
- If a task is pending, register it as "pending"
- Update todo status as each task completes
Example - If plan.json has 3 tasks (B1, B2, F1), create exactly 3 todos:
Todo 1: id="B1", content="B1: Create User Model"
Todo 2: id="B2", content="B2: Add API endpoints"
Todo 3: id="F1", content="F1: Build login form"
WRONG: Creating a single todo like "Implement B1, B2, and F1" that combines multiple tasks
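As a sketch, the registration rules above can be expressed in code. The field names (`tasks`, `prefix`, `title`, `status`) and the status values are assumptions about the plan.json schema, not its documented definition:

```python
def register_todos(plan):
    """Map each plan.json task to exactly one todo (strict 1:1).

    Schema assumption: plan["tasks"] is a list of dicts with
    "prefix", "title", and "status" keys.
    """
    todos = []
    for task in plan["tasks"]:
        if task["status"] == "completed":
            todo_status = "completed"  # resuming: keep finished work
        else:
            # An in-progress task means the previous run was interrupted:
            # reset it to pending and re-execute from scratch.
            task["status"] = "pending"
            todo_status = "pending"
        todos.append({
            "id": task["prefix"],                            # 1 task = 1 todo
            "content": f'{task["prefix"]}: {task["title"]}',
            "status": todo_status,
        })
    return todos
```

Note that the function also normalizes plan.json in place, so an interrupted task is re-registered as pending in both places.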
### Phase 2: Task Execution
Execute tasks based on dependency resolution. For each task, perform implementation, self-review, and external review as a single unit.
Task Execution Loop:
- Get next task: Find the next executable task from plan.json:
  - The task's status is pending
  - All of its dependencies are completed (or it has no dependencies)
- Mark task as in progress: Set the task's status to in progress in plan.json
- Read task details: Get task details from plan.md (description, file paths, acceptance criteria, etc.)
- Implement: Execute the task implementation
- Verify: Run verification checks (see Verification below)
- Self-review: Review and fix issues (see Self-Review below)
- External review: Request external review via subagent (see External Review below)
- Complete task:
- Mark the task as complete in plan.json (set its status to completed)
- Record learnings in memory.md (see Memory Recording below)
- Git commit all changes (see Git Commit below)
- Update todo status
- Repeat: Go to step 1 until all tasks are complete
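The dependency-resolution step can be sketched as follows, again under the assumption that each plan.json task carries `prefix`, `status`, and `depends_on` fields:

```python
def next_executable_task(tasks):
    """Return a task whose status is pending and whose dependencies are
    all completed (or empty), or None if nothing is currently executable.

    Schema assumption: each task is a dict with "prefix", "status",
    and an optional "depends_on" list of prefixes.
    """
    done = {t["prefix"] for t in tasks if t["status"] == "completed"}
    for t in tasks:
        if t["status"] == "pending" and all(d in done for d in t.get("depends_on", [])):
            return t
    return None  # either all tasks are done or the remaining ones are blocked
```

Calling this after each completed task yields any valid dependency order without imposing a strict sequence.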
#### Verification
After implementing each task, run verification checks:
- Static Analysis
  - Run linter (eslint, ruff, golangci-lint, etc.)
  - Run type checker if applicable (tsc, mypy, etc.)
- Test Execution
  - Run existing tests to detect regressions
  - Run new tests for implemented features
- Build Check
  - Verify the project builds successfully
- Acceptance Criteria
  - Verify all acceptance criteria defined in plan.md for this task are met
  - Each criterion must be explicitly checked and confirmed
Verification Loop (MUST complete before proceeding):
- Run all verification checks (including acceptance criteria)
- If ANY check fails or ANY acceptance criterion is not met:
- Fix the issues (do NOT just report them)
- Return to step 1 and re-run ALL checks
- Only proceed to self-review when ALL checks pass and ALL acceptance criteria are met
CRITICAL: Reporting issues without fixing them is NOT acceptable. The verification loop MUST continue until all checks pass and all acceptance criteria are satisfied.
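The loop's fix-then-re-run-everything shape can be skeletonized like this; `run_all_checks` and `fix_issues` are placeholders for whatever commands and repairs the task actually requires:

```python
def verification_loop(run_all_checks, fix_issues, max_rounds=10):
    """Re-run ALL checks until every one passes.

    run_all_checks() -> list of failing checks (empty means all pass);
    fix_issues(failures) repairs them. Both are caller-supplied; the
    max_rounds safety valve is an assumption, not part of the workflow.
    """
    for _ in range(max_rounds):
        failures = run_all_checks()
        if not failures:
            return True        # all checks pass: proceed to self-review
        fix_issues(failures)   # fix the issues, do not merely report them
    raise RuntimeError("verification did not converge")
```

The key property is that after any fix, every check runs again, so a fix that breaks a previously passing check is caught.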
#### Self-Review
After verification passes, perform self-review:
- Review implemented code for:
- Correctness and adherence to requirements
- Code quality and best practices
- Potential bugs and edge cases
- Security concerns
- Performance issues
- Based on review findings, determine response based on fix complexity (NOT issue severity):
- Simple/Moderate fixes: Fix autonomously without user confirmation
- Complex fixes requiring significant changes: Consult user before proceeding
- For autonomous fixes:
- Apply fixes directly
- Re-run verification
- Re-run self-review
- Repeat until all issues are resolved
- Proceed to External Review ONLY when self-review passes with no issues
Fix Autonomously (regardless of issue severity):
- Localized code changes within a few files
- Bug fixes and error corrections
- Code style and formatting issues
- Missing error handling
- Performance improvements within current architecture
- Documentation improvements
- Test coverage additions
- Refactoring within existing patterns
Require User Confirmation:
- Changes requiring significant architectural restructuring
- Modifications spanning many files or modules
- Changes that fundamentally alter the implementation approach
- Trade-offs between conflicting requirements
#### External Review
After self-review passes, perform external review using a subagent:
IMPORTANT: External review is high-cost. Resolve all self-review issues BEFORE requesting external review.
- Launch or resume review subagent
  - If no review subagent exists yet, launch one and store the agent ID in session memory
  - If a review subagent already exists, resume it using the stored agent ID
  - If the subagent has been terminated for any reason, launch a new one and store the new agent ID
- Provide task context
  - Pass the current task ID (UUID) and task prefix
  - Instruct the subagent to read the corresponding section in plan.md for acceptance criteria, target files, and design intent
  - Provide the list of files changed in this task
  - Instruct the subagent to read the relevant codebase (changed files and their surrounding context)
- Process review findings
  - Identify all issues and suggestions from the subagent response
- If issues exist:
  - Fix all identified issues
  - Re-run verification
  - Perform self-review again
  - Resume the same subagent using the stored agent ID
  - Repeat until external review passes
- If no issues:
  - External review passed
  - Mark the task as complete and proceed to the next task
  - Do NOT terminate the subagent (it will be reused for the next task)
#### Memory Recording

After external review passes, record learnings from this task in `.tasks/{YYYY-MM-DD}-{nn}-{slug}/memory.md`:
- Create or update memory.md in the task directory
- Title: Use `# {Plan Title} Implementation` as the document title (e.g., `# User Authentication Implementation`)
- Write in English - memory.md is a structured data source for generating other artifacts
- Add a new section for this task using the task prefix as the heading (e.g., B1)
- Use the entry template for each learning (see below)
##### Entry Template
Each learning MUST use the following structure:
```markdown
### <Category>: <Title>
**Context**: What situation triggered this (1-2 sentences)
**Problem**: What specifically went wrong or was unexpected
**Resolution**: How it was resolved (MUST include concrete examples: code snippets, config values, commands, error messages, etc.)
**Scope**: `codebase` | `task-specific`
```
- `codebase`: Applies to any development in this codebase → candidate for agent instruction files (AGENTS.md / CLAUDE.md)
- `task-specific`: Specific to this implementation → stays only in memory.md, valuable for future maintenance and debugging of this feature
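For illustration, a hypothetical filled-in entry (the scenario, file names, and values are all invented):

```markdown
### Configuration: Test database requires explicit timezone
**Context**: Integration tests for the user model passed locally but failed in CI.
**Problem**: Timestamps were compared in local time, while the CI database runs in UTC.
**Resolution**: Set `timezone: UTC` in the test configuration and construct fixture timestamps with `datetime.now(timezone.utc)`.
**Scope**: `codebase`
```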
##### What to Record
Both codebase-wide and task-specific learnings:
- Technical insights specific to this codebase (`codebase`)
- Workarounds for library/framework quirks (`codebase`)
- Configuration or environment discoveries (`codebase`)
- Code patterns that worked well or didn't (`codebase`)
- Testing strategies that proved effective (`codebase`)
- Deviations from the original plan and why (`task-specific`)
- Implementation decisions and their rationale (`task-specific`)
- Integration details specific to this feature (`task-specific`)
- Edge cases encountered and how they were handled (`task-specific`)
##### What NOT to Record
- Generic programming knowledge (not specific to this codebase)
- Information already documented elsewhere (README, docs, etc.)
See memory.md for example format.
#### Git Commit
After recording learnings, commit all changes for this task:
- Check for commit message rules
  - Look for project-specific commit conventions (e.g., a CONTRIBUTING.md, a commit template, or repository rules)
  - If rules exist, follow them
- Default to Conventional Commits
  - If no project-specific rules exist, use the Conventional Commits format: `<type>(<scope>): <description>`
  - Common types: feat, fix, docs, refactor, test, chore
  - Scope: affected module (e.g., auth, api, ui)
  - Example: `feat(auth): add user authentication endpoint`
- Stage and commit
  - Stage all changes (if the `.tasks/` directory is not gitignored, include it as well)
  - Create commit with appropriate message
  - Do NOT push (user will decide when to push)
Note: If the `.tasks/` directory is not gitignored, including it in the commit ensures consistency when resuming interrupted work. If it is gitignored, it will be automatically excluded and the commit will proceed normally.
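Assuming no project-specific rules exist, the Conventional Commits fallback can be sketched as a small helper (the set of allowed types mirrors the list above):

```python
def commit_message(ctype, description, scope=None):
    """Build a Conventional Commits message: <type>(<scope>): <description>.

    Intended only as the default when no project-specific commit rules exist.
    """
    allowed = {"feat", "fix", "docs", "refactor", "test", "chore"}
    if ctype not in allowed:
        raise ValueError(f"unknown commit type: {ctype}")
    prefix = f"{ctype}({scope})" if scope else ctype
    return f"{prefix}: {description}"
```

The scope is optional in the Conventional Commits specification, so the helper omits the parentheses when none is given.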
### Phase 3: Completion
Before reporting completion to user:
- Verify all tasks in plan.json have status completed
- Verify all todos are marked as completed
- If any task is incomplete, return to Phase 2 and complete remaining tasks
- Update agent instruction files with learnings (see below)
- External review of agent instruction updates: Resume the review subagent and request review of the updated agent instruction files (see below)
- Git commit the updates (use commit message: `docs: update agent instructions with learnings from {slug}`)
- Terminate review subagent: If a review subagent was used and a terminate function is available, terminate it by specifying the stored agent ID. If termination is not supported, do nothing.
- Provide summary of completed work
#### Update Agent Instruction Files

Integrate universally applicable learnings from memory.md into agent instruction files (AGENTS.md and/or CLAUDE.md).
##### Step 1: Determine Update Targets

Scan the repository for AGENTS.md and CLAUDE.md files (they can exist at root AND in subdirectories). Then determine which files to update:
| Condition | Update target |
|---|---|
| Only AGENTS.md exists | AGENTS.md |
| Only CLAUDE.md exists | CLAUDE.md |
| Both exist independently | Both AGENTS.md and CLAUDE.md |
| One references the other (e.g., CLAUDE.md contains `@AGENTS.md` or "See AGENTS.md") | Only the file with actual content (skip the reference-only file) |
Reference detection: A file is considered reference-only if its primary content is a reference to the other file (e.g., `@AGENTS.md` or "See AGENTS.md"). Such files should NOT be updated; only the file with substantive content is the update target.
If neither file exists anywhere, create AGENTS.md at the repository root.
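The decision table above might be sketched as follows for a single directory. The reference-only heuristic (a short file that mentions the other file's name, with 200 characters as the cutoff) is an assumption for illustration, not a specified rule:

```python
import os

def is_reference_only(path, other_name):
    """Heuristic: a short file whose content mentions the other file
    (e.g. '@AGENTS.md' or 'See AGENTS.md') is treated as reference-only."""
    with open(path, encoding="utf-8") as f:
        text = f.read().strip()
    return len(text) < 200 and other_name in text

def update_targets(directory):
    """Return the agent instruction files in `directory` to update,
    following the decision table. An empty list means the caller should
    create AGENTS.md at the repository root."""
    agents = os.path.join(directory, "AGENTS.md")
    claude = os.path.join(directory, "CLAUDE.md")
    has_a, has_c = os.path.exists(agents), os.path.exists(claude)
    if has_a and has_c:
        if is_reference_only(claude, "AGENTS.md"):
            return [agents]          # CLAUDE.md just points at AGENTS.md
        if is_reference_only(agents, "CLAUDE.md"):
            return [claude]          # AGENTS.md just points at CLAUDE.md
        return [agents, claude]      # both exist independently
    if has_a:
        return [agents]
    if has_c:
        return [claude]
    return []
```

In a real run this would be applied to the root and to every subdirectory that contains either file.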
##### Step 2: Find All Target Files
- Target files can exist at repository root AND in subdirectories
- Each file applies to its directory and descendants
- Example locations: AGENTS.md at the repository root, backend/AGENTS.md, frontend/CLAUDE.md
##### Step 3: Read and Understand Existing Structure
- Read the entire target file before making changes
- Identify existing sections and their purposes
- Understand the organizational pattern used in the file
##### Step 4: Review and Filter Learnings

Review ALL entries in memory.md and determine which belong in agent instruction files:
- `Scope` is a hint, not a definitive filter; the agent that wrote it may have misclassified entries
- `codebase` entries are strong candidates, but still verify they are truly universal
- `task-specific` entries should also be reviewed; some may contain patterns, conventions, or gotchas that apply beyond the current task
- Apply the include/exclude criteria in Step 6 as the final decision basis
##### Step 5: Match Learnings to Appropriate File
- Review each selected learning
- Determine which directory scope the learning applies to
- Update the target file closest to the relevant code
- Example: Backend database learnings → backend/AGENTS.md (if it exists) or the root AGENTS.md
##### Step 6: Integrate into Existing Structure
DO NOT create a "Learnings" section:
- Find the most appropriate existing section for each learning
- If a section for that topic exists, add to it or update existing content
- If no suitable section exists, create a descriptive section name that matches the topic (e.g., "Database Patterns", "API Conventions", "Testing Guidelines")
- Merge related information rather than duplicating
- Keep entries concise and actionable
- Focus on "what every developer should know"
Include:
- Codebase-specific conventions discovered
- Non-obvious configuration requirements
- Integration patterns with external systems
- Common pitfalls and how to avoid them
- Testing patterns specific to this codebase
Do NOT include:
- Implementation details that only matter for this specific task and have no broader applicability
- Temporary workarounds
- Information already documented in README or other docs
- Generic best practices (not codebase-specific)
- A generic "Learnings" or "Learning" section (integrate into topic-specific sections instead)
#### External Review of Agent Instruction Updates
After updating agent instruction files, request external review using the same review subagent:
- Resume or re-launch the review subagent — if the stored agent ID is still valid, resume it; otherwise launch a new one and store the new agent ID
- Provide the updated files for review — include the diff or full content of each updated agent instruction file
- Review criteria for agent instruction files:
- Are the learnings correctly scoped (codebase-wide, not task-specific)?
- Are entries placed in appropriate sections?
- Are entries concise, actionable, and useful for other developers?
- Is there any duplication with existing content?
- Does the content read naturally within the existing document structure?
- If issues exist: Fix all identified issues, then resume the subagent for re-review
- If no issues: Proceed to git commit
## Important Rules
- NEVER stop mid-workflow - Complete ALL tasks from start to finish without interruption
- Execute tasks by dependency - Pick any task where all dependencies are complete; no strict execution order
- Complete each task fully before moving to next - Implementation → Verification → Self-review → External review → Memory → Commit → Mark complete
- Run verification after implementation - Execute lint, tests, and build checks; fix all issues before self-review
- Resolve ALL self-review issues before external review - External review is high-cost; do not waste it on issues you can find yourself
- Use single subagent for external review - Launch at first external review, reuse across ALL tasks in Phase 2 and for agent instruction file review in Phase 3
- Re-launch subagent if terminated - If the subagent has been terminated unexpectedly, launch a new one, store the new agent ID, and have it read plan.md and relevant codebase before proceeding
- Keep subagent ID in session memory - Store the agent ID to resume the same subagent for all external reviews throughout the workflow
- Terminate subagent only after Phase 3 review - Do NOT terminate the review subagent until agent instruction file review is complete
- Record learnings in memory.md - After each task, document discoveries, gotchas, and patterns in memory.md
- Update appropriate agent instruction files - AGENTS.md / CLAUDE.md can exist in root and subdirectories; determine update targets per Step 1 rules and match learnings to the closest relevant file
- Create AGENTS.md if neither exists - If no AGENTS.md or CLAUDE.md exists in the repository, create AGENTS.md at root with universal learnings
- Fix review findings autonomously based on fix complexity - do NOT ask user permission for simple/moderate fixes
- Only consult user when fixes require significant architectural changes or widespread modifications