# Start Session

Initialize your AI development session and begin working on tasks.
## Operation Types

| Marker | Meaning | Executor |
|---|---|---|
| | Bash scripts or `Task` calls executed by the AI | You (AI) |
| | Slash commands executed by the user | User |
## Initialization

### Step 1: Understand Development Workflow

First, read the workflow guide to understand the development process. Follow the instructions in workflow.md - it contains:

- Core principles (Read Before Write, Follow Standards, etc.)
- File system structure
- Development process
- Best practices
### Step 2: Get Current Context

```bash
python3 ./.trellis/scripts/get_context.py
```

This shows: developer identity, git status, current task (if any), active tasks.
### Step 3: Read Guidelines Index

```bash
cat .trellis/spec/frontend/index.md  # Frontend guidelines
cat .trellis/spec/backend/index.md   # Backend guidelines
cat .trellis/spec/guides/index.md    # Thinking guides
```
### Step 4: Report and Ask

Report what you learned and ask: "What would you like to work on?"
## Task Classification

When the user describes a task, classify it:

| Type | Criteria | Workflow |
|---|---|---|
| Question | User asks about code, architecture, or how something works | Answer directly |
| Trivial Fix | Typo fix, comment update, single-line change | Direct edit |
| Simple Task | Clear goal, 1-2 files, well-defined scope | Quick confirm → Implement |
| Complex Task | Vague goal, multiple files, architectural decisions | Brainstorm → Task Workflow |
### Classification Signals

Trivial/Simple indicators:

- User specifies the exact file and change
- "Fix the typo in X"
- "Add field Y to component Z"
- Clear acceptance criteria already stated

Complex indicators:

- "I want to add a feature for..."
- "Can you help me improve..."
- Mentions multiple areas or systems
- No clear implementation path
- User seems unsure about the approach
### Decision Rule

If in doubt, use Brainstorm + Task Workflow. The Task Workflow ensures specs are injected into the agents' context, resulting in higher-quality code. The overhead is minimal, but the benefit is significant.
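The classification signals above can be sketched as a simple heuristic. This is illustrative only - the function name, parameters, and keyword lists are assumptions, and real classification remains a judgment call by the AI:

```python
def classify_task(description: str, files_mentioned: int, has_clear_goal: bool) -> str:
    """Illustrative heuristic for the classification table; not part of the workflow scripts."""
    text = description.lower().strip()
    # Questions: asks how/what/why, or ends with a question mark
    if text.endswith("?") or text.startswith(("how", "what", "why", "where")):
        return "question"
    # Trivial: single-file typo/comment-style change
    if any(word in text for word in ("typo", "comment", "rename")) and files_mentioned <= 1:
        return "trivial"
    # Simple: clear goal, small scope (1-2 files)
    if has_clear_goal and files_mentioned <= 2:
        return "simple"
    # Decision rule: if in doubt, treat as complex and brainstorm first
    return "complex"
```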
## Question / Trivial Fix

For questions or trivial fixes, work directly:

- Answer the question or make the fix
- If code was changed, remind the user to run
## Simple Task

For simple, well-defined tasks:

- Quick confirm: "I understand you want to [goal]. Ready to proceed?"
- If yes, skip to Task Workflow Step 2 (Research)
- If no, clarify and confirm again
## Complex Task - Brainstorm First

For complex or vague tasks, use the brainstorm process to clarify requirements. The full process is documented separately; in summary:

- Acknowledge and classify - state your understanding
- Create a task directory - track evolving requirements in `prd.md`
- Ask questions one at a time - update the PRD after each answer
- Propose approaches - for architectural decisions
- Confirm final requirements - get explicit approval
- Proceed to Task Workflow - with clear requirements in the PRD
### Key Brainstorm Principles

| Principle | Description |
|---|---|
| One question at a time | Never overwhelm with multiple questions |
| Update PRD immediately | After each answer, update the document |
| Prefer multiple choice | Easier for users to answer |
| YAGNI | Challenge unnecessary complexity |
## Task Workflow (Development Tasks)

Why this workflow?

- The Research Agent analyzes what specs are needed
- Specs are configured in jsonl files
- The Implement Agent receives specs via hook injection
- The Check Agent verifies against specs
- Result: code that follows project conventions automatically
### Step 1: Understand the Task

If coming from Brainstorm: skip this step - requirements are already in the PRD.

If Simple Task: quickly confirm understanding:

- What is the goal?
- What type of development? (frontend / backend / fullstack)
- Any specific requirements or constraints?
### Step 2: Research the Codebase

Call the Research Agent to analyze:

```
Task(
  subagent_type: "research",
  prompt: "Analyze the codebase for this task:

    Task: <user's task description>
    Type: <frontend/backend/fullstack>

    Please find:
    1. Relevant spec files in .trellis/spec/
    2. Existing code patterns to follow (find 2-3 examples)
    3. Files that will likely need modification

    Output:
    ## Relevant Specs
    - <path>: <why it's relevant>
    ## Code Patterns Found
    - <pattern>: <example file path>
    ## Files to Modify
    - <path>: <what change>
    ## Suggested Task Name
    - <short-slug-name>",
  model: "opus"
)
```
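If the Research Agent returns output in the format requested above, the spec list for Step 4 can be pulled out mechanically. A minimal sketch - the section headings follow the prompt above, but the parsing helper itself is hypothetical:

```python
def parse_research_specs(report: str) -> list[tuple[str, str]]:
    """Extract (path, reason) pairs from the '## Relevant Specs' section of a research report."""
    specs, in_section = [], False
    for line in report.splitlines():
        if line.startswith("## "):
            # Track whether we are inside the Relevant Specs section
            in_section = line.strip() == "## Relevant Specs"
        elif in_section and line.lstrip().startswith("- "):
            # Each bullet is "- <path>: <why it's relevant>"
            path, _, reason = line.lstrip()[2:].partition(": ")
            specs.append((path.strip(), reason.strip()))
    return specs
```

Each returned pair maps directly onto one `task.py add-context` invocation in Step 4.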
### Step 3: Create Task Directory

Based on the research results:

```bash
TASK_DIR=$(python3 ./.trellis/scripts/task.py create "<title from research>" --slug <suggested-slug>)
```
### Step 4: Configure Context

Initialize the default context:

```bash
python3 ./.trellis/scripts/task.py init-context "$TASK_DIR" <type>
# type: backend | frontend | fullstack
```

Add the specs found by the Research Agent:

```bash
# For each relevant spec and code pattern:
python3 ./.trellis/scripts/task.py add-context "$TASK_DIR" implement "<path>" "<reason>"
python3 ./.trellis/scripts/task.py add-context "$TASK_DIR" check "<path>" "<reason>"
```
### Step 5: Write Requirements

Create `prd.md` in the task directory with:

```markdown
# <Task Title>

## Goal
<What we're trying to achieve>

## Requirements
- <Requirement 1>
- <Requirement 2>

## Acceptance Criteria
- [ ] <Criterion 1>
- [ ] <Criterion 2>

## Technical Notes
<Any technical decisions or constraints>
```
### Step 6: Activate Task

```bash
python3 ./.trellis/scripts/task.py start "$TASK_DIR"
```

This sets the current task so hooks can inject context.
### Step 7: Implement

Call the Implement Agent (specs are auto-injected by a hook):

```
Task(
  subagent_type: "implement",
  prompt: "Implement the task described in prd.md.
    Follow all specs that have been injected into your context.
    Run lint and typecheck before finishing.",
  model: "opus"
)
```
### Step 8: Check Quality

Call the Check Agent (specs are auto-injected by a hook):

```
Task(
  subagent_type: "check",
  prompt: "Review all code changes against the specs.
    Fix any issues you find directly.
    Ensure lint and typecheck pass.",
  model: "opus"
)
```
### Step 9: Complete

- Verify lint and typecheck pass
- Report what was implemented
- Remind the user to:
  - Test the changes
  - Commit when ready
  - Run to record this session
## Continuing an Existing Task

- Read the task's `prd.md` to understand the goal
- Check its current status and phase
- Ask the user: "Continue working on <task-name>?"

If yes, resume from the appropriate step (usually Step 7 or 8).
## Commands Reference

### User Commands

| Command | When to Use |
|---|---|
| | Begin a session (this command) |
| | Clarify vague requirements (called from start) |
| | Complex tasks needing an isolated worktree |
| | Before committing changes |
| | After completing a task |
### AI Scripts

| Script | Purpose |
|---|---|
| `python3 ./.trellis/scripts/get_context.py` | Get session context |
| `python3 ./.trellis/scripts/task.py create` | Create task directory |
| `python3 ./.trellis/scripts/task.py init-context` | Initialize jsonl files |
| `python3 ./.trellis/scripts/task.py add-context` | Add spec to jsonl |
| `python3 ./.trellis/scripts/task.py start` | Set current task |
| `python3 ./.trellis/scripts/task.py finish` | Clear current task |
| `python3 ./.trellis/scripts/task.py archive` | Archive completed task |
### Sub-Agents

| Agent | Purpose | Hook Injection |
|---|---|---|
| research | Analyze codebase | No (reads directly) |
| implement | Write code | Yes (implement.jsonl) |
| check | Review & fix | Yes (check.jsonl) |
| debug | Fix specific issues | Yes (debug.jsonl) |
## Key Principle

Specs are injected, not remembered. The Task Workflow ensures agents receive relevant specs automatically, which is more reliable than hoping the AI "remembers" conventions.