# journey-loop
Orchestrates a continuous journey-builder → refine → restart loop. Runs journey-builder and refine-journey sequentially, improving the skill each iteration. Loops until all spec requirements are covered by journeys and the score reaches 95%.
Source: `sunfmin/autocraft` · Install: `npx skill4agent add sunfmin/autocraft journey-loop`
You are the curator of a growing test suite. Each journey in your collection should be something you're proud to show. When the loop ends, someone will look at the journeys you produced — the screenshots, the tests, the review files — and judge whether real features were built or whether an agent just went through the motions.
Your job is not to run a process. Your job is to ensure every journey in the collection is genuine.
You manage three phases per iteration:
- Builder — runs the journey-builder skill to build and test the next user journey (runs in background)
- Timing Watcher — monitors `screenshot-timing.jsonl` in real time while the builder runs; kills the test and reports a violation when a gap > 5s is detected
- Refiner — runs the refine-journey skill to evaluate the output and improve the skill
## Inputs

Spec file: $ARGUMENTS

If no argument given, use `spec.md` in the current directory.

## Shared State Files
| File | Written by | Read by |
|---|---|---|
| `journey-state.md` | Builder | Refiner, Orchestrator, Watcher |
| `screenshot-timing.jsonl` | Builder (snap helper) | Watcher (real-time), Orchestrator |
| `journey-refinement-log.md` | Refiner | Orchestrator |
| `AGENTS.md` | Refiner | Builder (each restart) |
| `journey-loop-state.md` | Orchestrator | Orchestrator (resume) |
| `journeys/{NNN}-{name}/` artifacts | Builder | Builder, Orchestrator |
## Orchestrator State File

Create or resume `journey-loop-state.md`:

```markdown
# Journey Loop State
**Spec:** <path>
**Started:** <timestamp>
**Current Iteration:** 1
**Status:** running
## Iteration History
| # | Journey Built | Duration | Score | AGENTS.md Changes | Decision |
|---|--------------|----------|-------|-----------------|----------|
```

If this file already exists, read it and resume from the correct iteration.
## Loop Protocol

### Step 0: Load Pitfalls (MANDATORY — every iteration)

Before ANYTHING else, fetch and read ALL pitfall files from the shared gist:
```bash
gh gist view 84a5c108d5742c850704a5088a3f4cbf --files
```

Then read each file:

```bash
gh gist view 84a5c108d5742c850704a5088a3f4cbf -f <filename>
```

Include the full pitfalls content in the builder agent's prompt so it has them available.
### Step 1: Read Current AGENTS.md + Journey State

Before each iteration, read the root `AGENTS.md` fresh (create if missing). The refiner may have changed it.

Also read `journey-state.md` to determine what to work on.

1a. Build the Acceptance-Criteria Master List (MANDATORY — every iteration).

Read `spec.md` in full. For every requirement, extract EVERY acceptance criterion. Write the complete list into `journey-loop-state.md` under a `## Acceptance Criteria Master List` section using this format exactly:

```markdown
## Acceptance Criteria Master List
Total requirements: N
Total acceptance criteria: M
| ID | Requirement | Criterion # | Criterion Text |
|----|-------------|-------------|----------------|
| P0-0 | First Launch Setup | 1 | User sees consent dialog on first launch |
| P0-0 | First Launch Setup | 2 | User can accept consent |
...
```

This table is the ground truth for coverage. Every row MUST be accounted for before the loop stops. Do NOT omit any criterion from any requirement.
Priority order for picking the next journey:
- Any journey with status `in-progress` or `needs-extension` → work on that one first
- If no in-progress journeys, pick the next uncovered spec requirement and create a new journey
- Journeys with status `polished` but unmeasured/estimated (`~`) durations need a measurement run, but do NOT block progress on new journeys. The orchestrator can batch-measure these separately.

A journey is "truly unfinished" only if its status is `in-progress` or `needs-extension`. Polished journeys with unmeasured durations are low-priority — measure them when no in-progress work remains.
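The priority order can be sketched as a selection over `journey-state.md`. The table layout in this fixture is an assumption for illustration; this skill never fixes the exact format of `journey-state.md`:

```shell
# Hypothetical journey-state.md layout (assumed, not specified by this skill).
cat > journey-state.md <<'EOF'
| Journey | Status |
|---|---|
| 001-onboarding | polished |
| 002-search | needs-extension |
| 003-export | in-progress |
EOF

# First journey whose Status column is in-progress or needs-extension wins;
# polished journeys are skipped.
NEXT=$(awk -F'|' '$3 ~ /in-progress|needs-extension/ {gsub(/ /,"",$2); print $2; exit}' journey-state.md)
echo "$NEXT"
```

Matching on the status column (field 3) rather than the whole line avoids false hits on journey names that happen to contain a status word.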
### Step 2: Launch Builder + Timing Watcher
2a. Determine the journey being worked on. From Step 1, you know which journey the builder will work on. Identify its folder path: `journeys/{NNN}-{name}/`.

2b. Clear the timing file before launching the builder:

```bash
rm -f journeys/{NNN}-{name}/screenshot-timing.jsonl
```

2c. Launch the Builder Agent in background. Spawn a new Agent (run_in_background=true) with:
- The full content of `AGENTS.md` as instructions (if it exists)
- The full content of all pitfall files from the gist
- The current `journey-state.md` content
- Clear directive: work on the first in-progress/needs-extension journey, or create the next new journey for uncovered spec requirements
2d. Launch the Timing Watcher immediately after the builder starts. The watcher is a polling loop that YOU (the orchestrator) run directly — not a separate agent. Use Bash to poll:

```bash
# Poll screenshot-timing.jsonl every 5 seconds
TIMING_FILE="journeys/{NNN}-{name}/screenshot-timing.jsonl"
SEEN=0
while true; do
  if [ -f "$TIMING_FILE" ]; then
    TOTAL=$(wc -l < "$TIMING_FILE" | tr -d ' ')
    if [ "$TOTAL" -gt "$SEEN" ]; then
      # Show new entries
      tail -n +"$((SEEN + 1))" "$TIMING_FILE"
      # Check for unexcused SLOW entries (skip SLOW-OK which are documented)
      SLOW_COUNT=$(tail -n +"$((SEEN + 1))" "$TIMING_FILE" | grep '"SLOW"' | grep -cv 'SLOW-OK' || true)
      SEEN=$TOTAL
      if [ "$SLOW_COUNT" -gt 0 ]; then
        echo "VIOLATION: $SLOW_COUNT new SLOW entries detected (not SLOW-OK)"
        grep '"SLOW"' "$TIMING_FILE" | grep -v 'SLOW-OK'
        echo "STOPPING_BUILDER"
        # Kill the running xcodebuild test
        pkill -f "xcodebuild.*test.*-only-testing" 2>/dev/null || true
        exit 1
      fi
    fi
  fi
  sleep 5
done
```

Run this Bash command in the background. When it exits with code 1, a timing violation was caught. `SLOW-OK` entries (documented unavoidable gaps) are ignored.

2e. Wait for the builder to complete. Three possible outcomes:
Outcome A — Builder completes normally (no violations):
The watcher found no SLOW entries. Proceed to Step 3 (Refiner).
Outcome B — Builder completes but evidence review finds gaps:
The orchestrator reads the screenshots and timing log. If the snap index sequence has large gaps (e.g., snap names jump from "090-..." to "103-..." skipping the entire recording phase), the journey has silently skipped phases. Re-launch the builder with a directive to investigate and fix the gaps. Include the specific missing phases in the prompt.
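The snap-index gap check from Outcome B can be mechanized. A rough sketch; the `NNN-name.png` filename convention is inferred from the example in the text, and the folder name here is invented:

```shell
# Simulate a screenshots folder where a phase was silently skipped:
# indices jump from 089 to 103.
mkdir -p shots_demo
touch shots_demo/088-settings.png shots_demo/089-toggle.png shots_demo/103-summary.png

# Flag any jump of more than 1 in the snap index sequence.
GAPS=$(ls shots_demo | cut -d- -f1 | awk 'NR>1 && $1-prev>1 {print "gap: " prev " -> " $1} {prev=$1}')
echo "$GAPS"
```

Any reported gap means the builder should be re-launched with a directive naming the missing index range.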
Outcome C — Watcher killed the test (violation detected):
- Read `screenshot-timing.jsonl` to find all SLOW entries
- For each SLOW entry, read the test code to find what happens between the previous screenshot and the slow one
- Research: Is it possible to make this step <= 5 seconds?
  - Read the app code that the test is exercising
  - Check if a `waitForExistence(timeout:)` is set too high
  - Check if the app itself is doing unnecessary work
  - Check if intermediate screenshots could break a long operation into visible chunks
- If fixable (can be <= 5s): Fix the test code or app code. Go back to 2b (clear timing, re-launch builder).
- If NOT fixable (genuine async like a real download): Add a comment in the test code on the line BEFORE the slow snap explaining exactly why, e.g. `// SLOW-OK: 8s gap — simulated model download requires async completion, cannot be reduced`. Then go back to 2b.
- The watcher will then skip entries with matching names that have `SLOW-OK` comments in the test code.
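How matching `SLOW-OK` comments to timing entries could work, as a sketch only: the JSONL field names (`name`, `status`) and the `snap("...")` call shape are assumptions, not defined by this skill:

```shell
# Fixture timing log: two SLOW entries, one excused in the test code.
cat > screenshot-timing.jsonl <<'EOF'
{"name":"090-model-download","status":"SLOW","gap_s":8}
{"name":"091-results-list","status":"SLOW","gap_s":6}
{"name":"092-detail","status":"OK","gap_s":1}
EOF
cat > JourneyTests.swift <<'EOF'
// SLOW-OK: 8s gap — simulated model download requires async completion
snap("090-model-download")
snap("091-results-list")
EOF

# A SLOW entry is a violation unless a SLOW-OK comment sits on the line
# before its snap call.
VIOLATIONS=$(grep '"SLOW"' screenshot-timing.jsonl | while read -r line; do
  name=$(printf '%s' "$line" | sed 's/.*"name":"\([^"]*\)".*/\1/')
  grep -B1 "snap(\"$name\")" JourneyTests.swift | grep -q 'SLOW-OK' || echo "$name"
done)
echo "$VIOLATIONS"
```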
Important: When investigating a SLOW entry, think carefully. Common fixable causes:
- `waitForExistence(timeout: 10)` where the element appears in <1s — lower the timeout
- Missing accessibility identifier causing XCUITest to do a slow tree search — add the identifier
- App performing synchronous work on main thread — move to background
- Test waiting for an element that doesn't exist yet because app code hasn't been written — write the app code
Common unfixable causes (document these):
- Real network/download simulation that must complete asynchronously
- App launch time (first screenshot always has overhead)
- System permission dialogs that appear unpredictably
### Step 3: Launch Refiner Agent
After the builder completes, invoke the `autocraft:refine-journey` skill via the Skill tool, passing the spec path as an argument.

Wait for the refiner to complete. It will:
- Evaluate the builder's output
- Write a score to `journey-refinement-log.md`
- Edit `AGENTS.md` with project-specific improvements, or add platform-specific pitfalls to the gist
### Step 4: See What the Builder Produced

Don't just read journey-state.md. Look at the actual work:

- Read 3-5 screenshots from the journey the builder just worked on. Do they show real features working? Or empty states, error messages, "No Results"? If the screenshots show a feature that's supposed to work but the screenshot shows it empty or broken — the journey is not done, regardless of what journey-state.md claims.
- Check the journey folder for review files. Are there actual review notes? Or did the builder skip them?

  ```bash
  ls -la journeys/{NNN}-{name}/*.md
  ```

  If the builder claims "polished" but there are fewer than 3 review files — it's not polished.
- Read the refinement log for the score and findings. Extract:
  - `Score:` — the percentage
  - `Failures Found:` — list of failures
  - `Changes Made to AGENTS.md:` — what was changed
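Pulling those fields out can be sketched as follows; the exact line labels in `journey-refinement-log.md` are assumed to match the names above, and the fixture values are invented:

```shell
cat > journey-refinement-log.md <<'EOF'
Score: 87%
Failures Found: empty search-results screenshot; missing review file
Changes Made to AGENTS.md: added rule to seed fixture data before UI tests
EOF

# Percentage only, e.g. 87
SCORE=$(grep '^Score:' journey-refinement-log.md | grep -o '[0-9][0-9]*' | head -1)
FAILURES=$(grep '^Failures Found:' journey-refinement-log.md | cut -d: -f2-)
echo "score=$SCORE"
```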
- Read `journey-state.md` — but treat it as the builder's CLAIM, not the truth. If your own observations (screenshots, review files) contradict the claimed status, update the status yourself.
If the screenshots show empty states where features should be, or the review files don't exist — update the status to `needs-extension` and send the builder back.
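The review-file check in Step 4 can be made mechanical. A sketch: the 3-file threshold comes from the text above, while the journey folder and filenames here are invented:

```shell
# Fixture journey folder with only two review files.
mkdir -p journeys/002-search
touch journeys/002-search/review-pass-1.md journeys/002-search/review-pass-2.md

COUNT=$(ls journeys/002-search/*.md 2>/dev/null | wc -l | tr -d ' ')
STATUS=polished
if [ "$COUNT" -lt 3 ]; then
  STATUS=needs-extension   # claim rejected: too few review files
fi
echo "$STATUS"
```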
### Step 5: Decide Next Action
5a. Pre-stop audit (MANDATORY when score >= 90% or all journeys show `polished`).
- Read the Acceptance Criteria Master List from `journey-loop-state.md`. M total rows.
- For each criterion row: (a) confirm a journey maps it by number in its `## Spec Coverage`, (b) confirm the journey's test file contains a step exercising it (search for keywords from the criterion text), (c) confirm a screenshot file exists in `journeys/{NNN}-{name}/screenshots/` for that step.
- Build a final audit table under `## Pre-Stop Criterion Audit` with columns: `| Req ID | Crit # | Journey | Mapped? | Test Step? | Screenshot? | VERDICT |`
- Count uncovered = rows with any NO.
- If uncovered > 0: do NOT stop. For each uncovered criterion — if it belongs to a journey currently marked `polished`, update that journey to `needs-extension` in `journey-state.md`; if no journey owns it, create a new journey targeting those criteria. Continue the loop.
- Only proceed to stop if uncovered == 0 AND score >= 95%.
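Counting uncovered rows in the audit table can be sketched like this; the row shape follows the columns listed above, and the fixture rows are invented:

```shell
cat > audit.md <<'EOF'
| Req ID | Crit # | Journey | Mapped? | Test Step? | Screenshot? | VERDICT |
|--------|--------|---------|---------|------------|-------------|---------|
| P0-0 | 1 | 001-setup | YES | YES | YES | COVERED |
| P0-0 | 2 | 001-setup | YES | NO | NO | UNCOVERED |
EOF

# "Rows with any NO": one count per row, not per NO cell.
UNCOVERED=$(grep -c '| NO ' audit.md || true)
if [ "$UNCOVERED" -gt 0 ]; then
  echo "do NOT stop: $UNCOVERED uncovered criteria"
fi
```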
Stop if score >= 95% AND the pre-stop audit shows 0 uncovered criteria AND all journeys are `polished`.

If the current journey is not yet `polished`: continue working on the same journey next iteration.

If the current journey is `polished`: move to the next `needs-extension` journey, or the next uncovered criteria from the audit.

If the score did NOT improve for 2 consecutive iterations: log a warning. If the same failure pattern appears 3 times, escalate.
### Step 6: Update Loop State

Append to `journey-loop-state.md`:

`| <iteration> | <journey-name> | <duration> | <score>% | <N changes> | <continue/done> |`

Increment the iteration counter. Go to Step 0.
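Step 6 as a sketch: the state-file lines match the template shown earlier, the row values are invented, and `sed -i.bak` is used for BSD/GNU portability:

```shell
cat > journey-loop-state.md <<'EOF'
# Journey Loop State
**Current Iteration:** 1
## Iteration History
| # | Journey Built | Duration | Score | AGENTS.md Changes | Decision |
|---|--------------|----------|-------|-----------------|----------|
EOF

ITER=1
# Append this iteration's history row.
echo "| $ITER | 002-search | 4m12s | 87% | 2 changes | continue |" >> journey-loop-state.md

# Bump the counter in place.
NEXT=$((ITER + 1))
sed -i.bak "s/^\*\*Current Iteration:\*\* .*/**Current Iteration:** $NEXT/" journey-loop-state.md
grep '^\*\*Current Iteration' journey-loop-state.md
```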
## Stop Condition

Stop when all of:
- Overall score >= 95%
- Build passes
- All journey tests pass
- Every journey in `journey-state.md` has status `polished`
- Pre-stop criterion audit in Step 5a shows 0 uncovered criteria (every acceptance criterion in `spec.md` has: a journey mapping it by number, a test step exercising it, and a screenshot proving the outcome)
- Total criteria covered == M (from the Acceptance Criteria Master List)
When stopped, output:

```
Loop complete after <N> iterations.
Final score: XX%
Journeys built: <list with durations>
Spec coverage: X / N requirements fully covered (all criteria)
Criteria coverage: X / M acceptance criteria covered (impl + test + screenshot)
Uncovered criteria: (should be 0)
Total test suite duration: Xm
Run all tests with: <exact test command>
```

## Safety Limits
- No iteration limit. The loop runs indefinitely until the user stops it or the stop condition is met.
- Stall detection: If the builder produces no changes for 2 consecutive iterations, log the stall and proceed to the refiner — it can diagnose why the builder stalled.
- Never modify the spec — the spec is read-only. Only `AGENTS.md` and the pitfalls gist get improved.
- Pitfall gist is append-only — add new pitfalls, never delete existing ones.