Executes browser-based user workflows from /workflows/browser-workflows.md using Claude-in-Chrome MCP. Use this when the user says "run browser workflows", "execute browser workflows", or "test browser workflows". Tests each workflow step by step, captures before/after screenshots, documents issues, and generates HTML reports with visual evidence of fixes.
Install:

```bash
npx skill4agent add neonwatty/claude-skills browser-workflow-executor
```

Example task hierarchy:

```
[Workflow Task] "Execute: User Login Flow"
├── [Issue Task] "Issue: Missing hover states on submit button"
└── [Issue Task] "Issue: Keyboard navigation broken in form"
[Workflow Task] "Execute: Checkout Process"
└── [Issue Task] "Issue: Back button doesn't preserve cart"
[Fix Task] "Fix: Missing hover states" (created in fix mode)
[Verification Task] "Verify: Run test suite"
[Report Task] "Generate: HTML report"
```

Overall flow:

```
Audit Mode → Find Issues → Capture BEFORE → Present to User
    ↓
User: "Fix this issue"
    ↓
Fix Mode → Spawn Fix Agents → Capture AFTER → Verify Locally
    ↓
Run Tests → Fix Failing Tests → Run E2E
    ↓
All Pass → Generate Reports → Create PR
```

On startup, call TaskList and check for in_progress or pending tasks, then read the workflow definitions from /workflows/browser-workflows.md.

## Workflow

For each workflow to execute, call TaskCreate:
- subject: "Execute: [Workflow Name]"
- description: |
Execute browser workflow: [Workflow Name]
Steps: [count] steps
File: /workflows/browser-workflows.md
Steps summary:
1. [Step 1 brief]
2. [Step 2 brief]
...
- activeForm: "Executing [Workflow Name]"

Open a browser tab via tabs_context_mcp (with createIfEmpty: true) and keep the returned tabId for all subsequent tool calls. Then mark the task started:

TaskUpdate:
- taskId: [workflow task ID]
- status: "in_progress"

Execute each step with the Claude-in-Chrome tools: navigate, find, and computer actions (left_click, type, left_click_drag, scroll, wait), using read_page and get_page_text to verify results. After each step, capture a screenshot with computer (action: screenshot); use gif_creator for animated captures. Save to: workflows/screenshots/browser-audit/wfNN-stepNN.png, following the pattern wf{workflow_number:02d}-step{step_number:02d}.png.

For each issue discovered, call TaskCreate:
- subject: "Issue: [Brief issue description]"
- description: |
**Workflow:** [Workflow name]
**Step:** [Step number and description]
**Issue:** [Detailed description]
**Severity:** [High/Med/Low]
**Current behavior:** [What's wrong]
**Expected behavior:** [What it should do]
**Screenshot:** [Path to before screenshot]
- activeForm: "Documenting issue"
Then link it to the workflow task:
TaskUpdate:
- taskId: [issue task ID]
- addBlockedBy: [workflow task ID]

When all steps of the workflow have run, mark it complete:

TaskUpdate:
- taskId: [workflow task ID]
- status: "completed"
- metadata: {"issuesFound": [count], "stepsPassed": [count], "stepsFailed": [count]}

To evaluate platform appropriateness, spawn a subagent with these Task tool parameters:
- subagent_type: "general-purpose"
- model: "opus" (thorough research and evaluation)
- prompt: |
You are evaluating a web app for web platform UX compliance.
## Page Being Evaluated
[Include current page URL and brief description]
## Quick Checklist - Evaluate Each Item
**Navigation:**
- Browser back button works correctly
- URLs reflect current state (deep-linkable)
- No mobile-style bottom tab bar
- Navigation works without gestures (click-based)
**Interactions:**
- All interactive elements have hover states
- Keyboard navigation works (Tab, Enter, Escape)
- Focus indicators are visible
- No gesture-only interactions for critical features
**Components:**
- Uses web-appropriate form components
- No iOS-style picker wheels
- No Android-style floating action buttons
- Modals don't unnecessarily go full-screen
**Responsive/Visual:**
- Layout works at different viewport widths
- No mobile-only viewport restrictions
- Text is readable without zooming
**Accessibility:**
- Color is not the only indicator of state
- Form fields have labels
## Reference Comparison
Search for reference examples using WebSearch:
- "web app [page type] design Dribbble"
- "[well-known web app like Linear/Notion/Figma] [page type] screenshot"
Visit 2-3 reference examples and compare:
- Navigation placement and behavior
- Component types and interaction patterns
- Hover/focus states
## Return Format
Return a structured report:
```
## UX Platform Evaluation: [Page Name]
### Checklist Results
| Check | Pass/Fail | Notes |
|-------|-----------|-------|
### Reference Comparison
- Reference apps compared: [list]
- Key differences found: [list]
### Issues Found
- [Issue 1]: [Description] (Severity: High/Med/Low)
### Recommendations
- [Recommendation 1]
```

After each workflow, append its results to .claude/plans/browser-workflow-findings.md using this template:

---
### Workflow [N]: [Name]
**Timestamp:** [ISO datetime]
**Status:** Passed/Failed/Partial
**Steps Summary:**
- Step 1: [Pass/Fail] - [brief note]
- Step 2: [Pass/Fail] - [brief note]
...
**Issues Found:**
- [Issue description] (Severity: High/Med/Low)
**Platform Appropriateness:**
- Web conventions followed: [Yes/Partially/No]
- Issues: [List any platform anti-patterns found]
- Reference comparisons: [Apps/pages compared, if any]
**UX/Design Notes:**
- [Observation]
**Technical Problems:**
- [Problem] (include console errors if any)
**Feature Ideas:**
- [Idea]
**Screenshots:** [list of screenshot IDs captured]

Once all workflows have run, create the audit report task. TaskCreate:
- subject: "Generate: HTML Audit Report"
- description: "Generate HTML report with screenshots for all workflow results"
- activeForm: "Generating HTML audit report"
TaskUpdate:
- taskId: [report task ID]
- status: "in_progress"

Gather results from TaskList and .claude/plans/browser-workflow-findings.md, then write the report to workflows/browser-audit-report.html.

<!-- Required sections: -->
<h1>Browser Workflow Audit Report</h1>
<p>Date: [timestamp] | Environment: [URL]</p>
<!-- Summary table -->
<table>
<tr><th>#</th><th>Workflow</th><th>Status</th><th>Steps</th><th>Notes</th></tr>
<!-- One row per workflow with PASS/FAIL/SKIP badge -->
</table>
<!-- Per-workflow detail sections -->
<h2>Workflow N: [Name]</h2>
<p>Status: PASS/FAIL/SKIP</p>
<h3>Steps</h3>
<ol>
<li>Step description — PASS/FAIL
<br><img src="screenshots/browser-audit/wfNN-stepNN.png" style="max-width:800px; border:1px solid #ddd; border-radius:8px; margin:8px 0;">
</li>
</ol>

Each <img> src points at screenshots/browser-audit/wfNN-stepNN.png, relative to the report file.

Then present the summary to the user:

## Audit Complete
**Workflows Executed:** [completed count]/[total count]
**Issues Found:** [issue task count]
- High severity: [count]
- Medium severity: [count]
- Low severity: [count]
**Report:** workflows/browser-audit-report.html
What would you like to do?
- "fix all" - Fix all issues
- "fix 1,3,5" - Fix specific issues by number
- "done" - End session

Mark the report task complete:

TaskUpdate:
- taskId: [report task ID]
- status: "completed"

Fix mode uses this directory layout:

```
workflows/
├── screenshots/
│   ├── {workflow-name}/
│   │   ├── before/
│   │   │   ├── 01-hover-states-missing.png
│   │   │   ├── 02-keyboard-nav-broken.png
│   │   │   └── ...
│   │   └── after/
│   │       ├── 01-hover-states-added.png
│   │       ├── 02-keyboard-nav-fixed.png
│   │       └── ...
│   └── {another-workflow}/
│       ├── before/
│       └── after/
├── browser-workflows.md
└── browser-changes-report.html
```

Screenshots are named {NN}-{descriptive-name}.png (e.g. 01-hover-states-missing.png before the fix, 01-hover-states-added.png after), saved under workflows/screenshots/{workflow-name}/before/ and workflows/screenshots/{workflow-name}/after/.

Call TaskList to get all issue tasks (subject starts with "Issue:")
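For illustration, the two screenshot naming conventions (audit mode's wfNN-stepNN.png and fix mode's {NN}-{descriptive-name}.png) can be expressed as small Python helpers. These are sketches only; the function names are not part of the skill:

```python
def audit_screenshot_path(workflow_number: int, step_number: int) -> str:
    """Audit mode: wfNN-stepNN.png under the shared browser-audit folder."""
    name = f"wf{workflow_number:02d}-step{step_number:02d}.png"
    return f"workflows/screenshots/browser-audit/{name}"


def fix_screenshot_path(workflow: str, index: int, slug: str, phase: str) -> str:
    """Fix mode: {NN}-{descriptive-name}.png under the workflow's before/ or after/ folder."""
    if phase not in ("before", "after"):
        raise ValueError(f"unknown phase: {phase}")
    return f"workflows/screenshots/{workflow}/{phase}/{index:02d}-{slug}.png"
```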
Display to user:
Issues found:
1. [Task ID: X] Missing hover states on buttons - BEFORE: 01-hover-states-missing.png
2. [Task ID: Y] Keyboard navigation broken - BEFORE: 02-keyboard-nav-broken.png
3. [Task ID: Z] Back button doesn't work - BEFORE: 03-back-button-broken.png
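The user's reply ("all", "fix 1,3,5", and so on) can be parsed with a helper along these lines — a sketch, with an illustrative function name, that returns 1-based issue numbers and drops out-of-range entries:

```python
def parse_selection(reply: str, issue_count: int) -> list[int]:
    """Parse replies like 'all' or 'fix 1,3,5' into 1-based issue numbers."""
    text = reply.lower().replace("fix", "").strip()
    if text == "all":
        return list(range(1, issue_count + 1))
    picked: list[int] = []
    for part in text.split(","):
        part = part.strip()
        # Keep only valid 1..issue_count numbers; ignore everything else.
        if part.isdigit() and 1 <= int(part) <= issue_count:
            picked.append(int(part))
    return picked
```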
Fix all issues? Or specify which to fix: [1,2,3 / all / specific numbers]

For each issue the user wants fixed:
TaskCreate:
- subject: "Fix: [Issue brief description]"
- description: |
Fixing issue from task [issue task ID]
**Issue:** [Issue name and description]
**Severity:** [High/Med/Low]
**Current behavior:** [What's wrong]
**Expected behavior:** [What it should do]
**Screenshot reference:** [Path to before screenshot]
- activeForm: "Fixing [issue brief]"
TaskUpdate:
- taskId: [fix task ID]
- addBlockedBy: [issue task ID] # Links fix to its issue
- status: "in_progress"

Spawn a fix subagent. Task tool parameters (for each issue):
- subagent_type: "general-purpose"
- model: "opus" (thorough code analysis and modification)
- prompt: |
You are fixing a specific UX issue in a web application.
## Issue to Fix
**Issue:** [Issue name and description]
**Severity:** [High/Med/Low]
**Current behavior:** [What's wrong]
**Expected behavior:** [What it should do]
**Screenshot reference:** [Path to before screenshot]
## Your Task
1. **Explore the codebase** to understand the implementation
- Use Glob to find relevant files
- Use Grep to search for related code
- Use Read to examine files
2. **Plan the fix**
- Identify which files need changes
- Consider side effects
3. **Implement the fix**
- Make minimal, focused changes
- Follow existing code patterns
- Do not refactor unrelated code
4. **Return a summary:**
```
## Fix Complete: [Issue Name]
### Changes Made
- [File 1]: [What changed]
- [File 2]: [What changed]
### Files Modified
- src/components/Button.css (MODIFIED)
- src/styles/global.css (MODIFIED)
### Testing Notes
- [How to verify the fix works]
```
Do NOT run tests - the main workflow will handle that.

For each completed fix:
TaskUpdate:
- taskId: [fix task ID]
- status: "completed"
- metadata: {
"filesModified": ["src/components/Button.css", "src/styles/global.css"],
"afterScreenshot": "workflows/screenshots/{workflow}/after/{file}.png"
}

Then close out the corresponding issue task:

TaskUpdate:
- taskId: [issue task ID]
- status: "completed"
- metadata: {"fixedBy": [fix task ID], "fixedAt": "[ISO timestamp]"}

Once all selected fixes are applied, create the verification task. TaskCreate:
- subject: "Verify: Run test suite"
- description: |
Run all tests to verify fixes don't break existing functionality.
Fixes applied: [list of fix task IDs]
Files modified: [aggregated list from fix task metadata]
- activeForm: "Running verification tests"
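The "aggregated list from fix task metadata" can be collected along these lines — a sketch that assumes fix tasks are plain dicts with the `status` and `metadata.filesModified` fields shown earlier:

```python
def aggregate_modified_files(fix_tasks: list[dict]) -> list[str]:
    """Collect unique filesModified paths from completed fix tasks, preserving order."""
    seen: list[str] = []
    for task in fix_tasks:
        if task.get("status") != "completed":
            continue
        for path in task.get("metadata", {}).get("filesModified", []):
            if path not in seen:
                seen.append(path)
    return seen
```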
TaskUpdate:
- taskId: [verification task ID]
- status: "in_progress"

Spawn a verification subagent. Task tool parameters:
- subagent_type: "general-purpose"
- model: "opus" (thorough test analysis and fixing)
- prompt: |
You are verifying that code changes pass all tests.
## Context
Recent changes were made to fix UX issues. You need to verify the codebase is healthy.
## Your Task
1. **Run the test suite:**
```bash
# Detect and run appropriate test command
npm test # or yarn test, pnpm test
```
2. **If tests fail:**
- Analyze the failing tests
- Determine if failures are related to recent changes
- Fix the broken tests or update them to reflect new behavior
- Re-run tests until all pass
- Document what tests were updated and why
3. **Run linting and type checking:**
```bash
npm run lint # or eslint, prettier
npm run typecheck # or tsc --noEmit
```
4. **Run end-to-end tests locally:**
```bash
npm run test:e2e # common convention
npx playwright test # Playwright
npx cypress run # Cypress
```
5. **If E2E tests fail:**
- Analyze the failures (may be related to UI changes)
- Update E2E tests to reflect new UI behavior
- Re-run until all pass
- Document what E2E tests were updated
6. **Return verification results:**
```
## Local Verification Results
### Test Results
- Unit tests: ✓/✗ [count] passed, [count] failed
- Lint: ✓/✗ [errors if any]
- Type check: ✓/✗ [errors if any]
- E2E tests: ✓/✗ [count] passed, [count] failed
### Tests Updated
- [test file 1]: [why updated]
- [test file 2]: [why updated]
### Status: PASS / FAIL
[If FAIL, explain what's still broken]
```

Record the verification results:

TaskUpdate:
- taskId: [verification task ID]
- status: "completed"
- metadata: {
"result": "PASS" or "FAIL",
"unitTests": {"passed": N, "failed": N},
"e2eTests": {"passed": N, "failed": N},
"lint": "pass" or "fail",
"typecheck": "pass" or "fail"
}

Next, create the HTML report task. TaskCreate:
- subject: "Generate: HTML Report"
- description: "Generate HTML report with before/after screenshot comparisons"
- activeForm: "Generating HTML report"
TaskUpdate:
- taskId: [html report task ID]
- status: "in_progress"

Spawn a report subagent. Task tool parameters:
- subagent_type: "general-purpose"
- model: "haiku" (simple generation task)
- prompt: |
Generate an HTML report for browser UX compliance fixes.
## Data to Include
**App Name:** [App name]
**Date:** [Current date]
**Issues Fixed:** [Count]
**Issues Remaining:** [Count]
**Fixes Made:**
[For each fix:]
- Issue: [Name]
- Before screenshot: workflows/screenshots/{workflow}/before/{file}.png
- After screenshot: workflows/screenshots/{workflow}/after/{file}.png
- Files changed: [List]
- Why it matters: [Explanation]
## Output
Write the HTML report to: workflows/browser-changes-report.html
Use this template structure:
- Executive summary with stats
- Before/after screenshot comparisons for each fix
- Files changed section
- "Why this matters" explanations
Style: Clean, professional, uses system fonts, responsive grid for screenshots.
Return confirmation when complete.

TaskUpdate:
- taskId: [html report task ID]
- status: "completed"
- metadata: {"outputPath": "workflows/browser-changes-report.html"}

TaskCreate:
- subject: "Generate: Markdown Report"
- description: "Generate Markdown documentation for fixes"
- activeForm: "Generating Markdown report"
TaskUpdate:
- taskId: [md report task ID]
- status: "in_progress"

Spawn a report subagent. Task tool parameters:
- subagent_type: "general-purpose"
- model: "haiku"
- prompt: |
Generate a Markdown report for browser UX compliance fixes.
## Data to Include
[Same data as HTML report]
## Output
Write the Markdown report to: workflows/browser-changes-documentation.md
Include:
- Executive summary
- Before/after comparison table
- Detailed changes for each fix
- Files changed
- Technical implementation notes
- Testing verification results
Return confirmation when complete.

TaskUpdate:
- taskId: [md report task ID]
- status: "completed"
- metadata: {"outputPath": "workflows/browser-changes-documentation.md"}

TaskCreate:
- subject: "Create: Pull Request"
- description: |
Create PR for browser UX compliance fixes.
Fixes included: [list from completed fix tasks]
Files modified: [aggregated from fix task metadata]
- activeForm: "Creating pull request"
TaskUpdate:
- taskId: [pr task ID]
- status: "in_progress"

Create a branch and commit the fixes:

```bash
git checkout -b fix/browser-ux-compliance
git add .
git commit -m "fix: browser UX compliance improvements

- [List key fixes made]
- Updated tests to reflect new behavior
- All local tests passing"
```

Push and open the PR:

```bash
git push -u origin fix/browser-ux-compliance
gh pr create --title "fix: Browser UX compliance improvements" --body "## Summary
[Brief description of fixes]
## Changes
- [List of changes]
## Testing
- [x] All unit tests pass locally
- [x] All E2E tests pass locally
- [x] Manual verification complete
## Screenshots
See workflows/browser-changes-report.html for before/after comparisons"
```

Record the PR details:

TaskUpdate:
- taskId: [pr task ID]
- metadata: {
"prUrl": "https://github.com/owner/repo/pull/123",
"ciStatus": "running" | "passed" | "failed"
}

When CI finishes, close out the task:

TaskUpdate:
- taskId: [pr task ID]
- status: "completed"
- metadata: {"prUrl": "...", "ciStatus": "passed", "merged": false}

Report to the user:

PR created: https://github.com/owner/repo/pull/123
CI status: Running... (or Passed/Failed)

Call TaskList to generate the final summary:
## Session Complete
**Workflows Executed:** [count completed workflow tasks]
**Issues Found:** [count issue tasks]
**Issues Fixed:** [count completed fix tasks]
**Tests:** [from verification task metadata]
**PR:** [from pr task metadata]
All tasks completed successfully.

## Tool Reference

Claude-in-Chrome MCP calls used by this skill:

- navigate({ url, tabId })
- find({ query, tabId })
- read_page({ tabId, filter: 'interactive' })
- computer({ action: 'left_click', coordinate: [x, y], tabId })
- computer({ action: 'left_click', ref: 'ref_1', tabId })
- computer({ action: 'type', text: '...', tabId })
- computer({ action: 'scroll', scroll_direction: 'down', coordinate: [x, y], tabId })
- computer({ action: 'left_click_drag', start_coordinate: [x1, y1], coordinate: [x2, y2], tabId })
- computer({ action: 'wait', duration: 2, tabId })
- computer({ action: 'screenshot', tabId })
- get_page_text({ tabId })
- read_console_messages({ tabId, pattern: 'error' })
- read_network_requests({ tabId })
- form_input({ ref, value, tabId })

Note: native dialogs (alert(), confirm(), prompt()) can block automation; check read_console_messages when a step stalls.

## Resume Behavior

On resume, call TaskList and inspect in_progress and pending tasks. TaskList shows:
├── All workflow tasks completed, no fix tasks
│ └── Ask user: "Audit complete. Want to fix issues?"
├── All workflow tasks completed, fix tasks in_progress
│ └── Resume fix mode, check agent status
├── Some workflow tasks pending
│ └── Resume from first pending workflow
├── Workflow task in_progress
│ └── Read findings file to see which steps completed
│ └── Resume from next step in that workflow
└── No tasks exist
└── Fresh start (Phase 1)

Read .claude/plans/browser-workflow-findings.md to see which steps completed, then report:

Resuming from interrupted session:
- Workflows completed: [list from completed tasks]
- Issues found: [count from issue tasks]
- Current state: [in_progress task description]
- Resuming: [next action]
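The resume decision tree above can be sketched as follows. This is illustrative only; it assumes tasks arrive as plain dicts with `subject` and `status` fields, and the returned strings are labels, not commands:

```python
def resume_action(tasks: list[dict]) -> str:
    """Map TaskList state to the matching branch of the resume decision tree."""
    if not tasks:
        return "fresh start"
    workflows = [t for t in tasks if t["subject"].startswith("Execute:")]
    fixes = [t for t in tasks if t["subject"].startswith("Fix:")]
    if any(t["status"] == "in_progress" for t in workflows):
        # Findings file records which steps already ran in this workflow.
        return "resume current workflow from findings file"
    if any(t["status"] == "pending" for t in workflows):
        return "resume from first pending workflow"
    if any(t["status"] == "in_progress" for t in fixes):
        return "resume fix mode"
    return "audit complete; ask user about fixes"
```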