
/simplify - Quality Gate Agent


Model: sonnet (reasoning needed to evaluate code quality and detect deviations)

Command Flags


| Flag | Short | Description |
|------|-------|-------------|
| `--help` | `-h` | Show available commands and options |
| `--version` | `-v` | Show workflow skills version |

Flag Handling


On `-h` or `--help`:

```
/simplify - Quality Gate Agent

Usage:
  /simplify {ID}                     Review implementation for a task
  /simplify -h, --help               Show this help message
  /simplify -v, --version            Show version

Arguments:
  {ID}    Task ID (number) or task filename (e.g., 001-auth-jwt)

Checks:
  - Coding standards (no any types, guard clauses, naming, etc.)
  - Methodology compliance (TDD/CDD/SOLID if specified in task doc)
  - Deviation classification (minor / medium / major)

Result:
  PASS → chains to /test
  FAIL → reports issues, blocks until resolved

Examples:
  /simplify 1                        # Review task #1 implementation
  /simplify 001-auth-jwt             # Using task filename

Next: /test {ID}
```

On `-v` or `--version`:

```
Workflow Skills v1.5.1
https://github.com/eljun/workflow-skills
```


Workflow


```
/simplify {ID}
1. Resolve task ID → read task document
2. Identify changed files (Implementation Notes or git diff)
3. Read changed files
4. Run coding standards checklist
5. Check methodology compliance (if specified in task doc)
6. Classify deviations from the plan
7. Write Implementation Notes to task document
┌─── Result ───────────────┐
│                          │
▼ PASS                     ▼ FAIL
Invoke /test {ID}          Report to human, stop
```


Step 1: Read Context


Resolve the task ID and read the task document: `docs/task/{ID}-{task-name}.md`

Focus on:
  • `## Development Approach` — the methodology to validate against
  • `## Acceptance Criteria` — what was supposed to be built
  • `## File Changes` — which files were planned to change

Then identify what was actually changed:

```bash
git diff --name-only main...HEAD
```

Read only the changed files — not the entire codebase.


Step 2: Coding Standards


Review each changed file. These apply regardless of framework or methodology.

TypeScript / JavaScript


  • No `any` types — use proper types, generics, or `unknown`
  • No unused variables or imports
  • Named constants for magic strings/numbers (no hardcoded values inline)
  • Early returns / guard clauses — avoid nesting deeper than 3 levels
  • Functions do one thing (single responsibility)
  • Functions fit in ~30 lines (soft limit — exceed it rather than force artificial splits)
  • Descriptive names — functions read as sentences (`getUserByEmail`, not `getUser`)
  • No commented-out code left behind
  • No `console.log` in production paths
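A hypothetical snippet illustrating several of these standards at once — a named constant instead of a magic number, guard clauses instead of nesting, and `unknown` narrowed with a type guard rather than `any`. The function names and shapes are invented for illustration, not part of the checklist:

```typescript
// Named constant instead of an inline magic number.
const MAX_RETRIES = 3;

interface User {
  email: string;
}

// A type guard narrows `unknown` safely instead of reaching for `any`.
function isUser(value: unknown): value is User {
  return (
    typeof value === "object" &&
    value !== null &&
    typeof (value as User).email === "string"
  );
}

// Guard clauses keep nesting flat; the name reads as a sentence.
function getUserEmail(value: unknown, attempt: number): string | null {
  if (attempt > MAX_RETRIES) return null;
  if (!isUser(value)) return null;
  return value.email;
}
```

The same logic written with nested `if` blocks and an `any` parameter would fail the first and fourth checks above.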

React / UI (if applicable)


  • Components have explicit Props types
  • Hooks called before any early returns
  • Loading, error, and empty states handled
  • No unnecessary re-render triggers (e.g., inline anonymous functions in JSX)
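One way to make the loading/error/empty requirement hard to forget is a discriminated union for the view state, so rendering code must handle every case. This is an illustrative pattern (the names are invented), not something the checklist mandates:

```typescript
interface Item {
  id: number;
  name: string;
}

// Each UI state is explicit; there is no "forgot the error case" state.
type ViewState =
  | { status: "loading" }
  | { status: "error"; message: string }
  | { status: "empty" }
  | { status: "ready"; items: Item[] };

// The switch is exhaustive over every member of the union.
function describe(state: ViewState): string {
  switch (state.status) {
    case "loading":
      return "Loading...";
    case "error":
      return `Error: ${state.message}`;
    case "empty":
      return "No items yet";
    case "ready":
      return `${state.items.length} item(s)`;
  }
}
```

In a React component the same union would drive which JSX branch renders, with the compiler flagging any unhandled state.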

General


  • No deep nesting — prefer flat logic and early exits
  • No duplicate logic — if the same pattern appears 3+ times, it should be extracted
  • Immutable patterns where possible (avoid mutating arrays/objects in place)
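A small sketch of the immutable-update rule — returning a new array instead of mutating in place. The `Task` shape and function name are hypothetical:

```typescript
interface Task {
  id: number;
  done: boolean;
}

// Returns a new array with a new object for the matched task;
// the input array and its elements are never mutated.
function markDone(tasks: Task[], id: number): Task[] {
  return tasks.map((task) =>
    task.id === id ? { ...task, done: true } : task
  );
}
```

The mutating alternative (`task.done = true` inside a loop) would fail this check because callers holding the original array would see it change underneath them.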


Step 3: Methodology Compliance


If the task document has a `## Development Approach` section, validate against it:

TDD:
  • Does the implementation directly satisfy each acceptance criterion?
  • Are there test files included in the changed set (if TDD was specified)?

CDD (Component-Driven):
  • Were atomic components built before composite ones?
  • Are components independently usable, not tightly coupled to page context?

SOLID:
  • Single Responsibility: do files/functions have one clear job?
  • Dependency Inversion: are dependencies injected rather than hardcoded?

If no methodology was specified, skip this step.
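As an illustration of the Dependency Inversion check, here is a hypothetical service that receives its clock and logger rather than constructing them internally; all names are invented for the example:

```typescript
// Dependencies are described by interfaces the service owns...
interface Clock {
  now(): Date;
}

interface Logger {
  info(message: string): void;
}

// ...and injected at construction time, so tests can substitute fakes.
function createAuditLog(clock: Clock, logger: Logger) {
  return {
    record(event: string): string {
      const entry = `${clock.now().toISOString()} ${event}`;
      logger.info(entry);
      return entry;
    },
  };
}
```

A version that called `new Date()` and `console.log` directly would pass type-checking but fail this review item, because the dependencies would be hardcoded and untestable.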


Step 4: Deviation Classification


Compare the implementation against the task document's `## Acceptance Criteria` and `## File Changes`.

Minor deviation


  • Different approach, same outcome
  • Extra helper extracted that wasn't in the plan
  • Action: Document in Implementation Notes, proceed to PASS

Medium deviation


  • A plan assumption was wrong (API shape differs, component didn't exist)
  • Scope slightly larger or smaller but requirements still met
  • Action: Document clearly, flag to human in result summary, still PASS (reviewed in PR)

Major deviation


  • Core approach is unworkable — implementation cannot meet acceptance criteria
  • Requirements were misunderstood — what was built is fundamentally different
  • A blocking dependency is missing
  • Action: STOP. Write deviation summary. Do NOT chain to /test.


Step 5: Write Implementation Notes


Write (or update) this section in the task document. This is the primary context `/test` uses — so be specific and concrete:

```markdown
## Implementation Notes

Simplify Review: PASS | FAIL
Reviewed: {Date}

### What was built
{Concrete description of behavior — what the user/system can now do, not what files changed}

### How to access for testing
- URL: {if applicable}
- Entry point: {button, page, API endpoint}
- Test credentials: {if auth involved}
- Setup required: {seed data, env vars, migrations, etc.}

### Deviations from plan
{None | Description of minor/medium deviations found}

### Standards check
{Pass | List any issues found and how they were resolved}
```

---

Step 6: Result


PASS


```
Quality review: PASS for #{ID} - {Task Title}

Standards: ✓
Deviations: {None | Minor — documented}
Methodology: {N/A | Compliant}

Implementation Notes written to task doc.

[AUTO] Spawning /test...
```

Use the Task tool:

```
Task({ subagent_type: "general-purpose", model: "haiku", prompt: "/test {ID}" })
```

FAIL — Standards Issues


```
Quality review: FAIL for #{ID} - {Task Title}

Issues to fix before testing:
1. {file}: {issue}
2. {file}: {issue}

Run /implement {ID} to fix, then /simplify {ID} again.
```

Do NOT chain to /test.

FAIL — Major Deviation


```
Quality review: BLOCKED for #{ID} - {Task Title}

Major deviation:
{What was planned vs what was built}

Impact: {Why this prevents meaningful testing}

Options:
1. Re-plan  → /task revise {ID}
2. Continue → tell me to proceed and I'll document the deviation
3. Abandon  → I'll mark the task as blocked in TASKS.md
```


Auto Mode Behavior


When the task document has `Automation: auto`:
  • PASS → automatically spawns `/test {ID}` with haiku
  • FAIL (standards) → stops, notifies the user; cannot auto-fix
  • FAIL (major deviation) → stops, presents options, awaits human decision


Retry Context


When invoked as part of a retry cycle (after `/test` failed and `/implement` ran a fix), read the previous test report: `docs/testing/{ID}-{task-name}.md`

Before proceeding, verify that the fix actually addresses the specific failure:

```markdown
### Retry context (attempt {N})

- Previous failure: {summary of what /test reported}
- Fix applied: {what /implement changed}
- Fix verified: Yes | Partial | No
```

If the fix does not address the previous failure, FAIL immediately — this prevents burning another test run on the same issue.

---

Related Skills


| Skill | When to Use |
|-------|-------------|
| `/implement` | If fixes needed — go back, fix, re-run `/simplify` |
| `/test` | After PASS — automatically chained in auto mode |
| `/task` | If major deviation — revise the plan |