
/simplify - Quality Gate Agent

Model: sonnet (reasoning needed to evaluate code quality and detect deviations)

Command Flags

| Flag | Short | Description |
|------|-------|-------------|
| `--help` | `-h` | Show available commands and options |
| `--version` | `-v` | Show workflow skills version |

Flag Handling

On `-h` or `--help`:

```
/simplify - Quality Gate Agent

Usage:
  /simplify {ID}                     Review implementation for a task
  /simplify -h, --help               Show this help message
  /simplify -v, --version            Show version

Arguments:
  {ID}    Task ID (number) or task filename (e.g., 001-auth-jwt)

Checks:
  - Coding standards (no any types, guard clauses, naming, etc.)
  - Methodology compliance (TDD/CDD/SOLID if specified in task doc)
  - Deviation classification (minor / medium / major)

Result:
  PASS → chains to /test
  FAIL → reports issues, blocks until resolved

Examples:
  /simplify 1                        # Review task #1 implementation
  /simplify 001-auth-jwt             # Using task filename

Next: /test {ID}
```
On `-v` or `--version`:

```
Workflow Skills v1.5.1
https://github.com/eljun/workflow-skills
```
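The flag handling above reduces to a simple dispatch. A minimal TypeScript sketch (the `dispatch` helper is hypothetical, for illustration only; the real skill is a prompt, not code):

```typescript
// Hypothetical dispatcher mirroring the flag-handling rules above.
function dispatch(arg: string): string {
  if (arg === "-h" || arg === "--help") return "show help text";
  if (arg === "-v" || arg === "--version") return "Workflow Skills v1.5.1";
  return `review task ${arg}`; // anything else is treated as a task ID
}

console.log(dispatch("--version")); // Workflow Skills v1.5.1
```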


Workflow

```
/simplify {ID}
1. Resolve task ID → read task document
2. Identify changed files (Implementation Notes or git diff)
3. Read changed files
4. Run coding standards checklist
5. Check methodology compliance (if specified in task doc)
6. Classify deviations from the plan
7. Write Implementation Notes to task document
┌─── Result ───────────────┐
│                          │
▼ PASS                     ▼ FAIL
Invoke /test {ID}          Report to human, stop
```
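The PASS/FAIL branch at the bottom of the workflow is a pure function of the review outcome. A TypeScript sketch under an assumed `Review` shape (hypothetical; the agent itself is prompt-driven):

```typescript
// PASS only if there are no standards issues and no major deviation.
type Severity = "minor" | "medium" | "major";
interface Review { standardsIssues: number; deviations: Severity[]; }

function gate(review: Review): "PASS" | "FAIL" {
  if (review.standardsIssues > 0) return "FAIL";
  return review.deviations.includes("major") ? "FAIL" : "PASS";
}

console.log(gate({ standardsIssues: 0, deviations: ["minor"] })); // PASS
```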

Step 1: Read Context

Resolve task ID and read the task document: `docs/task/{ID}-{task-name}.md`

Focus on:
  • `## Development Approach` — methodology to validate against
  • `## Acceptance Criteria` — what was supposed to be built
  • `## File Changes` — which files were planned to change

Then identify what was actually changed:

```bash
git diff --name-only main...HEAD
```

Read only the changed files — not the entire codebase.
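The ID resolution in this step can be sketched in TypeScript. This is a hypothetical helper, assuming numeric IDs map to zero-padded filename prefixes like `001-auth-jwt`:

```typescript
// Resolve a task argument to the docs/task/ filename prefix.
// "1" -> "001"; an already-qualified name like "001-auth-jwt" passes through.
function taskPrefix(arg: string): string {
  return /^\d+$/.test(arg) ? arg.padStart(3, "0") : arg;
}

console.log(taskPrefix("1"));            // 001
console.log(taskPrefix("001-auth-jwt")); // 001-auth-jwt
```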

Step 2: Coding Standards

Review each changed file. These apply regardless of framework or methodology.

TypeScript / JavaScript

  • No `any` types — use proper types, generics, or `unknown`
  • No unused variables or imports
  • Named constants for magic strings/numbers (no hardcoded values inline)
  • Early returns / guard clauses — avoid nesting deeper than 3 levels
  • Functions do one thing (single responsibility)
  • Functions fit in ~30 lines (soft limit — break if it forces artificial splits)
  • Descriptive names — functions read as sentences (`getUserByEmail`, not `getUser`)
  • No commented-out code left behind
  • No `console.log` in production paths
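A small TypeScript sketch of the guard-clause and named-constant rules above (the `User` type and `MAX_RETRIES` constant are hypothetical, for illustration):

```typescript
// Named constant instead of an inline magic number.
const MAX_RETRIES = 3;

interface User { email: string; active: boolean; retries: number; }

function canRetry(user: User | null): boolean {
  // Early returns keep the logic flat instead of an if/else pyramid.
  if (user === null) return false;
  if (!user.active) return false;
  return user.retries < MAX_RETRIES;
}

console.log(canRetry({ email: "a@b.c", active: true, retries: 1 })); // true
```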

React / UI (if applicable)

  • Components have explicit Props types
  • Hooks called before any early returns
  • Loading, error, and empty states handled
  • No unnecessary re-render triggers (inline anonymous functions in JSX)
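The explicit-Props and state-handling checks can be illustrated with a plain TypeScript sketch (hypothetical names; a discriminated union is one way to make loading/error/empty states impossible to skip):

```typescript
// Explicit Props shape for a hypothetical list component.
interface UserListProps { ids: string[]; }

// Modeling loading/error/empty/ready forces each state to be handled.
type FetchState<T> =
  | { status: "loading" }
  | { status: "error"; message: string }
  | { status: "empty" }
  | { status: "ready"; data: T };

function render(state: FetchState<string[]>): string {
  switch (state.status) {
    case "loading": return "spinner";
    case "error":   return `error: ${state.message}`;
    case "empty":   return "no items yet";
    case "ready":   return state.data.join(", ");
  }
}

console.log(render({ status: "ready", data: ["a", "b"] })); // a, b
```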

General

  • No deep nesting — prefer flat logic and early exits
  • No duplicate logic — if the same pattern appears 3+ times, it should be extracted
  • Immutable patterns where possible (avoid mutating arrays/objects in place)

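A quick TypeScript illustration of the immutable-pattern rule (spread copies instead of in-place mutation):

```typescript
// Arrays: build a new array rather than push onto the original.
const tags = ["a", "b"];
const withNew = [...tags, "c"];          // tags is untouched

// Objects: build a new object rather than assigning into the original.
const user = { name: "Ada", active: false };
const activated = { ...user, active: true };

console.log(tags.length, withNew.length); // 2 3
```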

Step 3: Methodology Compliance

If the task document has a `## Development Approach` section, validate against it:

TDD:
  • Does the implementation directly satisfy each acceptance criterion?
  • Are there test files included in the changed set (if TDD was specified)?

CDD (Component-Driven):
  • Were atomic components built before composite ones?
  • Are components independently usable, not tightly coupled to page context?

SOLID:
  • Single Responsibility: do files/functions have one clear job?
  • Dependency Inversion: are dependencies injected rather than hardcoded?

If no methodology was specified, skip this step.

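The Dependency Inversion check can be illustrated in TypeScript (hypothetical `UserStore`/`UserService` names; because the store is injected, tests can pass an in-memory fake):

```typescript
// Depend on an abstraction, not a concrete database client.
interface UserStore { find(id: string): string | undefined; }

class UserService {
  constructor(private store: UserStore) {}   // injected, not hardcoded

  greet(id: string): string {
    const name = this.store.find(id);
    return name === undefined ? "unknown user" : `hello, ${name}`;
  }
}

// An in-memory fake satisfies the same interface.
const inMemory: UserStore = { find: (id) => (id === "1" ? "Ada" : undefined) };
const service = new UserService(inMemory);
console.log(service.greet("1")); // hello, Ada
```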

Step 4: Deviation Classification

Compare the implementation against the task document's `## Acceptance Criteria` and `## File Changes`.

Minor deviation

  • Different approach, same outcome
  • Extra helper extracted that wasn't in the plan
  • Action: Document in Implementation Notes, proceed to PASS

Medium deviation

  • A plan assumption was wrong (API shape differs, component didn't exist)
  • Scope slightly larger or smaller but requirements still met
  • Action: Document clearly, flag to human in result summary, still PASS (reviewed in PR)

Major deviation

  • Core approach is unworkable — implementation cannot meet acceptance criteria
  • Requirements were misunderstood — what was built is fundamentally different
  • A blocking dependency is missing
  • Action: STOP. Write deviation summary. Do NOT chain to /test.

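The three severity levels map to actions mechanically; a TypeScript sketch of that mapping (hypothetical helper, for illustration only):

```typescript
// Map the worst deviation found to the action prescribed above.
type Deviation = "minor" | "medium" | "major";

function nextAction(worst: Deviation | null): string {
  if (worst === null) return "PASS";
  if (worst === "minor") return "document in Implementation Notes, PASS";
  if (worst === "medium") return "document, flag to human, PASS";
  return "STOP; do not chain to /test";
}

console.log(nextAction("major")); // STOP; do not chain to /test
```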

Step 5: Write Implementation Notes

Write (or update) this section in the task document. This is the primary context `/test` uses — so be specific and concrete:

```markdown
## Implementation Notes

Simplify Review: PASS | FAIL
Reviewed: {Date}

### What was built
{Concrete description of behavior — what the user/system can now do, not what files changed}

### How to access for testing
- URL: {if applicable}
- Entry point: {button, page, API endpoint}
- Test credentials: {if auth involved}
- Setup required: {seed data, env vars, migrations, etc.}

### Deviations from plan
{None | Description of minor/medium deviations found}

### Standards check
{Pass | List any issues found and how they were resolved}

---
```

Step 6: Result

PASS

```
Quality review: PASS for #{ID} - {Task Title}

Standards: ✓
Deviations: {None | Minor — documented}
Methodology: {N/A | Compliant}

Implementation Notes written to task doc.

[AUTO] Spawning /test...
```

Use Task tool:

```
Task({ subagent_type: "general-purpose", model: "haiku", prompt: "/test {ID}" })
```

FAIL — Standards Issues

```
Quality review: FAIL for #{ID} - {Task Title}

Issues to fix before testing:
1. {file}: {issue}
2. {file}: {issue}

Run /implement {ID} to fix, then /simplify {ID} again.
Do NOT chain to /test.
```

FAIL — Major Deviation

```
Quality review: BLOCKED for #{ID} - {Task Title}

Major deviation:
{What was planned vs what was built}

Impact: {Why this prevents meaningful testing}

Options:
1. Re-plan  → /task revise {ID}
2. Continue → tell me to proceed and I'll document the deviation
3. Abandon  → I'll mark the task as blocked in TASKS.md
```


Auto Mode Behavior

When the task document has `Automation: auto`:
  • PASS → automatically spawns `/test {ID}` with haiku
  • FAIL (standards) → stops, notifies user, cannot auto-fix
  • FAIL (major deviation) → stops, presents options, awaits human decision


Retry Context

When invoked as part of a retry cycle (after `/test` failed and `/implement` ran a fix), read the previous test report: `docs/testing/{ID}-{task-name}.md`

Before proceeding, verify the fix actually addresses the specific failure:

```markdown
### Retry context (attempt {N})

- Previous failure: {summary of what /test reported}
- Fix applied: {what /implement changed}
- Fix verified: Yes | Partial | No
```

If the fix does not address the previous failure, FAIL immediately — this prevents burning another test run on the same issue.

---

Related Skills

| Skill | When to Use |
|-------|-------------|
| `/implement` | If fixes needed — go back, fix, re-run `/simplify` |
| `/test` | After PASS — automatically chained in auto mode |
| `/task` | If major deviation — revise the plan |