openspec-verify-change

Verify that an implementation matches the change artifacts (specs, tasks, design).
Input: Optionally specify a change name. If omitted, check whether it can be inferred from conversation context. If vague or ambiguous, you MUST prompt the user with the available changes.
Steps
  1. If no change name provided, prompt for selection
    Run `openspec list --json` to get available changes. Use the AskUserQuestion tool to let the user select.
    Show changes that have implementation tasks (tasks artifact exists). Include the schema used for each change if available. Mark changes with incomplete tasks as "(In Progress)".
    IMPORTANT: Do NOT guess or auto-select a change. Always let the user choose.
  2. Check status to understand the schema
    ```bash
    openspec status --change "<name>" --json
    ```
    Parse the JSON to understand:
    • `schemaName`: the workflow being used (e.g., "spec-driven")
    • Which artifacts exist for this change
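Consuming the status JSON is a few lines of scripting. The exact payload shape below is an assumption for illustration; only `schemaName` and the existence of artifacts are described above:

```python
import json

# Hypothetical shape of `openspec status --change "<name>" --json` output.
# Only `schemaName` and artifact presence are implied by this doc; the
# field names below are assumptions for illustration.
raw = """
{
  "schemaName": "spec-driven",
  "artifacts": [
    {"name": "tasks", "exists": true},
    {"name": "specs", "exists": true},
    {"name": "design", "exists": false}
  ]
}
"""

status = json.loads(raw)
workflow = status["schemaName"]  # e.g. "spec-driven"
present = [a["name"] for a in status["artifacts"] if a["exists"]]
```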
  3. Get the change directory and load artifacts
    ```bash
    openspec instructions apply --change "<name>" --json
    ```
    This returns the change directory and context files. Read all available artifacts from `contextFiles`.
  4. Initialize verification report structure
    Create a report structure with three dimensions:
    • Completeness: Track tasks and spec coverage
    • Correctness: Track requirement implementation and scenario coverage
    • Coherence: Track design adherence and pattern consistency
    Each dimension can have CRITICAL, WARNING, or SUGGESTION issues.
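As a sketch, the three-dimension report structure could be modeled like this (the class and field names are illustrative, not part of openspec):

```python
from dataclasses import dataclass, field

SEVERITIES = ("CRITICAL", "WARNING", "SUGGESTION")

@dataclass
class Issue:
    severity: str        # one of SEVERITIES
    message: str
    recommendation: str  # must be specific and actionable

@dataclass
class Report:
    completeness: list = field(default_factory=list)  # tasks + spec coverage
    correctness: list = field(default_factory=list)   # requirements + scenarios
    coherence: list = field(default_factory=list)     # design + patterns

report = Report()
report.completeness.append(Issue(
    "CRITICAL",
    "Incomplete task: write integration test",
    "Complete the task, or mark it done if already implemented",
))
```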
  5. Verify Completeness
    Task Completion:
    • If tasks.md exists in contextFiles, read it
    • Parse checkboxes: `- [ ]` (incomplete) vs `- [x]` (complete)
    • Count complete vs total tasks
    • If incomplete tasks exist:
      • Add CRITICAL issue for each incomplete task
      • Recommendation: "Complete task: <description>" or "Mark as done if already implemented"
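    The checkbox counting above can be sketched with a simple regex (assuming flush-left `- [ ]` / `- [x]` markers; real tasks.md files may nest items):

```python
import re

# Invented tasks.md content for illustration.
tasks_md = """\
- [x] Add --json flag
- [ ] Write integration test
- [x] Update README
"""

incomplete = re.findall(r"^- \[ \] (.+)$", tasks_md, flags=re.MULTILINE)
complete = re.findall(r"^- \[x\] (.+)$", tasks_md, flags=re.MULTILINE)
summary = f"{len(complete)}/{len(complete) + len(incomplete)} tasks complete"
```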
    Spec Coverage:
    • If delta specs exist in `openspec/changes/<name>/specs/`:
      • Extract all requirements (marked with "### Requirement:")
      • For each requirement:
        • Search codebase for keywords related to the requirement
        • Assess if implementation likely exists
      • If requirements appear unimplemented:
        • Add CRITICAL issue: "Requirement not found: <requirement name>"
        • Recommendation: "Implement requirement X: <description>"
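Extracting requirement headings from a delta spec is a one-line regex; the spec content here is invented for illustration:

```python
import re

delta_spec = """\
### Requirement: CLI emits JSON output
The tool SHALL support a --json flag.

#### Scenario: Flag provided
...

### Requirement: Errors go to stderr
"""

requirements = re.findall(r"^### Requirement: (.+)$", delta_spec, re.MULTILINE)
```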
  6. Verify Correctness
    Requirement Implementation Mapping:
    • For each requirement from delta specs:
      • Search codebase for implementation evidence
      • If found, note file paths and line ranges
      • Assess if implementation matches requirement intent
      • If divergence detected:
        • Add WARNING: "Implementation may diverge from spec: <details>"
        • Recommendation: "Review <file>:<lines> against requirement X"
    Scenario Coverage:
    • For each scenario in delta specs (marked with "#### Scenario:"):
      • Check if conditions are handled in code
      • Check if tests exist covering the scenario
      • If scenario appears uncovered:
        • Add WARNING: "Scenario not covered: <scenario name>"
        • Recommendation: "Add test or implementation for scenario: <description>"
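A minimal sketch of the scenario-coverage heuristic, using a crude keyword match against test source (both inputs are invented for illustration; a real check should also inspect implementation code):

```python
import re

delta_spec = "#### Scenario: Flag provided\n#### Scenario: Missing change name\n"
scenarios = re.findall(r"^#### Scenario: (.+)$", delta_spec, re.MULTILINE)

# Hypothetical test file contents.
test_source = "def test_missing_change_name(): ..."

def looks_covered(name: str) -> bool:
    # Crude heuristic: every word of the scenario name appears in a test.
    keywords = [w.lower() for w in name.split()]
    return all(k in test_source.lower() for k in keywords)

uncovered = [s for s in scenarios if not looks_covered(s)]
```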
  7. Verify Coherence
    Design Adherence:
    • If design.md exists in contextFiles:
      • Extract key decisions (look for sections like "Decision:", "Approach:", "Architecture:")
      • Verify implementation follows those decisions
      • If contradiction detected:
        • Add WARNING: "Design decision not followed: <decision>"
        • Recommendation: "Update implementation or revise design.md to match reality"
    • If no design.md: Skip design adherence check, note "No design.md to verify against"
    Code Pattern Consistency:
    • Review new code for consistency with project patterns
    • Check file naming, directory structure, coding style
    • If significant deviations found:
      • Add SUGGESTION: "Code pattern deviation: <details>"
      • Recommendation: "Consider following project pattern: <example>"
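Key-decision extraction from design.md can be sketched the same way (the heading styles are assumptions; real design docs may phrase decisions differently):

```python
import re

# Invented design.md content for illustration.
design_md = """\
## Decision: Use SQLite for local cache
## Approach: Single-writer, WAL mode
"""

decisions = re.findall(
    r"^#+\s*(?:Decision|Approach|Architecture):\s*(.+)$",
    design_md,
    re.MULTILINE,
)
```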
  8. Generate Verification Report
    Summary Scorecard:
    ```plain
    ## Verification Report: <change-name>

    ### Summary
    | Dimension    | Status            |
    |--------------|-------------------|
    | Completeness | X/Y tasks, N reqs |
    | Correctness  | M/N reqs covered  |
    | Coherence    | Followed/Issues   |
    ```
    Issues by Priority:
    1. CRITICAL (Must fix before archive):
      • Incomplete tasks
      • Missing requirement implementations
      • Each with specific, actionable recommendation
    2. WARNING (Should fix):
      • Spec/design divergences
      • Missing scenario coverage
      • Each with specific recommendation
    3. SUGGESTION (Nice to fix):
      • Pattern inconsistencies
      • Minor improvements
      • Each with specific recommendation
    Final Assessment:
    • If CRITICAL issues: "X critical issue(s) found. Fix before archiving."
    • If only warnings: "No critical issues. Y warning(s) to consider. Ready for archive (with noted improvements)."
    • If all clear: "All checks passed. Ready for archive."
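The final-assessment wording above maps directly to issue counts:

```python
def final_assessment(criticals: int, warnings: int) -> str:
    """Pick the closing line of the report from issue counts."""
    if criticals:
        return f"{criticals} critical issue(s) found. Fix before archiving."
    if warnings:
        return (f"No critical issues. {warnings} warning(s) to consider. "
                "Ready for archive (with noted improvements).")
    return "All checks passed. Ready for archive."
```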
Verification Heuristics
  • Completeness: Focus on objective checklist items (checkboxes, requirements list)
  • Correctness: Use keyword search, file path analysis, and reasonable inference; don't require perfect certainty
  • Coherence: Look for glaring inconsistencies, don't nitpick style
  • False Positives: When uncertain, prefer SUGGESTION over WARNING, WARNING over CRITICAL
  • Actionability: Every issue must have a specific recommendation with file/line references where applicable
Graceful Degradation
  • If only tasks.md exists: verify task completion only, skip spec/design checks
  • If tasks + specs exist: verify completeness and correctness, skip design
  • If full artifacts: verify all three dimensions
  • Always note which checks were skipped and why
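The degradation rules reduce to a simple artifact-to-check mapping (a sketch; the check names are illustrative):

```python
def checks_to_run(artifacts: set) -> dict:
    """Map available artifacts to the checks that can be performed."""
    return {
        "task_completion": "tasks" in artifacts,
        "spec_coverage": "specs" in artifacts,
        "design_adherence": "design" in artifacts,
    }

# tasks + specs present, no design.md: skip design adherence and say why.
plan = checks_to_run({"tasks", "specs"})
skipped = [name for name, run in plan.items() if not run]
```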
Output Format
Use clear markdown with:
  • Table for summary scorecard
  • Grouped lists for issues (CRITICAL/WARNING/SUGGESTION)
  • Code references in the format `file.ts:123`
  • Specific, actionable recommendations
  • No vague suggestions like "consider reviewing"