spec-kitty-clarify

Compare original and translation side by side

Original: English · Translation: Chinese

User Input

用户输入

```text
$ARGUMENTS
```
You MUST consider the user input before proceeding (if not empty).
```text
$ARGUMENTS
```
在继续之前,你必须考虑用户输入(如果非空)。

Outline

大纲

Goal: Detect and reduce ambiguity or missing decision points in the active feature specification and record the clarifications directly in the spec file.
Note: This clarification workflow is expected to run (and be completed) BEFORE invoking `/spec-kitty.plan`. If the user explicitly states they are skipping clarification (e.g., exploratory spike), you may proceed, but must warn that downstream rework risk increases.
Execution steps:
  1. Run `spec-kitty agent feature check-prerequisites --json --paths-only` from the repository root and parse JSON for:
    • feature_dir - Absolute path to the feature directory (e.g., `/path/to/kitty-specs/017-my-feature/`)
    • FEATURE_SPEC - Absolute path to the spec.md file
    • If the command fails or JSON parsing fails, abort and instruct the user to run `/spec-kitty.specify` first or verify they are in a spec-kitty-initialized repository.
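For illustration, the parsing half of this step could be sketched as follows. The key names and the example path come from the step above; the helper name and the ValueError convention are assumptions, not part of the CLI contract:

```python
import json

def parse_paths(stdout: str) -> tuple[str, str]:
    """Parse the prerequisite check's JSON into (feature_dir, FEATURE_SPEC).

    `stdout` is assumed to be the output of:
        spec-kitty agent feature check-prerequisites --json --paths-only
    Raises ValueError on malformed JSON or missing keys, in which case the
    caller should abort and point the user at /spec-kitty.specify.
    """
    try:
        data = json.loads(stdout)
        return data["feature_dir"], data["FEATURE_SPEC"]
    except (json.JSONDecodeError, KeyError) as exc:
        raise ValueError(
            "prerequisite check failed; run /spec-kitty.specify first "
            "or verify this is a spec-kitty-initialized repository"
        ) from exc
```

A caller would typically feed this the stdout of a `subprocess.run([...], capture_output=True, text=True)` invocation of the command above.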
  2. Load the current spec file. Perform a structured ambiguity & coverage scan using this taxonomy. For each category, mark status: Clear / Partial / Missing. Produce an internal coverage map used for prioritization (do not output raw map unless no questions will be asked).
    Functional Scope & Behavior:
    • Core user goals & success criteria
    • Explicit out-of-scope declarations
    • User roles / personas differentiation
    Domain & Data Model:
    • Entities, attributes, relationships
    • Identity & uniqueness rules
    • Lifecycle/state transitions
    • Data volume / scale assumptions
    Interaction & UX Flow:
    • Critical user journeys / sequences
    • Error/empty/loading states
    • Accessibility or localization notes
    Non-Functional Quality Attributes:
    • Performance (latency, throughput targets)
    • Scalability (horizontal/vertical, limits)
    • Reliability & availability (uptime, recovery expectations)
    • Observability (logging, metrics, tracing signals)
    • Security & privacy (authN/Z, data protection, threat assumptions)
    • Compliance / regulatory constraints (if any)
    Integration & External Dependencies:
    • External services/APIs and failure modes
    • Data import/export formats
    • Protocol/versioning assumptions
    Edge Cases & Failure Handling:
    • Negative scenarios
    • Rate limiting / throttling
    • Conflict resolution (e.g., concurrent edits)
    Constraints & Tradeoffs:
    • Technical constraints (language, storage, hosting)
    • Explicit tradeoffs or rejected alternatives
    Terminology & Consistency:
    • Canonical glossary terms
    • Avoided synonyms / deprecated terms
    Completion Signals:
    • Acceptance criteria testability
    • Measurable Definition of Done style indicators
    Misc / Placeholders:
    • TODO markers / unresolved decisions
    • Ambiguous adjectives ("robust", "intuitive") lacking quantification
    For each category with Partial or Missing status, add a candidate question opportunity unless:
    • Clarification would not materially change implementation or validation strategy
    • Information is better deferred to planning phase (note internally)
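For illustration, the internal coverage map from this scan can be held as a plain mapping from category to status. The category names and the Clear/Partial/Missing statuses come from the taxonomy above; the `Status` enum, the data structure, and the sample statuses are illustrative assumptions:

```python
from enum import Enum

class Status(Enum):
    CLEAR = "Clear"
    PARTIAL = "Partial"
    MISSING = "Missing"

# Internal coverage map built during the scan (not output
# unless no questions will be asked). Statuses are examples.
coverage = {
    "Functional Scope & Behavior": Status.CLEAR,
    "Domain & Data Model": Status.PARTIAL,
    "Interaction & UX Flow": Status.CLEAR,
    "Non-Functional Quality Attributes": Status.MISSING,
    "Integration & External Dependencies": Status.PARTIAL,
    "Edge Cases & Failure Handling": Status.MISSING,
    "Constraints & Tradeoffs": Status.CLEAR,
    "Terminology & Consistency": Status.CLEAR,
    "Completion Signals": Status.PARTIAL,
    "Misc / Placeholders": Status.CLEAR,
}

# Partial or Missing categories are candidate question opportunities.
candidates = [c for c, s in coverage.items() if s is not Status.CLEAR]
```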
  3. Generate (internally) a prioritized queue of candidate clarification questions (maximum 5). Do NOT output them all at once. Apply these constraints:
    • Maximum of 5 total questions across the whole session.
    • Each question must be answerable with EITHER:
      • A short multiple‑choice selection (2–5 distinct, mutually exclusive options), OR
      • A one-word / short‑phrase answer (explicitly constrain: "Answer in <=5 words").
    • Only include questions whose answers materially impact architecture, data modeling, task decomposition, test design, UX behavior, operational readiness, or compliance validation.
    • Ensure category coverage balance: attempt to cover the highest impact unresolved categories first; avoid asking two low-impact questions when a single high-impact area (e.g., security posture) is unresolved.
    • Exclude questions already answered, trivial stylistic preferences, or plan-level execution details (unless blocking correctness).
    • Favor clarifications that reduce downstream rework risk or prevent misaligned acceptance tests.
    • Scale thoroughness to the feature’s complexity: a lightweight enhancement may only need one or two confirmations, while multi-system efforts warrant the full question budget if gaps remain critical.
    • If more than 5 categories remain unresolved, select the top 5 by (Impact * Uncertainty) heuristic.
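The final selection rule can be sketched as a sort over the (Impact * Uncertainty) product. The heuristic itself comes from the step above; the 1-5 scoring scale and the tuple shape are illustrative assumptions:

```python
def top_questions(candidates, limit=5):
    """Rank candidates by impact * uncertainty and keep the top `limit`.

    Each candidate is (category, impact, uncertainty), with both scores
    on an assumed 1-5 scale.
    """
    ranked = sorted(candidates, key=lambda c: c[1] * c[2], reverse=True)
    return [c[0] for c in ranked[:limit]]
```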
  4. Sequential questioning loop (interactive):
    • Present EXACTLY ONE question at a time.
    • For multiple-choice questions, list options inline using letter prefixes rather than tables, e.g.:
      Options: (A) describe option A · (B) describe option B · (C) describe option C · (D) short custom answer (<=5 words)
      Ask the user to reply with the letter (or short custom text when offered).
    • For short-answer style (no meaningful discrete options), output a single line after the question: `Format: Short answer (<=5 words)`.
    • After the user answers:
      • Validate the answer maps to one option or fits the <=5 word constraint.
      • If ambiguous, ask for a quick disambiguation (count still belongs to same question; do not advance).
      • Once satisfactory, record it in working memory (do not yet write to disk) and move to the next queued question.
    • Stop asking further questions when:
      • All critical ambiguities resolved early (remaining queued items become unnecessary), OR
      • User signals completion ("done", "good", "no more"), OR
      • You reach 5 asked questions.
    • Never reveal future queued questions in advance.
    • If no valid questions exist at start, immediately report no critical ambiguities.
  5. Integration after EACH accepted answer (incremental update approach):
    • Maintain in-memory representation of the spec (loaded once at start) plus the raw file contents.
    • For the first integrated answer in this session:
      • Ensure a `## Clarifications` section exists (create it just after the highest-level contextual/overview section per the spec template if missing).
      • Under it, create (if not present) a `### Session YYYY-MM-DD` subheading for today.
    • Append a bullet line immediately after acceptance: `- Q: <question> → A: <final answer>`.
    • Then immediately apply the clarification to the most appropriate section(s):
      • Functional ambiguity → Update or add a bullet in Functional Requirements.
      • User interaction / actor distinction → Update User Stories or Actors subsection (if present) with clarified role, constraint, or scenario.
      • Data shape / entities → Update Data Model (add fields, types, relationships) preserving ordering; note added constraints succinctly.
      • Non-functional constraint → Add/modify measurable criteria in Non-Functional / Quality Attributes section (convert vague adjective to metric or explicit target).
      • Edge case / negative flow → Add a new bullet under Edge Cases / Error Handling (or create such subsection if template provides placeholder for it).
      • Terminology conflict → Normalize term across spec; retain original only if necessary by adding `(formerly referred to as "X")` once.
    • If the clarification invalidates an earlier ambiguous statement, replace that statement instead of duplicating; leave no obsolete contradictory text.
    • Save the spec file AFTER each integration to minimize risk of context loss (atomic overwrite).
    • Preserve formatting: do not reorder unrelated sections; keep heading hierarchy intact.
    • Keep each inserted clarification minimal and testable (avoid narrative drift).
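The incremental write-back for a single accepted answer can be sketched as plain string editing. This is a simplified illustration: a missing `## Clarifications` section is appended at the end of the file rather than after the overview section, and the spec is treated as one string rather than a parsed Markdown tree:

```python
import datetime

def record_clarification(spec: str, question: str, answer: str) -> str:
    """Add a `- Q: ... → A: ...` bullet under today's session heading.

    Simplified sketch: assumes the Clarifications section (once present)
    sits at the end of the spec, so bullets can be appended to the tail.
    """
    today = datetime.date.today().isoformat()
    session = f"### Session {today}"
    bullet = f"- Q: {question} → A: {answer}"
    if "## Clarifications" not in spec:
        spec = spec.rstrip("\n") + "\n\n## Clarifications\n"
    if session not in spec:
        spec = spec.rstrip("\n") + f"\n\n{session}\n"
    return spec.rstrip("\n") + f"\n{bullet}\n"
```

In the real workflow the result would be written back to FEATURE_SPEC (atomic overwrite) after each integration.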
  6. Validation (performed after EACH write plus final pass):
    • Clarifications session contains exactly one bullet per accepted answer (no duplicates).
    • Total asked (accepted) questions ≤ 5.
    • Updated sections contain no lingering vague placeholders the new answer was meant to resolve.
    • No contradictory earlier statement remains (scan for now-invalid alternative choices removed).
    • Markdown structure valid; only allowed new headings: `## Clarifications`, `### Session YYYY-MM-DD`.
    • Terminology consistency: same canonical term used across all updated sections.
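Parts of this validation pass can be automated. The sketch below covers the bullet-count, duplicate, and question-quota rules; detecting newly introduced headings would require a diff against the pre-session spec, which this sketch omits:

```python
def validate_spec(spec: str, accepted_answers: int) -> list[str]:
    """Return validation problems for the rules above (empty list = pass)."""
    problems = []
    bullets = [
        line.strip()
        for line in spec.splitlines()
        if line.strip().startswith("- Q:")
    ]
    if len(bullets) != accepted_answers:
        problems.append("one bullet per accepted answer violated")
    if len(set(bullets)) != len(bullets):
        problems.append("duplicate clarification bullet")
    if accepted_answers > 5:
        problems.append("asked more than 5 questions")
    return problems
```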
  7. Write the updated spec back to `FEATURE_SPEC`.
  8. Report completion (after questioning loop ends or early termination):
    • Number of questions asked & answered.
    • Path to updated spec.
    • Sections touched (list names).
    • Coverage summary listing each taxonomy category with a status label (Resolved / Deferred / Clear / Outstanding). Present as plain text or bullet list, not a table.
    • If any Outstanding or Deferred remain, recommend whether to proceed to `/spec-kitty.plan` or run `/spec-kitty.clarify` again later post-plan.
    • Suggested next command.
Behavior rules:
  • If no meaningful ambiguities found (or all potential questions would be low-impact), respond: "No critical ambiguities detected worth formal clarification." and suggest proceeding.
  • If spec file missing, instruct user to run `/spec-kitty.specify` first (do not create a new spec here).
  • Never exceed 5 total asked questions (clarification retries for a single question do not count as new questions).
  • Avoid speculative tech stack questions unless the absence blocks functional clarity.
  • Respect user early termination signals ("stop", "done", "proceed").
  • If no questions asked due to full coverage, output a compact coverage summary (all categories Clear) then suggest advancing.
  • If quota reached with unresolved high-impact categories remaining, explicitly flag them under Deferred with rationale.
Context for prioritization: User arguments from $ARGUMENTS section above (if provided). Use these to focus clarification on specific areas of concern mentioned by the user.
目标:检测并减少当前功能规格说明书中的模糊点或缺失的决策点,并将澄清内容直接记录在规格文件中。
注意:此澄清工作流应在调用 `/spec-kitty.plan` 之前运行(并完成)。如果用户明确表示要跳过澄清(例如探索性研究),你可以继续操作,但必须警告用户后续返工风险会增加。
执行步骤:
  1. 从仓库根目录运行 `spec-kitty agent feature check-prerequisites --json --paths-only` 命令,并解析JSON以获取:
    • feature_dir - 功能目录的绝对路径(例如:`/path/to/kitty-specs/017-my-feature/`)
    • FEATURE_SPEC - spec.md文件的绝对路径
    • 如果命令执行失败或JSON解析失败,终止操作并指导用户先运行 `/spec-kitty.specify`,或验证他们是否处于已初始化spec-kitty的仓库中。
  2. 加载当前规格文件。使用以下分类框架进行结构化的模糊点与覆盖范围扫描。对于每个类别,标记状态:清晰/部分缺失/完全缺失。生成用于优先级排序的内部覆盖映射(除非没有问题要问,否则不要输出原始映射)。
    功能范围与行为:
    • 核心用户目标与成功标准
    • 明确的范围外声明
    • 用户角色/人物角色区分
    领域与数据模型:
    • 实体、属性、关系
    • 标识与唯一性规则
    • 生命周期/状态转换
    • 数据量/规模假设
    交互与用户体验流程:
    • 关键用户旅程/操作序列
    • 错误/空/加载状态
    • 可访问性或本地化说明
    非功能质量属性:
    • 性能(延迟、吞吐量目标)
    • 可扩展性(水平/垂直扩展、限制)
    • 可靠性与可用性(正常运行时间、恢复预期)
    • 可观测性(日志、指标、追踪信号)
    • 安全与隐私(身份认证/授权、数据保护、威胁假设)
    • 合规/监管约束(如有)
    集成与外部依赖:
    • 外部服务/API及其故障模式
    • 数据导入/导出格式
    • 协议/版本假设
    边缘情况与故障处理:
    • 负面场景
    • 速率限制/流量控制
    • 冲突解决(例如并发编辑)
    约束与权衡:
    • 技术约束(语言、存储、托管)
    • 明确的权衡或被否决的替代方案
    术语与一致性:
    • 标准术语表术语
    • 避免使用的同义词/已弃用术语
    完成信号:
    • 验收标准的可测试性
    • 可衡量的完成定义(Definition of Done)式指标
    其他/占位符:
    • TODO标记/未解决的决策
    • 缺乏量化的模糊形容词(如“健壮”、“直观”)
    对于每个状态为部分缺失或完全缺失的类别,添加候选提问机会,除非:
    • 澄清内容不会实质性改变实现或验证策略
    • 信息最好推迟到规划阶段(内部记录)
  3. (内部)生成优先级排序的候选澄清问题队列(最多5个)。不要一次性输出所有问题。应用以下约束:
    • 整个会话中最多5个问题。
    • 每个问题必须可以通过以下方式之一回答:
      • 简短的多项选择(2-5个不同、互斥的选项),或者
      • 一个单词/短语答案(明确限制:“回答不超过5个单词”)。
    • 仅包含那些答案会对架构、数据建模、任务分解、测试设计、用户体验行为、运维就绪性或合规验证产生实质性影响的问题。
    • 确保类别覆盖平衡:优先覆盖影响最高的未解决类别;当存在单个高影响领域(如安全态势)未解决时,避免提出两个低影响问题。
    • 排除已回答的问题、无关紧要的风格偏好或计划层面的执行细节(除非会阻碍正确性)。
    • 优先选择能减少后续返工风险或防止验收测试不一致的澄清内容。
    • 根据功能的复杂程度调整细致度:轻量级增强可能只需要1-2次确认,而多系统项目如果仍存在关键空白,则需要使用全部问题配额。
    • 如果有超过5个类别未解决,选择(影响度×不确定性)得分最高的5个类别。
  4. 顺序提问循环(交互式):
    • 每次只展示一个问题。
    • 对于多项选择问题,使用字母前缀内联列出选项,例如:
      选项:(A) 描述选项A · (B) 描述选项B · (C) 描述选项C · (D) 简短自定义回答(≤5个单词)
      要求用户回复对应的字母(或提供的简短自定义文本)。
    • 对于简短回答类型(无有意义的离散选项),在问题后输出一行:`格式:简短回答(≤5个单词)`。
    • 用户回答后:
      • 验证答案是否匹配某个选项或符合≤5个单词的限制。
      • 如果答案模糊,要求快速澄清(此次重试不计入新问题,不推进到下一个问题)。
      • 一旦答案符合要求,将其记录到工作内存中(暂不写入磁盘),然后进入下一个排队的问题。
    • 在以下情况停止提问:
      • 所有关键模糊点已提前解决(剩余排队问题无需再问),或者
      • 用户表示结束(如“完成”、“好的”、“没有更多问题了”),或者
      • 已提出5个问题。
    • 不要提前透露未来的排队问题。
    • 如果一开始就没有有效问题,立即报告未检测到关键模糊点。
  5. 每次接受答案后进行集成(增量更新方式):
    • 维护规格说明书的内存表示(在开始时加载一次)以及原始文件内容。
    • 对于本次会话中的第一个集成答案:
      • 确保存在 `## 澄清内容` 章节(如果缺失,按照规格模板在最高级别的上下文/概述章节之后创建)。
      • 在该章节下,创建(如果不存在)`### 会话 YYYY-MM-DD` 子标题(YYYY-MM-DD为当天日期)。
    • 接受答案后立即添加一条项目符号:`- 问题:<问题内容> → 答案:<最终答案>`。
    • 然后立即将澄清内容应用到最合适的章节:
      • 功能模糊点 → 更新或添加功能需求中的项目符号。
      • 用户交互/角色区分 → 更新用户故事或角色小节(如有),添加澄清后的角色、约束或场景。
      • 数据结构/实体 → 更新数据模型(添加字段、类型、关系),保留原有顺序;简洁记录新增的约束。
      • 非功能约束 → 在非功能/质量属性章节中添加/修改可衡量的标准(将模糊形容词转换为指标或明确目标)。
      • 边缘情况/负面流程 → 在边缘情况/错误处理下添加新的项目符号(如果模板提供了占位符,则创建该小节)。
      • 术语冲突 → 在整个规格说明书中统一术语;仅在必要时保留原始术语,添加一次 `(原称“X”)` 标记。
    • 如果澄清内容使之前的模糊陈述无效,替换该陈述而非重复;不要留下过时的矛盾文本。
    • 每次集成后保存规格文件,以最大程度减少上下文丢失的风险(原子覆盖)。
    • 保留格式:不要重新排序无关章节;保持标题层级不变。
    • 每个插入的澄清内容要简洁且可测试(避免偏离主题)。
  6. 验证(每次写入后及最终检查时执行):
    • 澄清会话中每个已接受的答案对应一个项目符号(无重复)。
    • 已提出(已接受)的问题总数 ≤5。
    • 更新后的章节中没有因新答案应解决而遗留的模糊占位符。
    • 不存在剩余的矛盾陈述(扫描并移除现在无效的备选选项)。
    • Markdown结构有效;仅允许新增标题:`## 澄清内容`、`### 会话 YYYY-MM-DD`。
    • 术语一致性:所有更新章节中使用相同的标准术语。
  7. 将更新后的规格说明书写回 `FEATURE_SPEC`。
  8. 报告完成情况(提问循环结束或提前终止后):
    • 已提出并回答的问题数量。
    • 更新后的规格文件路径。
    • 被修改的章节(列出名称)。
    • 覆盖范围摘要,列出每个分类框架的类别及其状态标签(已解决/推迟/清晰/未解决)。以纯文本或项目符号列表呈现,不要使用表格。
    • 如果存在未解决或推迟的问题,建议是否继续执行 `/spec-kitty.plan`,或在规划完成后再次运行 `/spec-kitty.clarify`。
    • 建议的下一个命令。
行为规则:
  • 如果未发现有意义的模糊点(或所有潜在问题的影响都很低),回复:“未检测到值得正式澄清的关键模糊点。”并建议继续推进。
  • 如果规格文件缺失,指导用户先运行 `/spec-kitty.specify`(不要在此处创建新的规格文件)。
  • 提出的问题总数不得超过5个(单个问题的澄清重试不计入新问题)。
  • 避免推测性的技术栈问题,除非相关信息缺失会阻碍功能清晰度。
  • 尊重用户的提前终止信号(如“停止”、“完成”、“继续”)。
  • 如果因覆盖范围完整而未提出任何问题,输出简洁的覆盖范围摘要(所有类别均为清晰),然后建议推进。
  • 如果已用完问题配额但仍存在未解决的高影响类别,明确将其标记为推迟并说明理由。
优先级排序上下文:来自上方$ARGUMENTS部分的用户参数(如有提供)。使用这些参数将澄清重点放在用户提到的特定关注领域。