accelint-onboard-openspec

Onboard OpenSpec


Guide the user through a conversational interview to produce a complete, project-specific `openspec/config.yaml` configured for the QRSPI methodology.

Companion Skill


This skill produces the project DNA layer of the agent instruction stack: structural facts about what the project is. It is the companion to the `accelint-onboard-agents` skill, which produces the behavior layer (`AGENTS.md` / `CLAUDE.md`): how the agent acts, communicates, and makes decisions.

If during this interview the user volunteers behavioral content (commit conventions, workflow steps, decision heuristics, tool preferences), acknowledge it and redirect: "That's behavioral — it belongs in AGENTS.md. I'll note it here for reference, but the `accelint-onboard-agents` skill is the right place to capture it." Do not write behavioral content into `config.yaml`.

AGENTS.md / CLAUDE.md  → accelint-onboard-agents skill  → HOW the agent behaves
openspec/config.yaml   → this skill                     → WHAT the project is


Mental Model


The config has two jobs:
  1. `context:` — Objective facts about the codebase injected into every AI artifact. Think of it as the "DNA" that makes AI suggestions feel native to the project. Facts only, no opinions.
  2. `rules:` — Per-artifact checkpoints (proposal / design / tasks / spec) that encode the team's quality bar.

Phases


Phase 0 — File State Detection


Before any interview question is asked, check whether `openspec/config.yaml` exists and assess its state. Never silently pick a mode — always announce the detected mode to the user and confirm before proceeding.
Detection logic:
Does openspec/config.yaml exist?
├── No → MODE 1: Create
│         Full interview from scratch.
└── Yes → Read the file, then assess:
          ├── Empty or near-blank (schema: line only, no context/rules)?
          │     → MODE 1: Create (with overwrite confirmation)
          │       Ask: "config.yaml exists but appears empty — should I
          │       populate it from scratch, or preserve any current content?"
          ├── Contains recognised fields?
          │   (context: block present, rules: block with known artifact keys)
          │     → MODE 3: Refresh
          │       Abbreviated interview covering only detected drift and
          │       unresolved # TODO: fill in markers.
          └── Contains real content in an unrecognised shape?
                → MODE 2: Import
                  Present three options (A / B / C) before proceeding.
Recognised shape = file is valid YAML with at least a `context:` key whose value is a non-empty string, or a `rules:` key with at least one of the known artifact IDs (`proposal`, `specs`, `design`, `tasks`).
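The detection logic above can be sketched as a small classifier. This is a minimal illustration, not part of the skill: the function name `detect_mode` and the regex heuristics standing in for a full YAML parse are assumptions.

```python
import re
from pathlib import Path

KNOWN_ARTIFACTS = {"proposal", "specs", "design", "tasks"}

def detect_mode(repo_root: str) -> str:
    """Classify openspec/config.yaml into Create / Import / Refresh."""
    path = Path(repo_root) / "openspec" / "config.yaml"
    if not path.exists():
        return "MODE 1: Create"
    text = path.read_text()
    # Non-comment, non-blank lines; an empty or schema-only file is "near-blank".
    lines = [l for l in text.splitlines()
             if l.strip() and not l.strip().startswith("#")]
    if all(l.startswith("schema:") for l in lines):
        return "MODE 1: Create (confirm overwrite)"
    # Recognised shape: a context: key, or a rules: block with a known artifact ID.
    has_context = re.search(r"^context:\s*\|?", text, re.M) is not None
    has_rules = re.search(r"^rules:", text, re.M) is not None
    rule_keys = set(re.findall(r"^\s{2}(\w+):", text, re.M))
    if has_context or (has_rules and rule_keys & KNOWN_ARTIFACTS):
        return "MODE 3: Refresh"
    return "MODE 2: Import"
```

A real implementation would parse the YAML properly; the regex version only mirrors the decision tree for illustration.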


Mode 1: Create


Run the full Phase 1 → Phase 2 → Phase 3 → Phase 4 interview. This is the happy path for a fresh repo.


Mode 2: Import


The file has real content that was not generated by this skill. Present the user with three options before touching anything:

"This `config.yaml` has existing content with a structure I don't recognise. How would you like to proceed?

(a) Restructure — I'll import your existing content, map it onto the `context:` / `rules:` schema, flag any material that belongs in `AGENTS.md` instead (workflow steps, commit conventions, tool preferences), run a targeted interview to fill gaps, and produce a merged file ready to replace the current one.

(b) Append — I'll run the full interview and add the skill's `context:` and `rules:` sections alongside your existing content without modifying what's already there.

(c) Dry run — I'll run the full interview and show you exactly what I would have generated, with no changes to the filesystem. Use this to evaluate fit before committing."

If option (a) is chosen:
  1. Read the file in full.
  2. Map existing content onto `context:` sub-sections and `rules:` artifact keys where possible.
  3. Flag any content that violates the separation-of-concerns boundary (e.g., commit conventions, workflow steps, tool preferences, agent decision heuristics) — these belong in `AGENTS.md`. For each violation, ask: "This looks behavioral — it belongs in AGENTS.md. Should I move it there and remove it from config.yaml?"
  4. Run a targeted interview covering only the gaps (context sub-sections with no existing coverage; artifact keys with no rules).
  5. Show a merged preview before writing. Existing content is labelled `# from existing file`; new content is labelled `# new`.

If option (b) is chosen: Run the full Phase 1 → Phase 4 interview and write the generated `context:` and `rules:` blocks alongside existing content. Add a comment at the top: `# Sections below added by accelint-onboard-openspec skill`.

If option (c) is chosen: Run the full Phase 1 → Phase 4 interview and present the output in the conversation. Explicitly state: "No files were changed." Offer to re-run as (a) or (b) if the user is satisfied.
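Option (b)'s append behaviour is simple enough to sketch. A minimal, hypothetical illustration — the function `append_config_sections` and its idempotency check are assumptions, not part of the skill:

```python
from pathlib import Path

MARKER = "# Sections below added by accelint-onboard-openspec skill"

def append_config_sections(config_path: str, generated: str) -> None:
    """Append generated context:/rules: blocks without touching existing content."""
    path = Path(config_path)
    existing = path.read_text()
    if MARKER in existing:
        return  # already appended by a previous run; leave the file alone
    separator = "" if existing.endswith("\n") else "\n"
    path.write_text(existing + separator + "\n" + MARKER + "\n" + generated)
```

The marker comment doubles as an idempotency guard, so re-running the skill in append mode cannot duplicate its sections.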


Mode 3: Refresh


The file matches the skill's expected schema — it was likely produced by a previous run. Run an abbreviated interview covering only:

  1. Drift detection — scan the codebase for changes since the file was last updated:

     | Signal | Where to look |
     | --- | --- |
     | Runtime / Node version changed | `.nvmrc`, `.node-version`, `Dockerfile` |
     | New packages / frameworks added | `package.json` deps, workspace roots |
     | TypeScript config tightened | `tsconfig.json` — new `strict*` flags |
     | New packages in monorepo | `pnpm-workspace.yaml`, `turbo.json` |
     | Build tooling changed | `vite.config.*`, `tsup.config.*` |
     | CI/CD workflows added | `.github/workflows/` |
     | New domain concepts | New top-level directories, new entity types in source |
     | Anti-patterns deprecated | `@deprecated` tags, `// TODO: replace` comments added |

  2. Unresolved TODOs — find all `# TODO: fill in` markers left from the previous run and surface them as targeted questions.
  3. Announce findings before asking anything: "I found [N] context sections that may have drifted and [M] unresolved TODOs. I'll only ask about those — the rest looks current."
  4. After the targeted interview, show only the changed sections in the preview before writing. Do not re-emit unchanged sections.
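The file-based rows of the drift table can be checked mechanically against the config file's last-modified time. A sketch under stated assumptions — the signal map abbreviates the table, and mtime comparison is only one possible drift heuristic:

```python
from pathlib import Path

# Signal → glob patterns to check, mirroring the file-based rows of the table.
DRIFT_SIGNALS = {
    "Runtime / Node version": [".nvmrc", ".node-version", "Dockerfile"],
    "Packages / frameworks": ["package.json"],
    "TypeScript config": ["tsconfig.json"],
    "Monorepo packages": ["pnpm-workspace.yaml", "turbo.json"],
    "Build tooling": ["vite.config.*", "tsup.config.*"],
    "CI/CD workflows": [".github/workflows"],
}

def drifted_signals(repo_root: str, last_updated: float) -> list[str]:
    """Return the signals whose files changed after `last_updated` (an mtime)."""
    root = Path(repo_root)
    hits = []
    for signal, patterns in DRIFT_SIGNALS.items():
        for pattern in patterns:
            if any(p.stat().st_mtime > last_updated for p in root.glob(pattern)):
                hits.append(signal)
                break
    return hits
```

Semantic rows (new domain concepts, deprecated anti-patterns) still need the agent to read source, not just stat files.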


Phase 1 — Discovery Interview


Run the interview conversationally. Don't dump all questions at once. Group them into natural topic turns. If the user mentions a stack, infer related tooling and confirm rather than asking again.

Turn 1 — Project Identity
  • What is the project name and its primary purpose?
  • Monorepo, single package, or something else? If monorepo, what workspaces?
  • Build system / task orchestration? (Turbo, Nx, Make, npm scripts, Makefile…)
  • Package manager and any private registries? (npm, pnpm, yarn, bun…)

Turn 2 — Tech Stack (ask as a grouped block, not one by one)
  • Runtime and version (Node.js 20, Bun 1.x, Python 3.12, etc.)
  • Language + config (TypeScript strict? `exactOptionalPropertyTypes`? Python type hints?)
  • Framework(s) and version (React 18, Next.js 14, Express, FastAPI, etc.)
  • Key domain libraries (Deck.gl, Apache Arrow, Prisma, SQLAlchemy, etc.)
  • Data layer (Postgres, MongoDB, DynamoDB, ORM/query builder, data formats)
  • Testing setup (Vitest, Jest, Pytest, testing-library, Playwright, etc.)
  • Linting / formatting (ESLint, Biome, Prettier, Black, Ruff, etc.)
  • Build tools (Vite, tsup, esbuild, Webpack, etc.)
  • CI/CD (GitHub Actions, CircleCI, etc.)
  • Versioning approach (Changesets, standard-version, conventional commits, etc.)

Turn 3 — Architecture
  • How is the codebase organised? (feature-based, layer-based, domain-driven?)
  • Where does shared/utility code live?
  • Any path aliases? (`@/`, `~/`, `src/`, `#lib/`, etc.)
  • Design patterns commonly in use? (factory, repository, observer, CQRS, etc.)

Turn 4 — Domain Concepts
  • What are the 3–5 most important domain entities? Example prompt: "For a mapping app this might be Layer, Source, Viewport, Feature, Style."
  • Any domain-specific terminology the AI should know?
  • Any specialised concepts with non-obvious meanings in this codebase? Example: "orchestration" means something specific to us — it's the runtime layer that merges style with data, not a general workflow term.

Turn 5 — Performance
  • Any concrete performance targets? (p95 < 200 ms, 60 fps, < 50 MB heap, etc.)
  • Known hot paths or performance-critical areas?
  • Memory or bundle-size constraints?

Turn 6 — Code Patterns
  • Export style: named exports, default exports, or mixed?
  • Naming conventions: files, variables, functions, constants? Example: "kebab-case files, camelCase vars, SCREAMING_SNAKE_CASE for enums, PascalCase for types."
  • Error handling: throw, `Result<T,E>`, error boundaries, something else?
  • Testing structure: `describe/it`, `test/expect`, AAA pattern?
  • Test file location: co-located with source or a separate `__tests__/` tree?
  • Fixture / factory approach for test data?

Note: Commit message convention is a workflow procedure — it belongs in `AGENTS.md`, not here. If the user raises it now, capture it mentally and surface it in the `accelint-onboard-agents` skill. Do not add it to `config.yaml`.

Turn 7 — Anti-Patterns
  • Any patterns explicitly banned in code review?
  • Deprecated patterns still in the codebase that new code should NOT emulate?
  • Known performance traps specific to this stack?

Turn 8 — Proposal Rules
What does YOUR team require in a proposal? Good prompts:
  • "Do you need proposals to call out database migration impact?"
  • "Do you need proposals to flag API breaking changes?"
  • "Any security review checklist items?"

Turn 9 — Design Rules
Project-specific design concerns to encode? Good prompts:
  • "Docker / Kubernetes resource changes to document?"
  • "Performance implications section required?"
  • "Specific architecture diagram style (ASCII, Mermaid)?"

Turn 10 — Task Rules
  • How do you tag tasks by package or module? Example: `[PKG:auth]`, `[MODULE:pipeline]`, GitHub labels…
  • Rollback plan required for database changes?
  • Deployment-specific test gates (smoke tests, canary checks)?


Phase 2 — Smart Defaults


After each stack answer, surface relevant conventions to confirm. Use these examples as a pattern; extend to other stacks as appropriate.

Next.js + TypeScript + Tailwind → suggest confirming:
  • App Router vs Pages Router and which patterns apply
  • Server Component vs Client Component boundary rules
  • `"use client"` directive placement convention
  • API route organisation (`app/api/` vs `pages/api/`)

React + Vitest + testing-library → suggest confirming:
  • `userEvent` over `fireEvent` preference
  • `screen` query priority (role > label > testid)
  • `render` wrapper for providers

Python + FastAPI → suggest confirming:
  • Pydantic v1 vs v2 (different field-validator syntax)
  • Dependency injection for DB sessions (`Depends`)
  • Alembic migration workflow
  • `lifespan` vs `startup` / `shutdown` event hooks

Node.js + Prisma → suggest confirming:
  • `prisma.$transaction` patterns
  • Soft-delete vs hard-delete convention
  • Migration naming convention
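One way to wire these stack → convention mappings is a lookup keyed on stack signatures, where a suggestion fires when its signature is a subset of the detected stack. A sketch; the abbreviated prompt strings and the data-structure choice are illustrative, not part of the skill:

```python
# Stack signature → conventions to surface for confirmation (abbreviated).
SMART_DEFAULTS: dict[frozenset, list] = {
    frozenset({"next.js", "typescript"}): [
        "App Router vs Pages Router?",
        "Server/Client Component boundary rules?",
        '"use client" directive placement?',
        "API routes under app/api/ or pages/api/?",
    ],
    frozenset({"react", "vitest"}): [
        "Prefer userEvent over fireEvent?",
        "screen query priority (role > label > testid)?",
        "Custom render wrapper for providers?",
    ],
    frozenset({"python", "fastapi"}): [
        "Pydantic v1 or v2?",
        "DB sessions via Depends()?",
        "Alembic migration workflow?",
    ],
}

def suggested_confirmations(stack: set) -> list:
    """Collect every convention whose signature is a subset of the detected stack."""
    detected = {s.lower() for s in stack}
    suggestions = []
    for signature, prompts in SMART_DEFAULTS.items():
        if signature <= detected:
            suggestions.extend(prompts)
    return suggestions
```

Subset matching means a Next.js + TypeScript + Tailwind stack still triggers the Next.js + TypeScript entry; unknown stacks simply yield no suggestions.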


Phase 3 — Codebase Inference (fill gaps before generating)


After the interview, audit every config field that still has no answer. For each gap, attempt to derive the answer directly from the codebase before asking the user or leaving the field empty. All config sections are load-bearing — a missing field degrades every downstream AI artifact, so inference is always preferable to omission.

Inference targets and where to look:

| Gap | Files / signals to inspect |
| --- | --- |
| Runtime / Node version | `.nvmrc`, `.node-version`, `package.json#engines`, `Dockerfile` |
| TypeScript config | `tsconfig.json` (compilerOptions flags, paths aliases) |
| Package manager | `package-lock.json`, `yarn.lock`, `pnpm-lock.yaml`, `bun.lockb` |
| Monorepo workspaces | `package.json#workspaces`, `pnpm-workspace.yaml`, `turbo.json`, `nx.json` |
| Build tools | `vite.config.*`, `webpack.config.*`, `tsup.config.*`, esbuild scripts in `package.json` |
| Test framework | `vitest.config.*`, `jest.config.*`, `pytest.ini`, `pyproject.toml#tool.pytest` |
| Linting / formatting | `.eslintrc*`, `biome.json`, `.prettierrc*`, `ruff.toml` |
| CI/CD | `.github/workflows/`, `.circleci/`, `Jenkinsfile` |
| Versioning | `.changeset/`, `CHANGELOG.md`, `commitlint.config.*`, `.releaserc*` |
| Path aliases | `tsconfig.json#compilerOptions.paths`, `vite.config#resolve.alias` |
| Architecture organisation | Directory tree of `src/` or workspace roots — infer feature-based vs layer-based |
| Design patterns | Sample source files — look for factory functions, repository objects, observer hooks |
| Export style | Sample 3–5 source files; tally named vs default exports |
| Naming conventions | Sample file names, exported identifiers; describe what you observe |
| Error handling | Grep for `throw`, `Result`, `Either`, `tryCatch`, error boundary components |
| Test structure | Sample test files — describe/it nesting depth, file location relative to source |
| Anti-patterns | eslint rule overrides marked `off` or `warn`, comments like `// TODO: replace`, `@deprecated` |

After inference, for each field resolved this way, note the source in the preview with a trailing comment, e.g.:

```yaml
- Runtime: Node.js 20 LTS   # inferred from .nvmrc
- Language: TypeScript 5.4, strict, exactOptionalPropertyTypes   # inferred from tsconfig.json
```

If a field genuinely cannot be inferred (e.g., performance targets, domain concepts, team-specific rules), mark it with `# TODO: fill in` rather than omitting it. The user can resolve these after reviewing the preview. Do not silently drop a section — an explicit TODO is a prompt to act; an absent section is an invisible gap.
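As one concrete instance of the table, the package-manager row can be inferred from whichever lockfile is present. A minimal sketch — the function name is illustrative, and the tie-breaking policy (surface ambiguity as a question rather than guess) is an assumption consistent with the interview-first approach:

```python
from pathlib import Path

LOCKFILES = {                      # lockfile → package manager, per the table row
    "package-lock.json": "npm",
    "yarn.lock": "yarn",
    "pnpm-lock.yaml": "pnpm",
    "bun.lockb": "bun",
}

def infer_package_manager(repo_root: str):
    """Return the package manager, or None if zero or multiple lockfiles match."""
    found = [mgr for lock, mgr in LOCKFILES.items()
             if (Path(repo_root) / lock).exists()]
    return found[0] if len(found) == 1 else None  # ambiguity → ask the user
```

Returning `None` on a multi-lockfile repo keeps the rule "infer before asking, ask before omitting" intact: an ambiguous signal becomes a targeted question, not a silent guess.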


Phase 4 — Generation


  1. Show a labeled preview of the full config before writing anything. Inferred values carry their source comment; unresolved fields carry `# TODO: fill in`. This gives the user a complete picture of confidence level across every field.
  2. Ask: "Does this look right? Any sections to correct or expand before I write the file?"
  3. After confirmation, write to `openspec/config.yaml` (create directory if needed), stripping the inference source comments — they are for review only, not the final file.
  4. Print a brief summary of what was configured, what was inferred vs answered directly, and which `# TODO` fields still need human input.
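Step 3's comment-stripping can be sketched as a line filter that removes the review-only `# inferred from …` trailers while preserving `# TODO: fill in` markers. A sketch; the regex assumes the exact comment format shown in the Phase 3 preview example:

```python
import re

# Matches a trailing "# inferred from <source>" review comment.
INFERRED = re.compile(r"\s*# inferred from \S+$")

def strip_inference_comments(preview: str) -> str:
    """Remove review-only '# inferred from X' trailers; keep '# TODO: fill in'."""
    return "\n".join(INFERRED.sub("", line) for line in preview.splitlines())
```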


Config Template


Use this exact structure. Fill every `[placeholder]` with content from the interview or codebase inference. If a field cannot be resolved by either means, replace its placeholder with `# TODO: fill in` — never omit the field. Every section is load-bearing for downstream AI artifact quality.
```yaml
schema: spec-driven

# ═══════════════════════════════════════════════════════════════════════════
# PROJECT CONTEXT
# Injected into every AI-generated artifact (proposal, design, spec, tasks).
# QRSPI principle: objective research layer — facts only, no opinions.
# ═══════════════════════════════════════════════════════════════════════════
context: |
  ═══════════════════════════════════════════════════════════════════════════
  STACK FACTS
  ═══════════════════════════════════════════════════════════════════════════

  Project Identity
  [project name and one-sentence purpose]
  [repo structure: monorepo / single-package / workspaces list]
  [build system and task orchestration]
  [package manager + registries]

  Tech Stack
  - Runtime: [e.g., Node.js 20 LTS]
  - Language: [e.g., TypeScript 5.4, strict mode, exactOptionalPropertyTypes]
  - Framework: [e.g., Next.js 14 App Router]
  - Key Libraries: [domain-specific dependencies with versions]
  - Data Layer: [databases, ORMs, data formats, query builders]
  - Testing: [framework, utilities, coverage tooling]
  - Linting/Formatting: [tools and config files in use]
  - Build Tools: [bundlers, compilers, transpilers]
  - CI/CD: [platform and key workflow names]
  - Versioning: [release strategy and changelog tooling]

  Architecture Patterns
  - Organisation: [feature-based / layer-based / domain-driven / other]
  - Shared code: [path to shared utilities / packages]
  - Path aliases: [list of aliases and their resolved paths]
  - Key patterns: [design patterns in common use]

  Domain Concepts
  - [Entity or concept]: [one-line definition]
  - [Entity or concept]: [one-line definition]
  - [Entity or concept]: [one-line definition]

  Performance Targets
  - [metric]: [target value and context]

  ═══════════════════════════════════════════════════════════════════════════
  PATTERNS TO FOLLOW
  ═══════════════════════════════════════════════════════════════════════════

  Code Patterns
  - Exports: [named / default / mixed — and when each applies]
  - Naming: [files, variables, functions, constants, types]
  - Error handling: [throw / Result<T,E> / boundaries / other]
  - Validation: [approach and library]
  - Constants: [enum pattern or constant object pattern]

  Architecture Patterns
  - [pattern name]: [brief description of how it's used here]

  Testing Patterns
  - Structure: [describe/it nesting convention]
  - File location: [co-located / __tests__ / other]
  - Fixtures: [factory functions / fixture files / inline data]
  - Assertions: [preferred assertion style]
  - Benchmarks: [approach if any]

  NOTE: Commit message convention, PR workflow, and tool preferences
  are behavioral — they belong in AGENTS.md, not here.

  ═══════════════════════════════════════════════════════════════════════════
  PATTERNS TO AVOID
  ═══════════════════════════════════════════════════════════════════════════
  - [anti-pattern]: [why it's banned or deprecated]
  - [anti-pattern]: [why it's banned or deprecated]
```

```yaml
# ═══════════════════════════════════════════════════════════════════════════
# PER-ARTIFACT RULES
# ═══════════════════════════════════════════════════════════════════════════
rules:
  proposal: |
    # QRSPI: Scope definition, not a plan.
    - State the requirement or ticket driving this change
    - Define scope boundaries — explicitly list what is OUT of scope
    - Keep under 100 lines (tight and focused)
    [user-specific proposal rules]

  design: |
    # QRSPI: The "brain surgery" checkpoint — reviewed before any code is written.
    # Target ~200 lines capturing current state, desired state, open questions.

    # Required sections (in this order):
    - Start with "Current State": what the code does today, key files, entry
      points, relevant data flows
    - "Desired End State": what changes after this work, what stays the same
    - "Patterns to Follow": ONLY if specific files/functions to reference exist
      for this change's domain
    - "Patterns to Avoid": ONLY if specific anti-patterns apply to this change
    - "Open Questions": genuine uncertainties requiring human input. If none,
      state explicitly "No unresolved questions."
    - "Resolved Decisions": numbered (Decision 1, Decision 2…) with Choice,
      Rationale, Alternatives Considered

    # Technical depth:
    - Use ASCII diagrams for data flows, state machines, architecture
    - Call out performance implications where relevant
    [user-specific design rules]

    # Constraints:
    - Keep under 250 lines total

  tasks: |
    # QRSPI: Vertical slicing for early failure detection.

    # Vertical slicing (strong preference):
    - Order as vertical slices — each task delivers a testable end-to-end path
    - Do NOT group by architectural layer unless explicitly justified
    - Horizontal (layer-by-layer) only for pure infrastructure; include
      justification in the task description when used
    - Each task MUST include an explicit "Test:" line describing what to verify
      before proceeding to the next task
    - Prefer 3–5 major slices; more than 5 suggests scope is too large

    # Granularity:
    - Max 2 hours per task; break larger work into subtasks
    [user-specific task tagging, e.g., [PKG:name] or [MODULE:name]]
    - Call out inter-task dependencies explicitly
    [user-specific rollback requirements]
    [user-specific deployment test gates]

  spec: |
    - Use Given/When/Then for behaviour specifications
    - Include concrete example data relevant to the domain
    - Document edge cases explicitly
    [user-specific spec rules]
```


Interaction Principles


  • Conversational, not interrogative. Bundle related questions into a single turn. Use natural language, not bullet-dump forms.
  • Infer and confirm. "You mentioned Vitest — I'll assume you're using `@testing-library/react` for component tests; correct?" is better than asking from scratch.
  • Examples reduce ambiguity. When asking about naming conventions, give an example first so the user can pattern-match.
  • Iterative. Let the user amend answers. Don't lock them into the first response.
  • Preview before writing. Always show the full generated config and get explicit confirmation before touching the filesystem.
  • Infer before asking, ask before omitting. Always attempt codebase inference for any unanswered field. If inference fails, surface a `# TODO` rather than dropping the section. A config with explicit TODOs is actionable; a config with missing sections silently degrades every artifact it drives.