writing-skills

Writing Skills

Objective

Produce professional-grade skills: high-signal, safe, portable, and reliably triggerable. This skill:
  • Writes or updates a skill directory (SKILL.md + optional scripts/, references/, assets/)
  • Generates agents/openai.yaml UI metadata
  • Runs validation (skillcheck)
  • Runs a critic review using $reviewing-skills and iterates until the bar is met

When to use / When not to use

Use when:
  • The user asks to create a new skill (SKILL.md + optional scripts/, references/, assets/).
  • The user asks to refactor, tighten, or “upgrade” an existing skill for trigger precision and token efficiency.
Do not use when:
  • The user only wants a rubric-based review/grade of an existing skill (use $reviewing-skills).
  • The request is not about a skill directory containing SKILL.md.

Quality Bar (default)

Target outcome:
  • No spec violations, and
  • Weighted score ≥ 4.5/5.0 (A- or better), and
  • No P1 findings
The rubric is owned by $reviewing-skills: reviewing-skills/references/skills-rubric.md (single source of truth).

Safety / Constraints (non-negotiable)

  • Never read, request, or paste secrets (.env, API keys, tokens, private keys, credentials).
  • Only write inside the user-specified skill directory. If the target path is unclear, ask.
  • Do not run commands that modify the repo unless the user explicitly asked for those changes.
  • Do not browse the web or call external systems unless the user explicitly requests it.
  • Do not execute untrusted code in the target repo (scripts/binaries/tests) unless the user explicitly asks and you can justify the risk.
  • If the skill being written can perform destructive actions, add explicit confirmation gates and “never do” rules.

Portability Requirement (Codex + Claude Code/Desktop + OpenCode)

Write skill instructions in capability language (search/read/edit/run commands) and avoid hard-coding one vendor’s tool names. If mentioning a product-specific tool, provide a short adapter note (“if unavailable, use shell + rg/sed”). For portability guidance, use references/portability.md.
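As a concrete fallback of that kind, search and edit reduce to portable shell commands (the demo file below is purely illustrative):

```shell
# Vendor-neutral fallback: grep/sed in place of a product-specific
# search or edit tool (demo file path is illustrative).
printf 'status: TODO\n' > /tmp/skill-demo.md
grep -n 'TODO' /tmp/skill-demo.md             # search; rg -n behaves alike
sed -i.bak 's/TODO/done/' /tmp/skill-demo.md  # edit in place, keep a backup
```

The same two primitives cover most "if tool unavailable" adapter notes.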

Workflow (decision-complete)

Update mode (keep diffs small)

If the user asked to update an existing skill (not create a new one):
  • Change only the requested parts; do not rewrite unrelated sections for style.
  • Preserve existing behavior unless it is a spec violation or causes mis-triggering.
  • Prioritize: trigger precision (description “when to use”), safety/guardrails, validation loop, then token efficiency.

0) Intake (ask only what matters)

Collect:
  • Skill name (hyphen-case)
  • What it does (1 sentence)
  • When to use (concrete triggers: file types, paths, scenarios)
  • Inputs/outputs (artifacts produced)
  • Safety constraints (read-only? destructive ops? secrets? web browsing?)
  • Resources needed: scripts/ vs references/ vs assets/

1) Skill Split Proposal (prevent mega-skills)

Before writing anything, produce a short proposal:
  • Should this be one skill or multiple?
  • Recommend companion skills when appropriate, e.g.:
    • reviewer/critic skill (grading, audits)
    • installer skill (wiring tools or repo integration)
    • domain-reference skill (big schemas or policies)
Rule of thumb:
  • If the request spans multiple disjoint workflows, split.
  • If the skill needs deterministic, repeatable logic, add a scripts/ helper.

2) Scaffold the skill directory

Create the skill directory under the user-specified path:
  • <skill-name>/SKILL.md — use references/skill-skeleton.md as the template
  • <skill-name>/agents/openai.yaml — include interface.display_name, short_description (25–64 chars), and default_prompt (must mention $<skill-name>)
  • Resource subdirs (scripts/, references/, assets/) — only create the ones you will use
Prefer minimal resources. Validation is expected to fail until you fill in the TODOs in Step 3.
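A minimal sketch of the agents/openai.yaml metadata described above (all values are placeholders; the field paths are the ones named in this step):

```yaml
interface:
  display_name: "My Skill"
  # 25–64 characters:
  short_description: "Scaffolds and validates project skill folders."
  # Must mention the $<skill-name> trigger:
  default_prompt: "Use $my-skill to scaffold and validate a skill directory."
```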

3) Write SKILL.md (core)

Use references/skill-skeleton.md as the canonical outline.
Hard requirements:
  • Frontmatter description must include what + when to use.
  • Include guardrails and explicit “do not do” rules when relevant.
  • Include validation loops (what to check after writing/running).
  • Keep SKILL.md lean; move bulk examples/specs to references/.
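For instance, a frontmatter description carrying both halves of the requirement (the name field is an assumption; only description is mandated here):

```yaml
---
name: my-skill
description: >
  Scaffolds and validates skill directories (what). Use when the user asks
  to create, refactor, or tighten a SKILL.md-based skill (when to use).
---
```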

4) Add resources (only if they buy reliability)

Use references/resource-patterns.md:
  • Put deterministic logic in scripts/ with a stable CLI.
  • Put large but needed knowledge in references/ (loaded on demand).
  • Put templates/boilerplate in assets/.
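As an example of the first bullet: a deterministic check that would otherwise be restated in prose on every run. The helper name and its placement in scripts/ are illustrative:

```shell
# Illustrative scripts/ helper with a stable CLI surface: succeeds iff the
# given SKILL.md begins with a YAML frontmatter fence. Name is hypothetical.
check_frontmatter() {
  [ -f "$1" ] && head -n 1 "$1" | grep -qx -- '---'
}
```

Callers invoke it the same way every time, e.g. `check_frontmatter my-skill/SKILL.md`, so the result is repeatable rather than re-derived.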

5) Validate (hard gate)

Run both linters — both must pass before proceeding.
skillcheck (project rules):
  • Inside this repo with packages/skillcheck/dist/ present: node packages/skillcheck/bin/skillcheck.js <skill-dir>
  • Inside this repo without dist/: cd packages/skillcheck && npm install && npm run build, then run the above.
  • Outside this repo: npx skillcheck <skill-dir>
agnix (specification rules): npx agnix <skill-dir>
Fix all reported errors before proceeding. Each linter collects every violation in a single run.
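The three skillcheck cases plus the agnix pass can be sketched as one wrapper (the function name is invented; the commands are the ones listed above):

```shell
# Pick the right skillcheck invocation for the current context, then run
# agnix; the function succeeds only if both linters pass.
run_linters() {
  skill_dir="$1"
  if [ -d packages/skillcheck/dist ]; then
    node packages/skillcheck/bin/skillcheck.js "$skill_dir"
  elif [ -d packages/skillcheck ]; then
    (cd packages/skillcheck && npm install && npm run build) &&
      node packages/skillcheck/bin/skillcheck.js "$skill_dir"
  else
    npx skillcheck "$skill_dir"
  fi && npx agnix "$skill_dir"
}
```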

6) Quality Gate

Two-phase review after validation. Target: Quality Bar (score >= 4.5, no P1 findings).

Phase 1: Self-critic review

Grade the skill against the $reviewing-skills rubric (reviewing-skills/references/skills-rubric.md). Re-read SKILL.md as if encountering it for the first time and score each rubric dimension (spec compliance, trigger precision, workflow quality, token efficiency, safety, robustness, portability).
Check for:
  • Vague or missing "when to use" triggers
  • Missing guardrails for destructive/network actions
  • Bloated prose that should be bullets or moved to references/
  • Workflow steps that leave key decisions ambiguous
Actions:
  • Fixable without re-analysis -> fix inline.
  • Requires re-analysis or user input -> flag and continue.
  • If score < 4.5 or P1 findings remain, fix before proceeding to Phase 2.

Phase 2: Fresh-context subagent review

Run a fresh-context critic pass using $reviewing-skills:
  • If your environment supports subagents, spawn a fresh-context subagent and give it the skill path.
  • Otherwise, invoke $reviewing-skills directly and provide the path.
  • If $reviewing-skills is not available, self-review against the 7 rubric dimensions and note the gap in the deliverable.
  • If the skill is git-tracked and you changed it, require the critic to cite the relevant diff hunk or commit short-hash for any change-driven P1/P2 findings.
  • Apply P1 + P2 fixes (P3 last).
  • Re-run validation.
  • Repeat up to 3 loops; stop early when the Quality Bar is met or two consecutive iterations show no score improvement (plateau).
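The stop rule in the last bullet can be made explicit; a sketch that compares one prev/curr score pair (the real plateau test spans two consecutive iterations), with the P1 count supplied by the critic:

```shell
# Stop iterating when the Quality Bar is met (score >= 4.5, zero P1
# findings) or the score has not improved since the previous loop.
should_stop() {
  prev="$1"; curr="$2"; p1="$3"
  if awk -v c="$curr" -v p="$p1" 'BEGIN { exit !(c >= 4.5 && p == 0) }'; then
    return 0   # bar met
  fi
  awk -v a="$prev" -v b="$curr" 'BEGIN { exit !(b <= a) }'   # plateau
}
```

awk handles the floating-point comparison that plain `[` cannot; both awk programs exit 0 when their condition holds, so the function's status is the stop decision.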

7) Finalize

Deliver:
  • The final skill folder path(s)
  • Any suggested follow-on skills (from the split proposal)
  • A short note explaining why the skill will trigger correctly (tie to description “when to use”)

Edge Cases

  • User provides no skill name or path: ask before proceeding; do not guess.
  • Target directory already has a SKILL.md: enter update mode; do not overwrite without confirmation.
  • Linters not available (no Node.js / npx): warn the user; skip validation but note it was skipped in the deliverable.

Output Rules

  • No placeholders (TODO, TBD) in the final skill.
  • Avoid deep reference chains: SKILL.md links directly to every resource it expects to be read.
  • Prefer minimal, directive prose over explanations of common concepts.