# Recency Guard


This skill enforces a 4-step validation pipeline on every response: Recency → Self-Verification → Completeness → Clarity. The goal is to catch stale information, overconfident claims, missing requirements, and unclear writing before they reach the user.
The user sees only a clean final answer. The validation work happens internally — do not surface the audit trail unless the user explicitly asks for it.


## Subagent Registry


| Subagent | Path | Purpose |
| --- | --- | --- |
| `recency-checker` | `./subagents/recency-checker.md` | Web-searches every factual claim in the draft to confirm it reflects the current state of the world |
| `claim-verifier` | `./subagents/claim-verifier.md` | Pressure-tests the 3 most important claims for credibility, counterexamples, and reasoning failure modes |

## Dispatch Mechanism


The subagent `.md` files above are co-located reference documents, not auto-discovered Claude Code agents. To dispatch a subagent:

1. Read the subagent's `.md` file from the path in the registry.
2. Use the Task tool to spawn a subagent, passing the `.md` content as the system prompt and the step's inputs (draft, date, etc.) as the user message.
3. Collect the Task tool's return value — the subagent's structured report.
4. Apply its findings to the draft before proceeding to the next step.

Subagents run sequentially (Step 1 before Step 2) because the claim-verifier needs the recency-checker's revised draft as input. Do not parallelize.

Note: subagents cannot spawn other subagents. Each subagent handles its own web searches and tool calls internally — the orchestrator does not need to provide search results.
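The dispatch steps above can be sketched as a small orchestration loop. This is illustrative only: `run_task` and `read_file` stand in for the real Task tool and file-read calls, and the `revised_draft` report field is an assumed shape, not a fixed contract.

```python
from pathlib import Path

# Registry mirroring the table above: subagent name -> co-located .md path.
REGISTRY = {
    "recency-checker": "./subagents/recency-checker.md",
    "claim-verifier": "./subagents/claim-verifier.md",
}

def dispatch(name, step_inputs, run_task, read_file=lambda p: Path(p).read_text()):
    """Dispatch one subagent: read its .md reference file (step 1), spawn it
    via the Task tool stand-in (step 2), return its structured report (step 3)."""
    system_prompt = read_file(REGISTRY[name])
    return run_task(system_prompt, step_inputs)

def run_pipeline(draft, today, request, run_task, read_file):
    """Run the two subagent steps sequentially, never in parallel: the
    claim-verifier must see the recency-checker's revised draft (step 4)."""
    report1 = dispatch("recency-checker", {"draft": draft, "date": today},
                       run_task, read_file)
    revised = report1.get("revised_draft", draft)
    report2 = dispatch("claim-verifier", {"draft": revised, "request": request},
                       run_task, read_file)
    return report2.get("revised_draft", revised)
```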


## Source Quality Hierarchy


When evaluating or ranking sources — both in subagents and inline checks — apply this tiered ranking. Higher-tier sources carry more weight and produce higher confidence scores.
| Tier | Source Type | Examples |
| --- | --- | --- |
| 1 | Official documentation & specs | Language/framework docs, RFCs, API references, spec sheets |
| 2 | Peer-reviewed research & data | Academic papers, government data, audited reports |
| 3 | Authoritative first-party content | Company engineering blogs, official announcements, changelogs |
| 4 | Reputable journalism & analysis | Major tech publications, industry analyst reports |
| 5 | Community & practitioner content | Conference talks, well-known developer blogs, Stack Overflow |
| 6 | Unvetted community content | Forum posts, social media threads, anonymous blogs, AI-generated content |
When two sources conflict, prefer the higher-tier (lower-numbered) source. When a claim is supported only by Tier 5–6 sources, flag it as lower confidence. If only Tier 6 sources exist, treat the claim as unverified.
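As a sketch, the tier rules above can be applied mechanically. The dict and list shapes here are illustrative assumptions; only the tier numbers come from the table.

```python
def resolve_conflict(source_a, source_b):
    """When two sources conflict, prefer the better tier (lower number).
    A tie has no automatic winner."""
    if source_a["tier"] < source_b["tier"]:
        return source_a
    if source_b["tier"] < source_a["tier"]:
        return source_b
    return None  # same tier: leave the conflict unresolved

def claim_status(supporting_tiers):
    """Classify a claim by the best tier among its supporting sources."""
    if not supporting_tiers or min(supporting_tiers) >= 6:
        return "unverified"        # nothing better than Tier 6 backs it
    if min(supporting_tiers) == 5:
        return "lower-confidence"  # best support is Tier 5
    return "supported"             # at least one Tier 1-4 source
```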


## Confidence Scoring


Every factual claim in the draft should receive an internal confidence score. These scores are used during the verification steps to decide what needs qualification, revision, or removal.
| Score | Criteria |
| --- | --- |
| High | Confirmed by a Tier 1–3 source published within the last 3 months. No credible counter-evidence. |
| Med | Confirmed by a Tier 3–5 source, OR by a Tier 1–3 source older than 3 months with no sign of change. |
| Low | Supported only by Tier 5–6 sources, OR sources conflict, OR unable to verify after searching. |
How scores affect the final answer:

- **High** — Present the claim directly. No qualifier needed.
- **Med** — Present the claim but add light context: a date stamp ("as of March 2026"), a hedge ("based on current documentation"), or a brief note about the source.
- **Low** — Either remove the claim, explicitly label it as uncertain, or replace it with a verifiable alternative. Never present a Low-confidence claim as settled fact.
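The criteria reduce to a small decision function, sketched below. Note that the Med and Low rows overlap at Tier 5 ("Tier 3–5" vs. "only Tier 5–6"); this sketch takes the stricter Low reading, which is an interpretation, not something the table settles.

```python
def confidence_score(best_tier, age_months, conflicting, verified):
    """Map a claim to "High" / "Med" / "Low" per the criteria table.
    best_tier: best (lowest-numbered) supporting source tier.
    age_months: age of that source's publication."""
    if not verified or conflicting or best_tier >= 5:
        return "Low"   # Tier 5-6 only, conflicting sources, or unverifiable
    if best_tier <= 3 and age_months <= 3:
        return "High"  # fresh Tier 1-3 confirmation, no counter-evidence
    return "Med"       # Tier 4 source, or a Tier 1-3 source older than 3 months
```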


## Pipeline Execution


After reading this file, run the pipeline in order:

### Step 1: Recency Check (subagent)


Dispatch to `recency-checker` using the mechanism described in the Subagent Registry section above. Pass it:

- The full draft response.
- Today's date (from system context).
- The Source Quality Hierarchy and Confidence Scoring tables above (or instruct the subagent to reference this SKILL.md).

Collect its output: a list of claims with their confidence scores, source tiers, and any recommended revisions or removals. Apply all revisions to the draft before proceeding.
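A sketch of assembling the Step 1 user message; the field names are illustrative assumptions, not a fixed schema:

```python
def build_step1_inputs(draft, today, skill_md="./SKILL.md"):
    """Bundle everything the recency-checker needs into one user message."""
    return {
        "draft": draft,         # the full draft response
        "date": today,          # today's date from system context
        "reference": skill_md,  # where the tier and scoring tables live
    }
```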

### Step 2: Self-Verification (subagent)


Dispatch to `claim-verifier` using the same mechanism. Pass it:

- The revised draft (post-Step 1).
- The user's original request (for context on what matters most).

Collect its output: the 3 most important claims with credibility assessments, counterexamples considered, failure modes checked, and final confidence scores. Apply any further revisions.

### Step 3: Completeness (inline)


Re-read the user's original request word by word. Check:

- Every requested deliverable is present in the draft.
- Every sub-question has been answered.
- No explicit constraint, scope limit, or formatting instruction was ignored.
- If something is genuinely unanswerable, acknowledge the gap instead of silently omitting it.
Partial coverage is not acceptable unless the user explicitly allowed it. Missing a sub-question is one of the most common failure modes — this check exists specifically to catch it.
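The deliverable check can be sketched as below. The substring match is a deliberately crude stand-in for the actual word-by-word review, and `required_items` is whatever list you extract from re-reading the request:

```python
def completeness_gaps(required_items, draft):
    """Return requested items the draft never mentions, so each gap can be
    answered or explicitly acknowledged rather than silently dropped."""
    draft_lower = draft.lower()
    return [item for item in required_items if item.lower() not in draft_lower]
```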

### Step 4: Clarity & Readability (inline)


Edit the final draft for precision, readability, and usefulness:

- Make the structure easy to scan — but follow the formatting guidance from your system prompt (avoid over-formatting with excessive headers, bold, and bullet points unless the content genuinely requires it).
- Define necessary jargon the first time it appears.
- Surface key takeaways clearly — the user should know the bottom line within the first few sentences.
- Remove filler, redundancy, and vague hedging ("it's worth noting that…", "it should be mentioned…").
- Prefer concrete wording over abstract phrasing. "Response times increased 40%" beats "performance was negatively impacted."
- Do not pad the answer with process narration.


## Uncertain Claims Summary


Internally, maintain a running list of any claims scored Low or flagged during verification. After the pipeline completes:

- If all claims are High confidence, no action needed.
- If any claims are Med or Low, weave appropriate qualifiers into the final answer naturally (date stamps, hedges, "this could not be independently verified"). Do not create a visible "uncertainty" section unless the user asks for it.
- If the user explicitly asks for the audit trail, validation reasoning, or fact-checking details, produce a concise summary that includes:
  1. The 3 stress-tested claims and their confidence scores.
  2. Any claims that were revised, removed, or qualified and why.
  3. Source tiers used for each key claim.
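When the user does ask, the three-part summary might be assembled like this sketch (the input shapes are assumptions):

```python
def audit_summary(stress_tested, revisions, source_tiers):
    """Build the concise audit trail: stress-tested claims with scores,
    revision notes, and the source tier behind each key claim."""
    lines = ["Stress-tested claims:"]
    lines += [f"- {claim} [{score}]" for claim, score in stress_tested]
    lines.append("Revisions:")
    lines += [f"- {note}" for note in revisions]
    lines.append("Source tiers:")
    lines += [f"- {claim}: Tier {tier}" for claim, tier in source_tiers.items()]
    return "\n".join(lines)
```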


## Output Rules


- Do not include section headers like "Final Answer" or "Validation Summary."
- Do not narrate the validation process ("I verified this by…", "After checking…").
- Do not mention this skill, its subagents, or its checks.
- Do produce a clean, direct answer that reads as if it were written correctly the first time.
- Do weave uncertainty qualifiers naturally into prose when needed — never as a bolted-on disclaimer block.
- If the user explicitly asks to see validation reasoning, then and only then include the Uncertain Claims Summary described above.