wiki-status


Wiki Status — Audit & Delta

You are computing the current state of the wiki: what's been ingested, what's new since last ingest, and what the delta looks like. This helps the user decide whether to append (ingest the delta) or rebuild (archive and reprocess everything).

Before You Start

  1. Read `.env` to get `OBSIDIAN_VAULT_PATH`, `OBSIDIAN_SOURCES_DIR`, `CLAUDE_HISTORY_PATH`, and `CODEX_HISTORY_PATH`
  2. Read `.manifest.json` at the vault root — this is the ingest tracking ledger

The Manifest

The manifest lives at
$OBSIDIAN_VAULT_PATH/.manifest.json
. It tracks every source file that has been ingested. If it doesn't exist, this is a fresh vault with nothing ingested.
```json
{
  "version": 1,
  "last_updated": "2026-04-06T10:30:00Z",
  "sources": {
    "/absolute/path/to/file.md": {
      "ingested_at": "2026-04-06T10:30:00Z",
      "size_bytes": 4523,
      "modified_at": "2026-04-05T08:00:00Z",
      "content_hash": "sha256:...",
      "source_type": "document",
      "project": null,
      "pages_created": ["concepts/transformers.md"],
      "pages_updated": ["entities/vaswani.md"]
    },
    "~/.claude/projects/-Users-name-my-app/abc123.jsonl": {
      "ingested_at": "2026-04-06T11:00:00Z",
      "size_bytes": 128000,
      "modified_at": "2026-04-06T09:00:00Z",
      "content_hash": "sha256:...",
      "source_type": "claude_conversation",
      "project": "my-app",
      "pages_created": ["entities/my-app.md"],
      "pages_updated": ["skills/react-debugging.md"]
    }
  },
  "projects": {
    "my-app": {
      "source_path": "~/.claude/projects/-Users-name-my-app",
      "vault_path": "projects/my-app",
      "last_ingested": "2026-04-06T11:00:00Z",
      "conversations_ingested": 5,
      "conversations_total": 8,
      "memory_files_ingested": 3
    }
  },
  "stats": {
    "total_sources_ingested": 42,
    "total_pages": 87,
    "total_projects": 6,
    "last_full_rebuild": null
  }
}
```
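Loading the manifest might look like this: a minimal Python sketch that treats a missing file as a fresh vault.

```python
import json
from pathlib import Path

def load_manifest(vault_path: str) -> dict:
    """Read .manifest.json at the vault root; a missing file means a fresh vault."""
    manifest_file = Path(vault_path) / ".manifest.json"
    if not manifest_file.exists():
        # Fresh vault: nothing has been ingested yet
        return {"version": 1, "sources": {}, "projects": {}, "stats": {}}
    return json.loads(manifest_file.read_text())
```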

Step 1: Scan Current Sources

Build an inventory of everything available to ingest right now:

Documents (from `OBSIDIAN_SOURCES_DIR`)

```
Glob each directory in OBSIDIAN_SOURCES_DIR for all text files
Record: path, size, modification time
```

Claude History (from `CLAUDE_HISTORY_PATH`)

```
Glob: ~/.claude/projects/*/            → project directories
Glob: ~/.claude/projects/*/*.jsonl     → conversation files
Glob: ~/.claude/projects/*/memory/*.md → memory files
Record: path, size, modification time, parent project
```

Codex History (from `CODEX_HISTORY_PATH`)

```
Glob: ~/.codex/session_index.jsonl                  → session inventory index
Glob: ~/.codex/sessions/**/rollout-*.jsonl          → session rollout transcripts
Glob: ~/.codex/history.jsonl                        → optional local history log
Glob: ~/.codex/archived_sessions/**/rollout-*.jsonl → archived rollouts (if user wants archive coverage)
Record: path, size, modification time, inferred project from cwd when available
```
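The scan itself can be sketched with stdlib globbing. This is a minimal sketch; the pattern map passed in is illustrative, not a fixed schema.

```python
import glob
import os

def scan_sources(patterns: dict[str, str]) -> list[dict]:
    """Build an inventory of candidate sources: path, size, mtime, source type.

    `patterns` maps a source_type label to a glob pattern,
    e.g. {"claude_conversation": "~/.claude/projects/*/*.jsonl"}.
    """
    inventory = []
    for source_type, pattern in patterns.items():
        for path in glob.glob(os.path.expanduser(pattern), recursive=True):
            if not os.path.isfile(path):
                continue  # skip the project directories themselves
            stat = os.stat(path)
            inventory.append({
                "path": path,
                "source_type": source_type,
                "size_bytes": stat.st_size,
                "modified_at": stat.st_mtime,
            })
    return inventory
```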

Any other sources the user has pointed at previously

Check the manifest for source paths outside the standard directories.

Step 2: Compute the Delta

Compare current sources against the manifest. Classify each source file:

| Status | Meaning | Action needed |
| --- | --- | --- |
| New | File exists on disk, not in manifest | Needs ingesting |
| Modified | File in manifest, hash differs from `content_hash` | Needs re-ingesting |
| Touched | File in manifest, mtime newer but hash unchanged | Skip — content identical, no re-ingest needed |
| Unchanged | File in manifest, mtime and hash both match | Nothing to do |
| Deleted | In manifest, but file no longer exists on disk | Note it — wiki pages may be stale |

When a manifest entry has no `content_hash` (older entry), fall back to mtime comparison only.

For Claude history specifically, also compute:
  • New projects (directories in `~/.claude/projects/` not in manifest)
  • New conversations within existing projects
  • Updated memory files

For Codex history specifically, also compute:
  • New rollout files under `sessions/**`
  • Updated `session_index.jsonl` entries (session title/freshness changes)
  • Archived rollout delta only when archive coverage is requested

Step 3: Report the Status

Present a clear summary:

Wiki Status

Overview

  • Total wiki pages: 87 across 6 categories
  • Total sources ingested: 42
  • Projects tracked: 6
  • Last ingest: 2026-04-06T11:00:00Z

Delta (what's changed since last ingest)

New sources (never ingested): 12

| Source | Type | Size |
| --- | --- | --- |
| ~/Documents/research/new-paper.pdf | document | 2.1 MB |
| ~/.claude/projects/-Users-.../session-xyz.jsonl | claude_conversation | 340 KB |
| ~/.codex/sessions/2026/04/12/rollout-...jsonl | codex_rollout | 220 KB |
| ... | | |

Modified sources (need re-ingesting): 3

| Source | Last ingested | Last modified | Delta |
| --- | --- | --- | --- |
| ~/notes/architecture.md | 2026-04-01 | 2026-04-05 | 4 days newer |
| ... | | | |

New projects (not yet in wiki): 2

  • tractorex (3 conversations, 2 memory files)
  • papertech (1 conversation, 0 memory files)

Deleted sources (ingested but gone): 0

Summary

  • Ready to ingest: 12 new + 3 modified = 15 sources
  • Up to date: 27 sources unchanged
  • Recommendation: Append (delta is small relative to total)

Step 4: Recommend Action

Based on the delta, recommend one of:

| Situation | Recommendation |
| --- | --- |
| Delta is small (<20% of total) | Append — just ingest the new/modified sources |
| Delta is large (>50% of total) | Rebuild — archive and reprocess everything |
| Many deleted sources | Lint first — check for stale pages, then decide |
| First time / empty vault | Full ingest — process everything |
| User just wants to see status | No action — just report |
Tell the user:
  • "You have X new sources and Y modified sources. I'd recommend [append/rebuild]."
  • "Want me to [ingest the delta / rebuild from scratch / just look at a specific project]?"
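The threshold logic above can be sketched as follows. The 20–50% middle ground defaults to append here, which is an assumption (the table leaves it open), and "many deleted" is approximated as deletions outnumbering new plus modified.

```python
def recommend(new: int, modified: int, deleted: int, total_ingested: int) -> str:
    """Map the computed delta onto the recommendation table."""
    if total_ingested == 0:
        return "full ingest"  # first time / empty vault
    if deleted > 0 and deleted >= new + modified:
        return "lint first"   # many deletions: check for stale pages, then decide
    ratio = (new + modified) / total_ingested
    if ratio > 0.5:
        return "rebuild"      # delta is large relative to total
    return "append"           # small or middling delta: just ingest it
```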

Insights Mode

Triggered when the user asks something like "wiki insights", "what's central in my wiki", "show me the hubs", "cross-domain bridges", "what pages are most important", or "wiki structure". This mode is additive — it doesn't replace the delta report, it analyzes the shape of the wiki itself.
Where the delta report tells the user what's pending, insights mode tells them what they've already built and where the interesting structure lives. It complements `wiki-lint` (which finds problems) by surfacing interesting structure.

What to compute

First, build the wikilink graph. Glob all `.md` pages, extract every `[[wikilink]]`, and build:
  • `incoming[page]` = count of other pages that link to this page
  • `outgoing[page]` = count of pages this page links out to
  • `tags[page]` = set of tags from frontmatter
  • `category[page]` = directory prefix (concepts/, entities/, skills/, etc.)
You'll reuse this graph across all sections below.
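A minimal sketch of the graph build. It counts only `[[wikilink]]` targets, stripping `|alias` and `#anchor` suffixes; the tags and category maps are omitted for brevity.

```python
import re
from collections import defaultdict
from pathlib import Path

# Capture the link target up to any "|alias" or "#anchor" suffix
WIKILINK = re.compile(r"\[\[([^\]|#]+)")

def build_graph(vault: str) -> tuple[dict, dict]:
    """Count incoming/outgoing wikilinks for every .md page in the vault."""
    incoming, outgoing = defaultdict(int), defaultdict(int)
    for page in Path(vault).rglob("*.md"):
        source = str(page.relative_to(vault).with_suffix(""))
        for target in WIKILINK.findall(page.read_text()):
            outgoing[source] += 1
            incoming[target.strip()] += 1
    return incoming, outgoing
```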

  1. Anchor pages (top hubs). Pages with the most incoming links — the load-bearing concepts.
    • Rank all pages by `incoming` count, take top 10
    • For each, note both incoming and outgoing counts: pages with high incoming and high outgoing are connector hubs (most valuable)
    • Pages with high incoming but zero outgoing are sink hubs — flag as cross-linker candidates
  2. Bridge pages. Pages that connect otherwise-disconnected tag clusters — removing them would partition the graph. These are often more structurally important than raw hub count suggests.
    • For each page P, find pairs of pages (A, B) where:
      • A links to P, B is linked from P (or vice versa)
      • A and B share no tags with each other
      • P is the only path between A's tag cluster and B's tag cluster within 2 hops
    • Rank by how many cross-cluster pairs P bridges; show top 5
    • Label each: "P bridges [tag-cluster-A] ↔ [tag-cluster-B]"
  3. Tag cluster cohesion. For each tag with ≥ 5 pages, score how tightly the pages within it are interconnected:
    • `n` = number of pages sharing this tag
    • `actual_links` = number of wikilinks between any two pages in this tag group
    • `cohesion = actual_links / (n × (n−1) / 2)` — ratio of actual links to maximum possible
    • Fragmented clusters (cohesion < 0.15, n ≥ 5): these pages share a topic but aren't woven together. Surface them as cross-linker targets.
    • Show top 5 tags by cohesion (strongest clusters) and bottom 5 (most fragmented)
  4. Surprising connections. Cross-category wikilinks that are non-obvious — scored by how unexpected they are:
    • Score each wikilink that crosses category boundaries (e.g., concepts/ ↔ entities/, skills/ ↔ synthesis/):
      • +3 if the linking page or claim is marked `^[ambiguous]` (uncertain connection, worth reviewing)
      • +2 if the linking page is marked `^[inferred]` (synthesized, not directly stated)
      • +2 if the categories are in different knowledge layers (e.g., concepts ↔ entities is more surprising than concepts ↔ concepts)
      • +2 if the source page has ≤ 2 total links (peripheral) but the target has ≥ 8 (hub) — unexpected reach from edge to center
    • Show top 5 scored connections with a plain-language reason for each
  5. Orphan-adjacent suggestions. Pages linked from a top-10 hub but with zero outgoing links of their own. Dead-ends in high-traffic areas — prime cross-linker candidates.
  6. Rough clusters. Group anchor pages by dominant tag. (Simple tag intersection — just for orientation.)
  7. Graph delta since last run. Compare the current link graph to the snapshot stored in the previous `_insights.md`:
    • Read the `<!-- GRAPH_SNAPSHOT: ... -->` line at the bottom of the previous `_insights.md` (if it exists) — it contains a compact JSON edge list
    • Compute: new pages added, pages removed, new wikilinks created, wikilinks removed
    • Flag: pages that were isolated last run but now have incoming links ("newly connected: X, Y")
    • Flag: pages that lost incoming links since last run ("link target may have been renamed: A, B")
    • If no previous snapshot exists, skip this section
  8. Suggested questions. Questions this wiki structure is uniquely positioned to answer — or that reveal gaps:
    • From `^[ambiguous]` claims: "Resolve: What is the exact relationship between X and Y?"
    • From bridge pages: "Explore: Why does P connect [cluster-A] to [cluster-B]?"
    • From pages with zero incoming links: "Link: X has no incoming links — what should reference it?"
    • From fragmented clusters (cohesion < 0.15): "Audit: Should tag [T] be split into more focused sub-tags?"
    • Show up to 7, prioritizing AMBIGUOUS first, then bridge nodes, then isolates
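The cohesion formula from item 3 as a runnable sketch, treating links as undirected pairs: each unordered pair of tag members counts once, whichever direction the wikilink points.

```python
from itertools import combinations

def tag_cohesion(pages: list[str], links: set[tuple[str, str]]) -> float:
    """actual_links / (n * (n-1) / 2): actual links over maximum possible pairs."""
    n = len(pages)
    if n < 2:
        return 0.0
    actual = sum(
        1 for a, b in combinations(pages, 2)
        if (a, b) in links or (b, a) in links
    )
    return actual / (n * (n - 1) / 2)
```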


Output

Write the result to `_insights.md` at the vault root. Overwrite freely — it's regenerable. At the very end, embed a compact graph snapshot as an HTML comment so the next run can diff against it.

Wiki Insights — <TIMESTAMP>

Anchor Pages (top 10 hubs)

| Page | Incoming | Outgoing | Note |
| --- | --- | --- | --- |
| [[concepts/transformer-architecture]] | 23 | 8 | connector hub |
| [[entities/andrej-karpathy]] | 17 | 0 | sink hub — cross-linker candidate |

Bridge Pages (top 5)

| Page | Bridges | Cross-cluster pairs |
| --- | --- | --- |
| [[concepts/exponential-growth]] | #ml ↔ #economics | 4 pairs |

Tag Cluster Cohesion

Most cohesive (well-linked)

  • #ml — 12 pages, cohesion 0.41

Most fragmented (cross-linker targets)

  • #systems — 7 pages, cohesion 0.06 ⚠️ run cross-linker on this tag

Surprising Connections (top 5)

  • [[concepts/scaling-laws]] → [[entities/gordon-moore]] — score 5
    • Reason: cross-layer (concepts ↔ entities), marked ^[inferred]
  • ...

Orphan-Adjacent (dead-ends near hubs)

  • [[concepts/foo]] — linked from 3 hubs, 0 outbound links

Rough Clusters

  • #ml — transformer-architecture, attention-mechanism, scaling-laws
  • #systems — distributed-consensus, raft, paxos

Graph Delta Since Last Run

  • +3 new pages, +11 new wikilinks
  • Newly connected: [[concepts/bar]], [[entities/baz]]
  • Lost incoming links: [[references/old-paper]] (target may have been renamed)

Questions Worth Asking

  1. Resolve: What is the exact relationship between scaling-laws and moore's-law? (^[ambiguous] claim)
  2. Explore: Why does exponential-growth bridge #ml and #economics?
  3. Link: references/foo.md has no incoming links — what should reference it?
  4. Audit: Should tag #systems be split? (cohesion 0.06, 7 pages)

<!-- GRAPH_SNAPSHOT: {"nodes":["concepts/foo","entities/bar"],"edges":[["concepts/foo","entities/bar"]]} -->

After writing the file, append to `log.md`:
  • [TIMESTAMP] STATUS_INSIGHTS anchors=10 bridges=N cohesion_checked=T surprising=5 questions=7 delta="+N pages +M links"

When to skip

  • Vaults with fewer than 20 pages — not enough graph structure. Tell the user and skip.
  • After a fresh `wiki-rebuild` — wait until at least one ingest has happened.

Notes

  • If the manifest doesn't exist, report everything as "new" and recommend a full ingest
  • This skill only reads and reports — it doesn't modify anything (except writing `_insights.md` in insights mode, which is regenerable)
  • The actual ingest work is done by the ingest skills (`wiki-ingest`, `claude-history-ingest`, `codex-history-ingest`, `data-ingest`)
  • Those skills are responsible for updating the manifest after they finish