doc-daily-digest


# Daily Digest

Process an Obsidian daily note by classifying raw items, fetching/researching content, creating structured notes, and replacing raw items with wikilinks.

## Prerequisites

| Skill | Required | Purpose |
| --- | --- | --- |
| doc-obsidian | Yes | Vault CRUD via notesmd-cli + search via qmd |
| res-x | For X/Twitter URLs | Fetch tweet content via xAI |
| res-deep | For loose ideas | Multi-round research |

Optional dependency: `scrapling`, a headless browser fallback for sites that block WebFetch (403, empty content, captcha pages). Install with:

```bash
uv tool install 'scrapling[all]'
```

## Workflow

Step 0: Setup → Step 1: Scan & Classify → Step 2: Process → Step 3: Create Notes → Step 4: Update Daily → Step 5: Re-index & Report

### Step 0: Setup

Run all three checks:

1. Vault path:

   ```bash
   VAULT=$(notesmd-cli print-default --path-only)
   ```

2. Read daily note (today or user-specified date). Assign the date first, then use it; a one-line `DATE=... notesmd-cli print "$DATE"` would expand `"$DATE"` before the assignment takes effect:

   ```bash
   DATE=$(date '+%Y-%m-%d')
   notesmd-cli print "$DATE"
   ```

3. xAI key (needed for res-x and res-deep full mode):

   ```bash
   security find-generic-password -s "xai-api" -w ~/Library/Keychains/claude-keys.keychain-db 2>/dev/null && echo "XAI_AVAILABLE=true" || echo "XAI_AVAILABLE=false"
   ```

If the user specifies a date, use that instead of today.

### Step 1: Scan & Classify

Parse the daily note and classify every item. Items live in these sections: `## Notes`, `## Log`, `## Links`.

#### Classification Rules

| Type | Pattern | Action |
| --- | --- | --- |
| Skip | `[[wikilink]]` anywhere in line | Already processed — skip |
| Skip | Section headers (`##`), frontmatter, empty lines, task checkboxes | Structural — skip |
| X tweet | URL matching `https://(x\.com\|twitter\.com)/\w+/status/\d+` | Fetch via res-x |
| X article | URL matching `https://(x\.com\|twitter\.com)/i/article/[\w-]+` | Fetch via res-x |
| GitHub repo | URL matching `https://github\.com/[\w-]+/[\w-]+` | WebFetch repo page |
| Web URL | Any other `https://...` URL | WebFetch page |
| Loose idea | Non-empty text that is not a URL, not a wikilink, not structural | Deep research via res-deep |
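In code, the table above amounts to a few ordered regex checks. The sketch below is illustrative only: the function name, the checkbox test, and the return labels are assumptions, not part of the skill.

```python
import re

# Regexes from the classification table above.
X_TWEET = re.compile(r"https://(x\.com|twitter\.com)/\w+/status/\d+")
X_ARTICLE = re.compile(r"https://(x\.com|twitter\.com)/i/article/[\w-]+")
GITHUB = re.compile(r"https://github\.com/[\w-]+/[\w-]+")
ANY_URL = re.compile(r"https://\S+")


def classify(line: str) -> str:
    """Classify one raw line from the daily note (hypothetical helper)."""
    stripped = line.strip()
    # Structural lines: empty, section headers, task checkboxes.
    if not stripped or stripped.startswith("##") or stripped.startswith("- [ ]"):
        return "skip"
    # A wikilink anywhere in the line means the item is already processed.
    if "[[" in stripped and "]]" in stripped:
        return "skip"
    if X_TWEET.search(stripped):
        return "x-tweet"
    if X_ARTICLE.search(stripped):
        return "x-article"
    if GITHUB.search(stripped):
        return "github"
    if ANY_URL.search(stripped):
        return "web"
    return "idea"  # loose idea: goes to res-deep
```

Order matters: the more specific X and GitHub patterns must be tried before the catch-all URL check.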

#### Present Classification

Before processing, show the user a classification table:

Daily Digest: {DATE}

| # | Section | Type | Item (truncated) | Action |
| --- | --- | --- | --- | --- |
| 1 | Links | X tweet | https://x.com/user/status/123... | res-x fetch |
| 2 | Notes | Loose idea | Train a model to click on... | res-deep |
| 3 | Links | GitHub | https://github.com/org/repo | WebFetch |
| 4 | Log | Skip | [[already-processed]] — ... | skip |

Ask user to confirm or exclude items before proceeding. User may:
- Approve all
- Exclude specific items by number
- Change action for an item (e.g., skip an idea, or upgrade a URL to res-deep)

### Step 2: Process Items

Process approved items. Run independent fetches in parallel where possible.

#### X/Twitter URLs

Requires xAI key (XAI_AVAILABLE=true).

```bash
uv run ~/.claude/skills/res-x/scripts/x_fetch.py fetch "URL1" "URL2" "URL3"
```

The script batches 3 URLs per API call. Extract from results:

- Author handle and display name
- Full tweet text
- Engagement metrics (likes, reposts, replies, views)
- Thread context and quoted tweets if present

If XAI_AVAILABLE=false, report that X URLs require the xAI key and skip them.

#### GitHub URLs

WebFetch: https://github.com/{owner}/{repo}

Prompt: "Extract: repo name, description, star count, language, license, last update date, and a 2-3 sentence summary of what this project does based on the README."

Scrapling fallback: if WebFetch returns 403, empty content, a captcha page, or a blocked response, retry using the auto-escalation protocol from cli-web-scrape:

1. `scrapling extract get "URL" /tmp/scrapling-fallback.md` → Read → validate content
2. If content is thin (JS-only shell, no data) → `scrapling extract fetch "URL" /tmp/scrapling-fallback.md --network-idle --disable-resources` → Read → validate
3. If still blocked → `scrapling extract stealthy-fetch "URL" /tmp/scrapling-fallback.md --solve-cloudflare`
4. All tiers fail → note the failure and move on

#### Web URLs

WebFetch: {URL}

Prompt: "Extract: page title, author if available, publication date if available, and a 3-5 sentence summary of the key content."

Scrapling fallback: if WebFetch returns 403, empty content, a captcha page, or a blocked response, retry using the auto-escalation protocol from cli-web-scrape:

1. `scrapling extract get "URL" /tmp/scrapling-fallback.md` → Read → validate content
2. If content is thin (JS-only shell, no data) → `scrapling extract fetch "URL" /tmp/scrapling-fallback.md --network-idle --disable-resources` → Read → validate
3. If still blocked → `scrapling extract stealthy-fetch "URL" /tmp/scrapling-fallback.md --solve-cloudflare`
4. All tiers fail → note the failure and move on

#### Loose Ideas

Invoke the res-deep skill with the idea text as the query. Use `quick` depth (1 round, 10-15 sources) unless the user requests deeper research. For ideas, the res-deep output becomes the note body directly.

### Step 3: Create Notes

For each processed item, create an Obsidian note.

#### Note Naming

| Type | Naming Pattern | Example |
| --- | --- | --- |
| X tweet | `{topic}-{descriptor}` from content | scrapling-undetectable-web-scraping |
| X article | `{author}-x-article-{date}` | irabukht-x-article-2026-02-23 |
| GitHub repo | `{repo-name}` | scrapling or huggingface-skills-agent-plugins |
| Web page | `{topic}-{descriptor}` from title | kubernetes-practical-learning-path |
| Loose idea | `{concept}-{descriptor}` | agent-sort-through-the-slop |
| Deep research | `{topic}-deep-research` | scrapling-deep-research |

All names: kebab-case, lowercase, no special characters.

Check for existing notes with the same name before creating. If one exists, append `-2` or ask the user.
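A minimal sketch of the naming rules above; both helpers are hypothetical, and in practice you may prefer to ask the user rather than auto-append a numeric suffix.

```python
import re


def to_note_name(title: str) -> str:
    """Kebab-case a title: lowercase, alphanumerics and hyphens only."""
    name = re.sub(r"[^a-z0-9]+", "-", title.lower())
    return name.strip("-")


def dedupe(name: str, existing: set[str]) -> str:
    """Append -2, -3, ... when the name is already taken in the vault."""
    if name not in existing:
        return name
    n = 2
    while f"{name}-{n}" in existing:
        n += 1
    return f"{name}-{n}"
```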

#### Note Structure

For X tweets / web pages / GitHub repos (quick captures):

```bash
notesmd-cli create "NOTE_NAME" --content "---
tags: [TYPE_TAG]
source: SOURCE_URL
author: AUTHOR
date: DATE
---

# TITLE

## Key Points

- Point 1
- Point 2
- Point 3

## Summary

Brief paragraph summarizing the content.

## Source

- Original"
```

Type tags: `tweet` for X, `github` for GitHub, `web` for web pages, `idea` for ideas.

**For deep research (ideas):**

The res-deep skill produces its own structured output. Create the note with that output as body, adding frontmatter:

```bash
notesmd-cli create "NOTE_NAME" --content "---
tags: [idea, research]
date: DATE
---

{res-deep output here}"
```

### Step 4: Update Daily Note

For each processed item, replace the raw text in the daily note with a wikilink.

#### Wikilink Format by Section

`## Links` section (URLs from bookmarks/saves):

- [[note-name]] — @author: summary with key metrics (stars, likes, etc.)

`## Notes` section (ideas and thoughts):

- [[note-name]] — Brief: what the idea/research covers

`## Log` section (activity entries):

- [[note-name]] — Summary of what was captured

#### Edit Procedure

1. Read the daily note: `notesmd-cli print "$DATE"`
2. Resolve vault path: `VAULT=$(notesmd-cli print-default --path-only)`
3. Use the Edit tool to replace each raw item with its wikilink line
4. Replace one item at a time to avoid Edit conflicts
5. Verify the final note by reading it again

#### Rules

- Preserve existing wikilinks — never modify already-processed lines
- Keep section structure intact (## headers, empty lines between items)
- If an item spans multiple lines (e.g., a paragraph idea), replace all lines with one wikilink line
- The wikilink summary should be concise (under 120 chars) but include key metrics when available
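The multi-line rule can be sketched as a plain text transform; in practice the Edit tool performs the replacement, and this hypothetical helper only illustrates the intended behavior.

```python
def replace_item(note_text: str, raw_lines: list[str], wikilink_line: str) -> str:
    """Replace a contiguous raw item (possibly spanning several lines)
    with a single wikilink line, leaving the rest of the note untouched."""
    lines = note_text.splitlines()
    for i in range(len(lines) - len(raw_lines) + 1):
        if lines[i:i + len(raw_lines)] == raw_lines:
            return "\n".join(lines[:i] + [wikilink_line] + lines[i + len(raw_lines):])
    return note_text  # item not found; leave the note unchanged
```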

### Step 5: Re-index & Report

#### Re-index Vault

```bash
qmd update && qmd embed
```

#### Summary Report

Present a summary table:

Digest Complete: {DATE}

| # | Type | Note Created | Status |
| --- | --- | --- | --- |
| 1 | X tweet | [[note-name]] | Created |
| 2 | Loose idea | [[note-name]] | Created (res-deep quick) |
| 3 | GitHub | [[note-name]] | Created |
| 4 | Web URL | | Failed (403) |

Notes created: 3
Items skipped: 2 (already processed)
Items failed: 1
Vault re-indexed: Yes

## Modes

### Full (default)

Process all unprocessed items in the daily note.
"Process my daily note" / "Daily digest"

### Selective

Process only specific items or sections.
"Process only the links in today's daily note" "Digest just the X URLs"

### Date Override

Process a specific date's daily note.
"Process yesterday's daily note" "Digest 2026-02-20"

### Dry Run

Classify and show the table (Step 1) without processing.
"What's unprocessed in my daily note?" "Show me what needs digesting"

## Constraints

DO:

- Always run Step 0 (vault path + daily note + xAI check) first
- Present classification table and wait for user approval before processing
- Process items in parallel where independent (multiple WebFetch calls, multiple X URLs in one batch)
- Check for existing notes before creating to avoid duplicates
- Read the daily note before editing — never guess content
- Resolve vault path dynamically via `notesmd-cli print-default --path-only`

DON'T:

- Process items the user excluded from the classification table
- Modify already-processed wikilink lines
- Hardcode vault paths
- Skip the classification approval step
- Run res-deep at default/deep depth unless user explicitly requests it — use quick for daily digest
- Create notes without frontmatter