# Daily Digest (doc-daily-digest)
Process an Obsidian daily note by classifying raw items, fetching/researching content, creating structured notes, and replacing raw items with wikilinks.
## Prerequisites
| Skill | Required | Purpose |
|---|---|---|
| doc-obsidian | Yes | Vault CRUD via notesmd-cli + search via qmd |
| res-x | For X/Twitter URLs | Fetch tweet content via xAI |
| res-deep | For loose ideas | Multi-round research |
Optional dependency: `scrapling`, a headless browser fallback for sites that block WebFetch (403, empty content, captcha pages). Install with:

```bash
uv tool install 'scrapling[all]'
```

## Workflow
Step 0: Setup → Step 1: Scan & Classify → Step 2: Process → Step 3: Create Notes → Step 4: Update Daily → Step 5: Re-index & Report

### Step 0: Setup
Run all three checks:
```bash
# 1. Vault path
VAULT=$(notesmd-cli print-default --path-only)

# 2. Read daily note (today or user-specified date)
DATE=$(date '+%Y-%m-%d')
notesmd-cli print "$DATE"

# 3. xAI key (needed for res-x and res-deep full mode)
security find-generic-password -s "xai-api" -w ~/Library/Keychains/claude-keys.keychain-db 2>/dev/null && echo "XAI_AVAILABLE=true" || echo "XAI_AVAILABLE=false"
```

If the user specifies a date, use that instead of today.

### Step 1: Scan & Classify
Parse the daily note and classify every item. Items live in these sections: `## Notes`, `## Log`, `## Links`.

#### Classification Rules
| Type | Pattern | Action |
|---|---|---|
| Skip | Line contains a wikilink (`[[...]]`) | Already processed — skip |
| Skip | Section headers (`##` lines) | Structural — skip |
| X tweet | URL matching `x.com/*/status/*` | Fetch via res-x |
| X article | Any other `x.com` URL | Fetch via res-x |
| GitHub repo | URL matching `github.com/{owner}/{repo}` | WebFetch repo page |
| Web URL | Any other `http(s)://` URL | WebFetch page |
| Loose idea | Non-empty text that is not a URL, not a wikilink, not structural | Deep research via res-deep |
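The rules above can be sketched as a small shell classifier. The patterns here are illustrative stand-ins (this document does not specify the skill's exact regexes), and `classify_item` is a hypothetical helper name:

```shell
# Illustrative classifier for one daily-note line.
classify_item() {
  line=$1
  case $line in
    *'[['*']]'*)           echo "skip" ;;      # already a wikilink
    '##'*)                 echo "skip" ;;      # section header
    *x.com/*/status/*)     echo "x-tweet" ;;
    *x.com/*)              echo "x-article" ;;
    *github.com/*/*)       echo "github" ;;
    *http://*|*https://*)  echo "web" ;;
    '')                    echo "skip" ;;      # blank line
    *)                     echo "idea" ;;      # loose non-empty text
  esac
}

classify_item "- https://x.com/user/status/123"   # x-tweet
classify_item "- https://github.com/org/repo"     # github
classify_item "Train a model to click on things"  # idea
```

Note the ordering: the more specific tweet pattern must be tested before the generic `x.com` one, and both before the generic URL case.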
#### Present Classification

Before processing, show the user a classification table:

**Daily Digest: {DATE}**
| # | Section | Type | Item (truncated) | Action |
|---|---|---|---|---|
| 1 | Links | X tweet | https://x.com/user/status/123... | res-x fetch |
| 2 | Notes | Loose idea | Train a model to click on... | res-deep |
| 3 | Links | GitHub | https://github.com/org/repo | WebFetch |
| 4 | Log | Skip | [[already-processed]] — ... | skip |
Ask the user to confirm or exclude items before proceeding. The user may:

- Approve all
- Exclude specific items by number
- Change the action for an item (e.g., skip an idea, or upgrade a URL to res-deep)

### Step 2: Process Items
步骤2:处理条目
Process approved items. Run independent fetches in parallel where possible.
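The parallelism can also happen at the shell level; a minimal sketch using background jobs and `wait`, where `fetch_one` is a hypothetical stand-in for whatever fetch an item needs (WebFetch, res-x, scrapling):

```shell
# Run independent fetches concurrently, one background job per item.
# fetch_one is a placeholder; a real fetch would write its result
# to the given output file.
fetch_one() {
  url=$1; out=$2
  printf 'fetched: %s\n' "$url" > "$out"
}

i=0
for url in 'https://example.com/a' 'https://example.com/b'; do
  i=$((i + 1))
  fetch_one "$url" "/tmp/item-$i.md" &   # run in the background
done
wait   # block until every background fetch has finished
```

Each job writes to its own file, so there is no shared state to race on; `wait` is the synchronization point before Step 3.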
#### X/Twitter URLs

Requires the xAI key (`XAI_AVAILABLE=true`).

```bash
uv run ~/.claude/skills/res-x/scripts/x_fetch.py fetch "URL1" "URL2" "URL3"
```

The script batches 3 URLs per API call. Extract from the results:
- Author handle and display name
- Full tweet text
- Engagement metrics (likes, reposts, replies, views)
- Thread context and quoted tweets if present
If `XAI_AVAILABLE=false`, report that X URLs require the xAI key and skip them.
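When more than three X URLs are queued, the list can be grouped into batches of three before each call; a sketch with `xargs` (assuming one URL per line, and that invoking the script once per batch is acceptable):

```shell
# Group a newline-separated URL list into batches of 3, one batch per line.
# Each printed line is the argument set for one x_fetch.py call.
batch_urls() {
  xargs -n 3 echo
}

printf '%s\n' u1 u2 u3 u4 | batch_urls
# First line: "u1 u2 u3"; second line: "u4"
```

In practice the `echo` would be replaced by the real command, e.g. `... | xargs -n 3 uv run ~/.claude/skills/res-x/scripts/x_fetch.py fetch`.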
#### GitHub URLs
WebFetch: `https://github.com/{owner}/{repo}`

Prompt: "Extract: repo name, description, star count, language, license, last update date, and a 2-3 sentence summary of what this project does based on the README."

**Scrapling fallback:** If WebFetch returns 403, empty content, a captcha page, or a blocked response, retry using the auto-escalation protocol from cli-web-scrape:

- `scrapling extract get "URL" /tmp/scrapling-fallback.md` → Read → validate content
- If content is thin (JS-only shell, no data) → `scrapling extract fetch "URL" /tmp/scrapling-fallback.md --network-idle --disable-resources` → Read → validate
- If still blocked → `scrapling extract stealthy-fetch "URL" /tmp/scrapling-fallback.md --solve-cloudflare`
- All tiers fail → note the failure and move on
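The "validate" steps in the escalation above are left open; one workable heuristic flags blocked or thin output. The marker strings and the 500-byte threshold below are assumptions for illustration, not part of scrapling or cli-web-scrape:

```shell
# Heuristic validation of a fetched page dump: blocked, thin, or ok.
validate_content() {
  file=$1
  if grep -qiE 'captcha|access denied|just a moment' "$file"; then
    echo "blocked"                           # common block-page markers
  elif [ "$(wc -c < "$file")" -lt 500 ]; then
    echo "thin"                              # JS-only shell, little content
  else
    echo "ok"
  fi
}
```

A result of `thin` or `blocked` triggers the next escalation tier; `ok` ends the retry loop.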
#### Web URLs

WebFetch: `{URL}`

Prompt: "Extract: page title, author if available, publication date if available, and a 3-5 sentence summary of the key content."

**Scrapling fallback:** If WebFetch returns 403, empty content, a captcha page, or a blocked response, retry using the auto-escalation protocol from cli-web-scrape:

- `scrapling extract get "URL" /tmp/scrapling-fallback.md` → Read → validate content
- If content is thin (JS-only shell, no data) → `scrapling extract fetch "URL" /tmp/scrapling-fallback.md --network-idle --disable-resources` → Read → validate
- If still blocked → `scrapling extract stealthy-fetch "URL" /tmp/scrapling-fallback.md --solve-cloudflare`
- All tiers fail → note the failure and move on
#### Loose Ideas

Invoke the res-deep skill with the idea text as the query. Use `quick` depth (1 round, 10-15 sources) unless the user requests deeper research.

For ideas, the res-deep output becomes the note body directly.
### Step 3: Create Notes
For each processed item, create an Obsidian note.
#### Note Naming
| Type | Naming Pattern | Example |
|---|---|---|
| X tweet | Derived from tweet content | |
| X article | | |
| GitHub repo | | |
| Web page | Derived from page title | |
| Loose idea | | |
| Deep research | | |

All names: kebab-case, lowercase, no special characters.

Check for existing notes with the same name before creating. If one exists, append `-2` or ask the user.
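The naming rules (kebab-case, lowercase, no special characters, `-2` on collision) can be sketched as shell helpers. The plain file test here stands in for a real vault lookup via `notesmd-cli`:

```shell
# Title -> kebab-case note name: lowercase, alphanumerics kept,
# every other run of characters collapsed to a single hyphen.
slugify() {
  printf '%s' "$1" |
    tr '[:upper:]' '[:lower:]' |
    tr -cs 'a-z0-9' '-' |
    sed 's/^-//; s/-$//'
}

# Append -2 when a note with this name already exists in the vault dir.
unique_name() {
  name=$1; vault=$2
  [ -e "$vault/$name.md" ] && name="$name-2"
  printf '%s\n' "$name"
}

slugify "Train a Model: Click & Learn!"
# -> train-a-model-click-learn
```

A full implementation would also loop the suffix (`-2`, `-3`, …) or fall back to asking the user, per the rule above.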
#### Note Structure
**For X tweets / web pages / GitHub repos (quick captures):**

```bash
notesmd-cli create "NOTE_NAME" --content "---
tags: [TYPE_TAG]
source: SOURCE_URL
author: AUTHOR
date: DATE
---

# TITLE

## Key Points

- Point 1
- Point 2
- Point 3

## Summary

Brief paragraph summarizing the content.

## Source

- Original"
```

Type tags: `tweet` for X, `github` for GitHub, `web` for web pages, `idea` for ideas.

**For deep research (ideas):**

The res-deep skill produces its own structured output. Create the note with that output as the body, adding frontmatter:

```bash
notesmd-cli create "NOTE_NAME" --content "---
tags: [idea, research]
date: DATE
---
{res-deep output here}"
```
### Step 4: Update Daily Note
For each processed item, replace the raw text in the daily note with a wikilink.
#### Wikilink Format by Section
**`## Links` section (URLs from bookmarks/saves):**

- [[note-name]] — @author: summary with key metrics (stars, likes, etc.)

**`## Notes` section (ideas and thoughts):**

- [[note-name]] — Brief: what the idea/research covers

**`## Log` section (activity entries):**

- [[note-name]] — Summary of what was captured

#### Edit Procedure
- Read the daily note: `notesmd-cli print "$DATE"`
- Resolve the vault path: `VAULT=$(notesmd-cli print-default --path-only)`
- Use the Edit tool to replace each raw item with its wikilink line
- Replace one item at a time to avoid Edit conflicts
- Verify the final note by reading it again
#### Rules
- Preserve existing wikilinks — never modify already-processed lines
- Keep section structure intact (## headers, empty lines between items)
- If an item spans multiple lines (e.g., a paragraph idea), replace all lines with one wikilink line
- The wikilink summary should be concise (under 120 chars) but include key metrics when available
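The 120-character cap on wikilink summaries can be enforced with POSIX `printf` precision; the trailing `...` convention below is an assumption:

```shell
# Cap a wikilink summary at 120 characters, marking truncation with "...".
truncate_summary() {
  s=$1
  if [ "${#s}" -gt 120 ]; then
    printf '%.117s...\n' "$s"   # 117 chars + "..." = 120
  else
    printf '%s\n' "$s"
  fi
}
```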
### Step 5: Re-index & Report
#### Re-index Vault
```bash
qmd update && qmd embed
```

#### Summary Report
Present a summary table:

**Digest Complete: {DATE}**
| # | Type | Note Created | Status |
|---|---|---|---|
| 1 | X tweet | [[note-name]] | Created |
| 2 | Loose idea | [[note-name]] | Created (res-deep quick) |
| 3 | GitHub | [[note-name]] | Created |
| 4 | Web URL | — | Failed (403) |
Notes created: 3
Items skipped: 2 (already processed)
Items failed: 1
Vault re-indexed: Yes
## Modes
### Full (default)
Process all unprocessed items in the daily note.
"Process my daily note" / "Daily digest"
### Selective
Process only specific items or sections.
"Process only the links in today's daily note" "Digest just the X URLs"
### Date Override
Process a specific date's daily note.
"Process yesterday's daily note" "Digest 2026-02-20"
### Dry Run
Classify and show the table (Step 1) without processing.
"What's unprocessed in my daily note?" "Show me what needs digesting"
## Constraints
DO:
- Always run Step 0 (vault path + daily note + xAI check) first
- Present classification table and wait for user approval before processing
- Process items in parallel where independent (multiple WebFetch calls, multiple X URLs in one batch)
- Check for existing notes before creating to avoid duplicates
- Read the daily note before editing — never guess content
- Resolve the vault path dynamically via `notesmd-cli print-default --path-only`
DON'T:
- Process items the user excluded from the classification table
- Modify already-processed wikilink lines
- Hardcode vault paths
- Skip the classification approval step
- Run res-deep at default/deep depth unless user explicitly requests it — use quick for daily digest
- Create notes without frontmatter