Process Obsidian daily notes: classify raw URLs and loose ideas, fetch content (X tweets, GitHub repos, web pages), run deep research on ideas, create structured vault notes, replace raw items with wikilinks. Orchestrates doc-obsidian, res-x, and res-deep skills. Use when: processing daily note links, digesting saved URLs into notes, turning ideas into research, daily note cleanup. Triggers: daily digest, process daily, daily links, triage daily, digest daily note.
```shell
npx skill4agent add molechowski/claude-skills doc-daily-digest
```

| Skill | Required | Purpose |
|---|---|---|
| doc-obsidian | Yes | Vault CRUD via notesmd-cli + search via qmd |
| res-x | For X/Twitter URLs | Fetch tweet content via xAI |
| res-deep | For loose ideas | Multi-round research |
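Before running, it can help to confirm the CLI tools these skills rely on are actually on PATH. A minimal preflight sketch (the function name is hypothetical, not part of any skill):

```shell
#!/bin/sh
# Preflight check: print which of the given CLI tools are missing.
# Hypothetical sketch; pass the tools the workflow needs.
missing_tools() {
  missing=""
  for tool in "$@"; do
    command -v "$tool" >/dev/null 2>&1 || missing="$missing $tool"
  done
  # Emit missing tool names, trimmed of the leading space (empty if all present).
  echo "${missing# }"
}
```

Usage: `missing_tools notesmd-cli qmd uv` prints nothing when everything is installed.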
Optional fallback fetcher: `scrapling`, installed via:

```shell
uv tool install 'scrapling[all]'
```

Workflow: Step 0: Setup → Step 1: Scan & Classify → Step 2: Process → Step 3: Create Notes → Step 4: Update Daily → Step 5: Re-index & Report

**Step 0: Setup.**

```shell
# 1. Vault path
VAULT=$(notesmd-cli print-default --path-only)

# 2. Read daily note (today or user-specified date)
DATE=$(date '+%Y-%m-%d')
notesmd-cli print "$DATE"

# 3. xAI key (needed for res-x and res-deep full mode)
security find-generic-password -s "xai-api" -w ~/Library/Keychains/claude-keys.keychain-db 2>/dev/null \
  && echo "XAI_AVAILABLE=true" || echo "XAI_AVAILABLE=false"
```

**Step 1: Scan & Classify.** Scan the daily note's `## Notes`, `## Log`, and `## Links` sections and classify each item:

| Type | Pattern | Action |
|---|---|---|
| Skip | Wikilink (`[[...]]`) | Already processed — skip |
| Skip | Section headers (`##`) | Structural — skip |
| X tweet | URL matching `x.com/*/status/*` or `twitter.com/*/status/*` | Fetch via res-x |
| X article | URL matching an `x.com` article path | Fetch via res-x |
| GitHub repo | URL matching `github.com/{owner}/{repo}` | WebFetch repo page |
| Web URL | Any other `http(s)` URL | WebFetch page |
| Loose idea | Non-empty text that is not a URL, not a wikilink, not structural | Deep research via res-deep |
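The classification rules above can be sketched as a single shell function; this is an illustrative approximation (function and type names are hypothetical), not the skill's actual implementation:

```shell
#!/bin/sh
# Classify one daily-note line into a digest type,
# mirroring the classification table above.
classify_item() {
  item=$1
  case "$item" in
    "" ) echo skip ;;                                   # empty line
    "##"* ) echo skip ;;                                # section header: structural
    *"[["*"]]"* ) echo skip ;;                          # wikilink: already processed
    *x.com/*/status/* | *twitter.com/*/status/* )
      echo x-tweet ;;                                   # tweet URL -> res-x
    *github.com/*/* ) echo github ;;                    # repo URL -> WebFetch repo page
    *http://* | *https://* ) echo web ;;                # any other URL -> WebFetch
    * ) echo idea ;;                                    # loose idea -> res-deep
  esac
}
```

Note the ordering matters: `case` takes the first matching pattern, so the more specific URL shapes must precede the catch-all `http(s)` branch.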
## Daily Digest: {DATE}
| # | Section | Type | Item (truncated) | Action |
|---|---------|------|-------------------|--------|
| 1 | Links | X tweet | https://x.com/user/status/123... | res-x fetch |
| 2 | Notes | Loose idea | Train a model to click on... | res-deep |
| 3 | Links | GitHub | https://github.com/org/repo | WebFetch |
| 4 | Log | Skip | [[already-processed]] — ... | skip |

**Step 2: Process.** For X tweets and articles, batch-fetch via res-x:

```shell
uv run ~/.claude/skills/res-x/scripts/x_fetch.py fetch "URL1" "URL2" "URL3"
```

For GitHub repos:

```
WebFetch: https://github.com/{owner}/{repo}
Prompt: "Extract: repo name, description, star count, language, license, last update date, and a 2-3 sentence summary of what this project does based on the README."
```

If WebFetch fails, fall back to scrapling, escalating as needed:

```shell
scrapling extract get "URL" /tmp/scrapling-fallback.md
scrapling extract fetch "URL" /tmp/scrapling-fallback.md --network-idle --disable-resources
scrapling extract stealthy-fetch "URL" /tmp/scrapling-fallback.md --solve-cloudflare
```

For other web URLs:

```
WebFetch: {URL}
Prompt: "Extract: page title, author if available, publication date if available, and a 3-5 sentence summary of the key content."
```

The same scrapling fallback chain applies:

```shell
scrapling extract get "URL" /tmp/scrapling-fallback.md
scrapling extract fetch "URL" /tmp/scrapling-fallback.md --network-idle --disable-resources
scrapling extract stealthy-fetch "URL" /tmp/scrapling-fallback.md --solve-cloudflare
```

For loose ideas, run res-deep (quick mode by default).

**Step 3: Create Notes.** Note naming:

| Type | Naming Pattern | Example |
|---|---|---|
| X tweet | | |
| X article | | |
| GitHub repo | | |
| Web page | | |
| Loose idea | | |
| Deep research | | |
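Whatever the exact naming pattern per type, turning a fetched title into a safe note name needs sanitizing. A minimal sketch with hypothetical rules (lowercase, hyphenate, strip punctuation), not the skill's actual naming logic:

```shell
#!/bin/sh
# Turn an arbitrary title into a lowercase, hyphen-separated note name.
# Hypothetical sanitization rules for illustration only.
slugify() {
  printf '%s' "$1" \
    | tr '[:upper:]' '[:lower:]' \
    | tr -cs 'a-z0-9' '-' \
    | sed 's/^-//; s/-$//'
}
```

For example, `slugify 'Train a Model!'` yields `train-a-model`.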
Create each note with this template:

```shell
notesmd-cli create "NOTE_NAME" --content "---
tags: [TYPE_TAG]
source: SOURCE_URL
author: AUTHOR
date: DATE
---
# TITLE
## Key Points
- Point 1
- Point 2
- Point 3
## Summary
Brief paragraph summarizing the content.
## Source
- [Original](SOURCE_URL)"
```

`TYPE_TAG` is one of `tweet`, `github`, `web`, or `idea`.

For loose ideas, store the res-deep output directly:

```shell
notesmd-cli create "NOTE_NAME" --content "---
tags: [idea, research]
date: DATE
---
{res-deep output here}"
```

**Step 4: Update Daily.** Replace each raw item in the daily note with its wikilink:

- Tweets: `- [[note-name]] — @author: summary with key metrics (stars, likes, etc.)`
- Ideas: `- [[note-name]] — Brief: what the idea/research covers`
- Web pages and repos: `- [[note-name]] — Summary of what was captured`

Verify with `notesmd-cli print "$DATE"`.

**Step 5: Re-index & Report.**

```shell
VAULT=$(notesmd-cli print-default --path-only)
qmd update && qmd embed
```

## Digest Complete: {DATE}
| # | Type | Note Created | Status |
|---|------|-------------|--------|
| 1 | X tweet | [[note-name]] | Created |
| 2 | Loose idea | [[note-name]] | Created (res-deep quick) |
| 3 | GitHub | [[note-name]] | Created |
| 4 | Web URL | — | Failed (403) |
Notes created: 3
Items skipped: 2 (already processed)
Items failed: 1
Vault re-indexed: Yes

Example invocations:

- "Process my daily note" / "Daily digest"
- "Process only the links in today's daily note" / "Digest just the X URLs"
- "Process yesterday's daily note" / "Digest 2026-02-20"
- "What's unprocessed in my daily note?" / "Show me what needs digesting"
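Since invocations can name an explicit date, the date-resolution step can be sketched as follows (hypothetical helper; relative dates like "yesterday" are omitted because `date -v` vs `date -d` flags differ across platforms):

```shell
#!/bin/sh
# Resolve the digest date: an explicit YYYY-MM-DD argument wins,
# otherwise default to today. Hypothetical sketch.
resolve_date() {
  case "$1" in
    [0-9][0-9][0-9][0-9]-[0-9][0-9]-[0-9][0-9]) echo "$1" ;;
    *) date '+%Y-%m-%d' ;;
  esac
}
```

Usage: `DATE=$(resolve_date "$1")` before `notesmd-cli print "$DATE"`.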