Build and maintain a personal knowledge base using Karpathy's llm-wiki methodology across Claude Code, Codex, and OpenClaw agents.
npx skill4agent add aradotso/trending-skills llm-wiki-skill

Skill by ara.so — Daily 2026 Skills collection.
The wiki connects pages with [[bidirectional links]].
Repository: https://github.com/sdyckjq-lab/llm-wiki-skill

# Claude Code
bash install.sh --platform claude
# Codex
bash install.sh --platform codex
# OpenClaw
bash install.sh --platform openclaw
# Auto-detect (only if one platform directory exists)
bash install.sh --platform auto
# Custom target directory (OpenClaw non-standard path)
bash install.sh --platform openclaw --target-dir /path/to/your/skills

| Platform | Path |
|---|---|
| Claude Code | `~/.claude/skills/llm-wiki/` |
| Codex | `~/.codex/skills/llm-wiki/` |
| OpenClaw | `~/.openclaw/skills/llm-wiki/` |
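The `--platform auto` rule above (install only when exactly one platform directory exists) can be sketched as follows. This is an illustrative guess at the logic, not the installer's actual code, and the `~/.claude`, `~/.codex`, `~/.openclaw` directory names are taken from the paths table:

```shell
# Illustrative sketch of --platform auto resolution (assumed logic, not
# the installer's real code): succeed only when exactly one of the three
# platform skill directories exists under $HOME.
detect_platform() {
  found=""
  count=0
  for p in claude codex openclaw; do
    if [ -d "$HOME/.$p" ]; then
      found="$p"
      count=$((count + 1))
    fi
  done
  if [ "$count" -eq 1 ]; then
    echo "$found"
  else
    echo "auto-detect failed: $count platform directories found; pass --platform explicitly" >&2
    return 1
  fi
}
```

With only `~/.claude` present this prints `claude`; with zero or several candidates it asks you to pass `--platform` explicitly.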
bash setup.sh
# This is now a compatibility shim for the unified installer

# Check Chrome debug mode is running (needed for web extraction)
google-chrome --remote-debugging-port=9222 &
# Check uv is installed (needed for WeChat + YouTube extraction)
uv --version
# Install bun OR npm (one is enough for web extraction deps)
curl -fsSL https://bun.sh/install | bash
# OR: npm is already available in most environments

Platform-specific instructions live in platforms/claude/CLAUDE.md, platforms/codex/AGENTS.md, and platforms/openclaw/README.md.

your-wiki/
├── raw/ # Immutable source material
│ ├── articles/ # Web articles
│ ├── tweets/ # X/Twitter
│ ├── wechat/ # WeChat posts
│ ├── xiaohongshu/ # Xiaohongshu (manual paste only)
│ ├── zhihu/ # Zhihu
│ ├── pdfs/ # PDFs
│ ├── notes/ # Notes
│ └── assets/ # Images, attachments
├── wiki/ # AI-generated knowledge base
│ ├── entities/ # People, concepts, tools
│ ├── topics/ # Topic pages
│ ├── sources/ # Source summaries
│ ├── comparisons/ # Side-by-side analysis
│ └── synthesis/ # Cross-source synthesis
├── index.md # Master index
├── log.md # Operation log
└── .wiki-schema.md        # Config

# Ask your agent:
"Create a new knowledge base at ~/my-wiki"
# The agent will scaffold the directory structure,
# generate index.md, log.md, and .wiki-schema.md

# Agent command pattern:
"Add this article to my wiki: https://example.com/article"
# Under the hood, the agent routes to baoyu-url-to-markdown:
npx baoyu-url-to-markdown https://example.com/article > raw/articles/article-slug.md

# Agent command pattern:
"Digest this YouTube video into my knowledge base: https://youtube.com/watch?v=abc123"
# Uses youtube-transcript via uv:
uvx youtube-transcript https://youtube.com/watch?v=abc123 > raw/articles/video-slug.md

# Agent command pattern:
"Add this WeChat article to my wiki: https://mp.weixin.qq.com/s/..."
# Uses wechat-article-to-markdown via uv:
uvx wechat-article-to-markdown https://mp.weixin.qq.com/s/... > raw/wechat/article-slug.md

# Agent command pattern:
"Process this PDF into my wiki: /path/to/paper.pdf"
# OR drag a file into the chat
# No external tool needed — goes directly into main pipeline:
cp /path/to/paper.pdf raw/pdfs/paper.pdf

# Just paste text to your agent:
"Add these notes to my wiki: [paste content]"
# Agent writes directly to:
echo "your content" > raw/notes/note-slug.md

# Agent command pattern:
"Process all files in ~/Downloads/research into my wiki"
# Agent iterates over files and routes each by type
for f in ~/Downloads/research/*; do
  # agent determines type and processes accordingly
  :
done

# Agent command pattern:
"Run a health check on my knowledge base"
# Agent checks for:
# - Orphaned pages (no incoming links)
# - Broken [[wiki links]]
# - Contradictory information across pages
# - Missing source summaries

| Source Type | Tool Used | Requires |
|---|---|---|
| Web articles | baoyu-url-to-markdown | Chrome debug mode |
| X/Twitter | baoyu-url-to-markdown | Chrome debug mode + X login |
| Zhihu | baoyu-url-to-markdown | Chrome debug mode |
| WeChat | wechat-article-to-markdown | uv |
| YouTube | youtube-transcript | uv |
| Xiaohongshu | Manual paste | Nothing |
| PDF / Markdown / Text | Direct pipeline | Nothing |
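The broken-link half of the health check described above can be sketched as a small shell function. Treat it as an illustration only: the agent performs these checks itself, and the assumption that a link like `[[entities/openai]]` resolves to `wiki/entities/openai.md` is inferred from the directory tree, not stated by the skill:

```shell
# Sketch of a [[wiki link]] checker (illustrative; the agent does this
# itself). Collects every [[target]] under a wiki directory and reports
# targets with no matching page file.
wiki_broken_links() {
  wiki_dir="${1:-wiki}"
  grep -rho '\[\[[^]]*\]\]' "$wiki_dir" \
    | sed 's/^\[\[//; s/\]\]$//' \
    | sort -u \
    | while IFS= read -r target; do
        # assumes [[entities/openai]] resolves to wiki/entities/openai.md
        [ -f "$wiki_dir/$target.md" ] || echo "broken: [[$target]]"
      done
}
```

Orphan detection (pages with no incoming links) would invert the same scan: list every page file, then subtract the set of link targets.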
Source routing is configured in scripts/source-registry.tsv and scripts/source-registry.sh.

# Andrej Karpathy
## Overview
Former OpenAI/Tesla researcher, creator of llm-wiki methodology.
## Key Ideas
- [[llm-wiki]] — compile knowledge once, maintain over time
- [[nanoGPT]] — minimal GPT implementation for education
## Sources
- [[sources/llm-wiki-gist-2024]]
- [[sources/karpathy-interview-2023]]
## Related
- [[topics/language-models]]
- [[entities/openai]]

# Retrieval-Augmented Generation
## Summary
...
## Key Entities
- [[entities/langchain]]
- [[entities/llamaindex]]
## Comparisons
- [[comparisons/rag-vs-finetuning]]
## Sources
- [[sources/rag-paper-2020]]

# Source: Article Title
- **URL**: https://example.com/article
- **Date ingested**: 2026-04-10
- **Type**: web article
## Key Points
1. ...
2. ...
## Entities Mentioned
- [[entities/...]]
## Raw
[[raw/articles/article-slug]]

# Start Chrome with remote debugging enabled
google-chrome --remote-debugging-port=9222 --no-first-run &
# Verify it's running
curl http://localhost:9222/json/version
# For X/Twitter: make sure you're logged in on that Chrome session
# Then retry the extraction

# Install uv if missing
curl -LsSf https://astral.sh/uv/install.sh | sh
source "$HOME/.local/bin/env"   # older uv installers used $HOME/.cargo/env
# Re-run installer to pick up uv
bash install.sh --platform claude # or your platform
# Verify uv tools work
uvx youtube-transcript --help
uvx wechat-article-to-markdown --help

# The installer auto-selects bun or npm — check which is available
which bun && echo "bun found"
which npm && echo "npm found"
# Manually install web extraction deps with npm
npm install -g baoyu-url-to-markdown

# Old path still supported automatically:
~/.Codex/skills # capital C — installer handles both
~/.codex/skills    # lowercase — new default

# Verify install location for your platform
ls ~/.claude/skills/llm-wiki/ # Claude Code
ls ~/.codex/skills/llm-wiki/ # Codex
ls ~/.openclaw/skills/llm-wiki/ # OpenClaw
# Re-run installer if directory is missing
bash install.sh --platform <your-platform>

# Check registered sources and routing
cat scripts/source-registry.tsv
# Test routing for a URL
bash scripts/source-registry.sh route "https://mp.weixin.qq.com/s/example"

Wiki links use the [[name]] convention, and all source material lives under raw/ inside your-wiki/.

- Initialize wiki: "Create a new wiki at <path>"
- Add URL: "Add <url> to my wiki"
- Add file: "Process <file path> into my wiki"
- Add text: "Add these notes to my wiki: <text>"
- Batch process: "Process all files in <folder> into my wiki"
- Health check: "Check my wiki for broken links and orphans"
- Find information: "What does my wiki say about <topic>?"
- Update a page: "Update the [[entity]] page with new info from <source>"
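For the batch-process command, the per-file routing the agent performs could look roughly like this. The extension-to-directory mapping is an assumption based on the raw/ tree shown earlier, not a script shipped with the skill:

```shell
# Illustrative extension-based router into the raw/ tree (assumed
# mapping; the agent decides routing itself).
route_file() {
  f="$1"
  wiki="${2:-.}"
  case "$f" in
    *.pdf)              cp "$f" "$wiki/raw/pdfs/" ;;
    *.md|*.txt)         cp "$f" "$wiki/raw/notes/" ;;
    *.png|*.jpg|*.jpeg) cp "$f" "$wiki/raw/assets/" ;;
    *)                  echo "skip: $f (unknown type)" >&2; return 1 ;;
  esac
}

# e.g. route everything in a folder:
# for f in ~/Downloads/research/*; do route_file "$f" ~/my-wiki; done
```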