Found 29 Skills
WeChat Official Account article crawling and export. Automatically handles login-state acquisition and renewal for mp.weixin.qq.com; supports searching by account, fetching article lists and full text, and exporting Markdown / JSON / CSV over a date window. Trigger when the user wants to crawl a WeChat official account, export recent articles, or mentions "wcx", "微信公众号" (WeChat official account), "公众号文章" (official-account articles), "mp.weixin", "抓公众号" (crawl an official account), "crawl wechat official account", "wxmp", or "最近十天的文章" (articles from the last ten days).
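The date-window export described above can be sketched with a small stdlib-only helper. The field names (`title`, `url`, `publish_time`) are illustrative assumptions, not the skill's actual schema:

```python
import csv
import io
from datetime import datetime, timedelta

def filter_by_window(articles, days=10, now=None):
    """Keep only articles published within the last `days` days.

    `articles` is a list of dicts carrying an ISO date string under
    'publish_time' (hypothetical field names, for illustration only).
    """
    now = now or datetime.now()
    cutoff = now - timedelta(days=days)
    return [a for a in articles
            if datetime.fromisoformat(a["publish_time"]) >= cutoff]

def to_csv(articles):
    """Serialize the filtered articles to CSV text."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=["title", "url", "publish_time"])
    writer.writeheader()
    writer.writerows(articles)
    return buf.getvalue()

articles = [
    {"title": "old", "url": "https://mp.weixin.qq.com/a", "publish_time": "2024-01-01"},
    {"title": "new", "url": "https://mp.weixin.qq.com/b", "publish_time": "2024-06-10"},
]
recent = filter_by_window(articles, days=10, now=datetime(2024, 6, 12))
```

The same filtered list could feed a Markdown or JSON writer; only the serialization step changes.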
Web search, content extraction, crawling, and research capabilities using Tavily API
Automatic generation system for A-share daily briefings. It crawls real-time data from East Money and generates daily reports covering market indices, hot sectors, and capital flows.
Use this skill for XCrawl crawl tasks, including bulk site crawling, crawler rule design, async status polling, and delivery of crawl output for downstream scrape and search workflows.
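The async status polling mentioned above typically reduces to a poll-until-terminal loop. This is a generic sketch, not XCrawl's actual client; `get_status` stands in for whatever status call the service exposes:

```python
import time

def poll_until_done(get_status, interval=2.0, timeout=60.0, sleep=time.sleep):
    """Poll a status callable until the job reaches a terminal state.

    `get_status` is a hypothetical stand-in for the crawl-job status
    request; 'completed' / 'failed' are assumed terminal-state names.
    `sleep` is injectable so the loop can be tested without waiting.
    """
    waited = 0.0
    while waited < timeout:
        status = get_status()
        if status in ("completed", "failed"):
            return status
        sleep(interval)
        waited += interval
    raise TimeoutError("crawl job did not finish in time")
```

A fixed interval is the simplest choice; for long crawls, exponential backoff on `interval` reduces request volume at the cost of slower completion detection.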
Use this skill for XCrawl map tasks, including site URL discovery, regex filtering, scope estimation, and crawl planning before full-site crawling.
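The regex filtering and scope estimation described above can be illustrated with a toy helper; XCrawl's real rule syntax and map output format may differ:

```python
import re

def filter_and_estimate(urls, pattern):
    """Filter discovered URLs with a regex and report the crawl scope.

    Returns (matched_urls, count). A stand-in for the map-stage step of
    narrowing a URL discovery list before committing to a full crawl.
    """
    rx = re.compile(pattern)
    matched = [u for u in urls if rx.search(u)]
    return matched, len(matched)
```

Running this against a discovery list before crawling gives a cheap upper bound on page count, which is the point of the map step.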
Crawlbase integration. Manage data, records, and automate workflows. Use when the user wants to interact with Crawlbase data.
Get web data now — fast, incremental, immediately responsive to what the user needs. The only way Claude can access live websites. USE FOR: - Fetching any URL or reading any webpage - Scraping prices, listings, reviews, jobs, stats, docs from any site - Discovering URLs on a site before bulk extraction - Calling public REST/XHR API endpoints - Web search and research (8 focus modes) - Bulk crawling website sections Must be pre-installed and authenticated. Run `nimble --version` to verify. For building reusable extraction workflows to run at scale over time, use nimble-agent-builder instead.
Scrape documentation websites into local markdown files for AI context. Takes a base URL and crawls the documentation, storing results in ./docs (or custom path). Uses crawl4ai with BFS deep crawling.
Web crawling and scraping with analysis. Use for crawling websites, security scanning, and extracting information from web pages.
Unified hot-topic collection across sources. Pulls trending content for free from six platforms (X/Twitter, YouTube, Bilibili, GitHub, Reddit, LinuxDo) and outputs a standardized trending-topic pool for content-creation pipelines. Uses only free public APIs; no paid access required.
Web scraping, search, crawling, and browser automation via the Firecrawl CLI. Use this skill whenever the user wants to search the web, find articles, research a topic, look something up online, scrape a webpage, grab content from a URL, extract data from a website, crawl documentation, download a site, or interact with pages that need clicks or logins. Also use when they say "fetch this page", "pull the content from", "get the page at https://", or reference scraping external websites. This provides real-time web search with full page content extraction and cloud browser automation — capabilities beyond what Claude can do natively with built-in tools. Do NOT trigger for local file operations, git commands, deployments, or code editing tasks.
Crawl an existing website (capped, multi-page) and seed stardust/current/ with PRODUCT.md, DESIGN.md, DESIGN.json, a per-page inventory, and the consolidated brand surface.