Guide for creating evolving skills - detailed workflow plans that capture what you'll do, what tools you'll create, and learnings from execution. Use this when starting a new task that could benefit from a reusable workflow.
Install it with:

```bash
npx skill4agent add massgen/massgen evolving-skill-creator
```

Each skill lives in its own directory:

```
tasks/evolving_skill/
├── SKILL.md        # Your workflow plan
└── scripts/        # Python tools you create during execution
    ├── scrape_data.py
    └── generate_output.py
```

Every SKILL.md starts with YAML frontmatter declaring `name` and `description`:

---
name: descriptive-skill-name # REQUIRED - used for identification
description: Clear explanation of what this workflow does and when to use it # REQUIRED - used for discovery
---
# Task Name
## Overview
Brief description of the problem this skill solves.
## Workflow
Detailed numbered steps:
1. First step - be specific
2. Second step - include commands/tools to use
3. ...
## Tools to Create
Python scripts you'll write. Document BEFORE writing them:
### scripts/example_tool.py
- **Purpose**: What it does
- **Inputs**: What it takes (args, files, etc.)
- **Outputs**: What it produces
- **Dependencies**: Required packages
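A tool documented this way can start from a small skeleton whose CLI mirrors the Inputs/Outputs you wrote down. This is a sketch with placeholder names (`run`, `example_tool.py`), not a prescribed layout:

```python
"""example_tool.py -- placeholder skeleton matching the doc template above."""
import argparse
import json


def run(input_path):
    """Core logic; returns the structured result described under Outputs."""
    # Placeholder transformation -- replace with the tool's real work.
    return {"source": input_path, "status": "ok"}


def main(argv=None):
    parser = argparse.ArgumentParser(description="What this tool does.")
    parser.add_argument("input_path", help="Input file (see Inputs)")
    parser.add_argument("--output", default="result.json",
                        help="Where to write the JSON result (see Outputs)")
    args = parser.parse_args(argv)

    result = run(args.input_path)
    with open(args.output, "w") as fh:
        json.dump(result, fh, indent=2)
```

Keeping `run()` separate from the argument parsing makes the tool easy to re-test after the skill's Learnings section tells you what to change.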
## Tools to Use
(Discover what's available, list ones you'll use)
- servers/name: MCP server tools
- custom_tools/name: Python tool implementations
## Skills
- skill_name: how it will help
## Packages
- package_name (pip install package_name)
## Expected Outputs
- Files this workflow produces
- Formats and locations
## Learnings
(Add after execution)
### What Worked Well
- ...
### What Didn't Work
- ...
### Tips for Future Use
- ...

For example, a filled-in "Tools to Create" section for a website-building workflow might look like:

## Tools to Create
### scripts/fetch_artist_data.py
- **Purpose**: Crawl Wikipedia and extract artist biographical data
- **Inputs**: artist_name (str), output_path (str)
- **Outputs**: JSON file with structured bio data
- **Dependencies**: crawl4ai, json
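A minimal sketch of how such a tool could begin, assuming crawl4ai's `AsyncWebCrawler` (check the crawl4ai docs before relying on the exact call shape); the bio-structuring logic here is a placeholder:

```python
"""Sketch of fetch_artist_data.py -- crawl a page and emit structured JSON."""
import asyncio
import json


def structure_bio(artist_name, page_markdown):
    """Shape raw page markdown into the structured bio JSON described above."""
    first_paragraph = page_markdown.split("\n\n")[0] if page_markdown else ""
    return {"name": artist_name, "summary": first_paragraph, "source": "wikipedia"}


async def fetch(artist_name, output_path):
    # Imported lazily so the parsing logic above stays importable (and
    # testable) without crawl4ai installed.
    from crawl4ai import AsyncWebCrawler

    url = "https://en.wikipedia.org/wiki/" + artist_name.replace(" ", "_")
    async with AsyncWebCrawler() as crawler:
        result = await crawler.arun(url=url)
    with open(output_path, "w") as fh:
        json.dump(structure_bio(artist_name, result.markdown), fh, indent=2)

# Invoke as: asyncio.run(fetch("Artist Name", "artist_data.json"))
```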
### scripts/build_site.py
- **Purpose**: Generate static HTML from artist data
- **Inputs**: data_path (str), theme (str), output_dir (str)
- **Outputs**: Complete website in output_dir/
- **Dependencies**: jinja2

Create the skill directory up front (`mkdir -p tasks/evolving_skill`) and keep helper code under `scripts/`.

Choose the `name` for the reusable workflow, not the one-off task: `artist-website-builder`, not `bob-dylan-site`. The `description` should explain what the workflow does so it can be discovered later. Good names: `artist-website-builder`, `data-scraper-to-static-site`, `pdf-report-generator`. Bad names: `bob-dylan-project`, `session-12345`, `my-task`.

A complete example SKILL.md, with its tools under `scripts/`:

---
name: artist-website-builder
description: Build static biographical websites for artists by scraping public sources and generating themed HTML.
---
# Artist Website Builder
## Overview
Create professional artist websites by gathering biographical data and generating themed static HTML.
## Workflow
1. Research artist - gather name variations, active years
2. Scrape data using scripts/fetch_artist_data.py
3. Review and clean extracted data
4. Generate site using scripts/build_site.py with "minimalist-dark" theme
5. Review in browser, check mobile responsiveness
6. Iterate on styling if needed
## Tools to Create
### scripts/fetch_artist_data.py
- **Purpose**: Crawl Wikipedia and extract artist biographical data
- **Inputs**: artist_name (str)
- **Outputs**: artist_data.json
- **Dependencies**: crawl4ai
### scripts/build_site.py
- **Purpose**: Generate static HTML from artist data
- **Inputs**: artist_data.json, theme_name
- **Outputs**: Complete website in output/
- **Dependencies**: jinja2
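A rough sketch of this tool's core, assuming a jinja2 `Template`; the inline template is a stub standing in for a real theme file, and the theme argument is omitted for brevity:

```python
"""Sketch of build_site.py -- render index.html from scraped artist data."""
import json
from pathlib import Path

from jinja2 import Template

# Placeholder for a real theme file such as themes/minimalist-dark/index.html.
PAGE = Template("<html><body><h1>{{ name }}</h1><p>{{ summary }}</p></body></html>")


def build(data_path, output_dir):
    """Render the artist data JSON into output_dir/index.html."""
    data = json.loads(Path(data_path).read_text())
    out = Path(output_dir)
    out.mkdir(parents=True, exist_ok=True)
    index = out / "index.html"
    index.write_text(PAGE.render(**data))
    return index
```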
## Tools to Use
- servers/context7: fetching crawl4ai and jinja2 documentation
- servers/browser: capturing site previews for review
- custom_tools/image_optimizer: compressing generated assets
## Skills
- web-scraping-patterns: structuring the crawl4ai approach
## Packages
- crawl4ai (pip install crawl4ai)
- jinja2 (pip install jinja2)
## Expected Outputs
- output/index.html
- output/discography.html
- output/assets/
## Learnings
### What Worked Well
- Wikipedia infoboxes have consistent structure
- crawl4ai async mode is 3x faster than sync
- "minimalist-dark" theme works best for musicians
### What Didn't Work
- AllMusic requires JS rendering - use Discogs API instead
- Initial theme had poor mobile layout
### Tips for Future Use
- Always check robots.txt before scraping
- Cache scraped data - re-running is slow
- Test on mobile early
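The caching tip can be sketched as a thin wrapper that only re-fetches on a cache miss; the cache location and `fetch_fn` here are placeholders:

```python
"""Disk cache for scraped data -- the slow fetch runs at most once per key."""
import json
from pathlib import Path

CACHE_DIR = Path("cache")  # placeholder location


def cached_fetch(key, fetch_fn):
    """Return cached JSON for `key`, calling `fetch_fn(key)` only on a miss."""
    CACHE_DIR.mkdir(exist_ok=True)
    path = CACHE_DIR / f"{key}.json"
    if path.exists():
        return json.loads(path.read_text())
    data = fetch_fn(key)  # slow network call happens only once per key
    path.write_text(json.dumps(data))
    return data
```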