Evolving Skill Creator

Create evolving skills - detailed workflow plans that become reusable through iteration.

What is an Evolving Skill?


An evolving skill is a workflow plan that:
  1. Documents specific steps to accomplish a goal
  2. Lists Python scripts you'll create as reusable tools
  3. Captures learnings after execution for future improvement
Unlike static skills, evolving skills are refined through use.

Directory Structure


tasks/evolving_skill/
├── SKILL.md              # Your workflow plan
└── scripts/              # Python tools you create during execution
    ├── scrape_data.py
    └── generate_output.py

SKILL.md Format


IMPORTANT: YAML Frontmatter is Required
Every evolving skill MUST start with YAML frontmatter containing `name` and `description`. These fields are critical for skill discovery - they determine how the skill is identified when loaded in future sessions.

```yaml
---
name: descriptive-skill-name    # REQUIRED - used for identification
description: Clear explanation of what this workflow does and when to use it  # REQUIRED - used for discovery
---
```
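Since `name` and `description` drive discovery, it helps to validate them before saving a skill. A minimal stdlib sketch (the naive `key: value` parsing and the sample text are illustrative; a real loader would use a YAML library):

```python
def parse_frontmatter(text: str) -> dict:
    """Extract simple key: value pairs from a leading YAML frontmatter block."""
    lines = text.splitlines()
    if not lines or lines[0].strip() != "---":
        raise ValueError("SKILL.md must start with '---' frontmatter")
    fields = {}
    for line in lines[1:]:
        if line.strip() == "---":
            break
        key, _, value = line.partition(":")
        # Drop trailing "# comment" annotations like the ones in the template
        fields[key.strip()] = value.split("#")[0].strip()
    for required in ("name", "description"):
        if not fields.get(required):
            raise ValueError(f"frontmatter is missing required field: {required}")
    return fields

skill = parse_frontmatter(
    "---\n"
    "name: artist-website-builder\n"
    "description: Build static artist sites from scraped data\n"
    "---\n"
    "# Artist Website Builder\n"
)
print(skill["name"])  # artist-website-builder
```

Failing fast here keeps an unnamed skill from silently becoming undiscoverable in later sessions.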

Task Name


Overview


Brief description of the problem this skill solves.

Workflow


Detailed numbered steps:
  1. First step - be specific
  2. Second step - include commands/tools to use
  3. ...

Tools to Create


Python scripts you'll write. Document BEFORE writing them:

scripts/example_tool.py


  • Purpose: What it does
  • Inputs: What it takes (args, files, etc.)
  • Outputs: What it produces
  • Dependencies: Required packages
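The four bullets above map naturally onto a script's module docstring. A hypothetical `scripts/example_tool.py` skeleton (the tool's job here - counting lines - is a stand-in; stdlib only):

```python
"""example_tool.py

Purpose: Count lines in a text file and write a one-line report.
Inputs: input_path (str), output_path (str)
Outputs: a text report at output_path
Dependencies: none (stdlib only)
"""
import os
import tempfile


def run(input_path: str, output_path: str) -> str:
    with open(input_path, encoding="utf-8") as f:
        count = sum(1 for _ in f)
    report = f"{os.path.basename(input_path)}: {count} lines"
    with open(output_path, "w", encoding="utf-8") as f:
        f.write(report)
    return report


# Demo with a throwaway input file
workdir = tempfile.mkdtemp()
src = os.path.join(workdir, "in.txt")
with open(src, "w", encoding="utf-8") as f:
    f.write("line one\nline two\n")
out = os.path.join(workdir, "report.txt")
print(run(src, out))  # in.txt: 2 lines
```

A real tool would add an argparse entry point so the documented inputs become CLI arguments.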

Tools to Use


(Discover what's available, list ones you'll use)
  • servers/name: MCP server tools
  • custom_tools/name: Python tool implementations

Skills


  • skill_name: how it will help

Packages


  • package_name (pip install package_name)

Expected Outputs


  • Files this workflow produces
  • Formats and locations

Learnings


(Add after execution)

What Worked Well


  • ...
  • ...

What Didn't Work


  • ...
  • ...

Tips for Future Use


  • ...
  • ...

Tools to Create Section


This is the key differentiator. When your workflow involves writing Python scripts, document them upfront:

Tools to Create


scripts/fetch_artist_data.py


  • Purpose: Crawl Wikipedia and extract artist biographical data
  • Inputs: artist_name (str), output_path (str)
  • Outputs: JSON file with structured bio data
  • Dependencies: crawl4ai, json

scripts/build_site.py


  • Purpose: Generate static HTML from artist data
  • Inputs: data_path (str), theme (str), output_dir (str)
  • Outputs: Complete website in output_dir/
  • Dependencies: jinja2

After execution, the actual scripts live in `scripts/` and can be reused.
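Documenting the interface upfront means the script can be stubbed before the real implementation exists. A runnable sketch of the documented `build_site.py` signature, using stdlib `string.Template` in place of the jinja2 dependency it lists (the data fields and markup are illustrative):

```python
import json
import os
import tempfile
from string import Template

# Minimal stand-in for a jinja2 theme template
PAGE = Template("<html><body class='$theme'><h1>$name</h1><p>$bio</p></body></html>")


def build_site(data_path: str, theme: str, output_dir: str) -> str:
    """Generate static HTML from artist data, per the documented interface."""
    with open(data_path, encoding="utf-8") as f:
        data = json.load(f)
    os.makedirs(output_dir, exist_ok=True)
    index = os.path.join(output_dir, "index.html")
    with open(index, "w", encoding="utf-8") as f:
        f.write(PAGE.substitute(theme=theme, name=data["name"], bio=data["bio"]))
    return index


# Demo with throwaway data matching the documented inputs
workdir = tempfile.mkdtemp()
data_path = os.path.join(workdir, "artist_data.json")
with open(data_path, "w", encoding="utf-8") as f:
    json.dump({"name": "Example Artist", "bio": "A short biography."}, f)
index = build_site(data_path, "minimalist-dark", os.path.join(workdir, "output"))
```

Because the signature matches the documentation, swapping the stub's body for a jinja2 implementation later does not disturb the rest of the workflow.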

Creating an Evolving Skill


  1. Create directory: `mkdir -p tasks/evolving_skill`
  2. Write SKILL.md with proper YAML frontmatter first:
    • `name`: Use a descriptive, reusable name (e.g., `artist-website-builder`, not `bob-dylan-site`)
    • `description`: Explain what the workflow does and when to use it
  3. Execute workflow following your plan
  4. Create scripts as documented in Tools to Create
  5. Update SKILL.md with Learnings after completion
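Steps 1-2 can themselves be scripted. A sketch that scaffolds the directory and a SKILL.md stub (written into a temporary directory here so it is safe to run anywhere; the stub's section layout is one possible shape):

```python
import os
import tempfile

SKILL_TEMPLATE = """---
name: {name}
description: {description}
---

# {title}

## Overview
...

## Workflow
1. ...

## Tools to Create
...

## Learnings
(Add after execution)
"""


def scaffold(root: str, name: str, description: str) -> str:
    """Create tasks/evolving_skill/ with scripts/ and a SKILL.md stub."""
    skill_dir = os.path.join(root, "tasks", "evolving_skill")
    os.makedirs(os.path.join(skill_dir, "scripts"), exist_ok=True)
    path = os.path.join(skill_dir, "SKILL.md")
    title = name.replace("-", " ").title()
    with open(path, "w", encoding="utf-8") as f:
        f.write(SKILL_TEMPLATE.format(name=name, description=description, title=title))
    return path


path = scaffold(tempfile.mkdtemp(), "artist-website-builder",
                "Build static artist sites from scraped data")
```

Writing the frontmatter at scaffold time ensures the required fields exist before any workflow steps run.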

Naming Guidelines


Choose names that describe the type of task, not the specific instance:
  • Good: `artist-website-builder`, `data-scraper-to-static-site`, `pdf-report-generator`
  • Bad: `bob-dylan-project`, `session-12345`, `my-task`
The name should make it clear what the skill does when discovered in future sessions.

Updating After Execution


After completing your work:
  1. Refine Workflow - Update steps based on what actually worked
  2. Move scripts - Ensure working scripts are in `scripts/`
  3. Add Learnings - Document what worked, what didn't, tips

Example: Complete Evolving Skill


```yaml
---
name: artist-website-builder
description: Build static biographical websites for artists by scraping public sources and generating themed HTML.
---
```

Artist Website Builder


Overview


Create professional artist websites by gathering biographical data and generating themed static HTML.

Workflow


  1. Research artist - gather name variations, active years
  2. Scrape data using scripts/fetch_artist_data.py
  3. Review and clean extracted data
  4. Generate site using scripts/build_site.py with "minimalist-dark" theme
  5. Review in browser, check mobile responsiveness
  6. Iterate on styling if needed

Tools to Create


scripts/fetch_artist_data.py


  • Purpose: Crawl Wikipedia and extract artist biographical data
  • Inputs: artist_name (str)
  • Outputs: artist_data.json
  • Dependencies: crawl4ai
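The fetch itself relies on crawl4ai, but the extraction step it feeds can be sketched with stdlib `html.parser`; the table below is a simplified stand-in for a real Wikipedia infobox:

```python
from html.parser import HTMLParser


class InfoboxParser(HTMLParser):
    """Collect th/td text pairs from a table, mimicking infobox rows."""

    def __init__(self):
        super().__init__()
        self.fields = {}
        self._cell = None   # "th" or "td" while inside a cell
        self._key = None

    def handle_starttag(self, tag, attrs):
        if tag in ("th", "td"):
            self._cell = tag

    def handle_data(self, data):
        text = data.strip()
        if not text or self._cell is None:
            return
        if self._cell == "th":
            self._key = text
        elif self._key:
            self.fields[self._key] = text
            self._key = None

    def handle_endtag(self, tag):
        if tag in ("th", "td"):
            self._cell = None


parser = InfoboxParser()
parser.feed(
    "<table>"
    "<tr><th>Born</th><td>May 24, 1941</td></tr>"
    "<tr><th>Genres</th><td>Folk rock</td></tr>"
    "</table>"
)
print(parser.fields)  # {'Born': 'May 24, 1941', 'Genres': 'Folk rock'}
```

Keeping the extraction logic separate from fetching is what makes the "consistent infobox structure" learning below reusable across artists.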

scripts/build_site.py


  • Purpose: Generate static HTML from artist data
  • Inputs: artist_data.json, theme_name
  • Outputs: Complete website in output/
  • Dependencies: jinja2

Tools to Use


  • servers/context7: fetching crawl4ai and jinja2 documentation
  • servers/browser: capturing site previews for review
  • custom_tools/image_optimizer: compressing generated assets

Skills


  • web-scraping-patterns: structuring the crawl4ai approach

Packages


  • crawl4ai (pip install crawl4ai)
  • jinja2 (pip install jinja2)

Expected Outputs


  • output/index.html
  • output/discography.html
  • output/assets/

Learnings


What Worked Well


  • Wikipedia infoboxes have consistent structure
  • crawl4ai async mode is 3x faster than sync
  • "minimalist-dark" theme works best for musicians

What Didn't Work


  • AllMusic requires JS rendering - use Discogs API instead
  • Initial theme had poor mobile layout

Tips for Future Use


  • Always check robots.txt before scraping
  • Cache scraped data - re-running is slow
  • Test on mobile early
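The caching tip can be as simple as checking for `artist_data.json` before re-fetching. A sketch, with `fake_fetch` standing in for the slow scraper:

```python
import json
import os
import tempfile


def cached_fetch(cache_path: str, fetch_fn):
    """Return cached data if present; otherwise fetch once and save."""
    if os.path.exists(cache_path):
        with open(cache_path, encoding="utf-8") as f:
            return json.load(f)
    data = fetch_fn()
    with open(cache_path, "w", encoding="utf-8") as f:
        json.dump(data, f)
    return data


calls = []


def fake_fetch():
    calls.append(1)  # track how many times the slow path runs
    return {"name": "Example Artist"}


cache = os.path.join(tempfile.mkdtemp(), "artist_data.json")
first = cached_fetch(cache, fake_fetch)
second = cached_fetch(cache, fake_fetch)
print(len(calls))  # 1 - the second call hit the cache
```

Wrapping the scraper this way also makes the workflow safe to re-run from any step without repeating the expensive crawl.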

Key Principles


  1. Be specific - Workflow steps should be actionable, not vague
  2. Document tools upfront - Plan scripts before writing them
  3. Test like a user - Verify artifacts through interaction, not just observation (click buttons, play games, navigate pages, run with edge cases, etc.)
  4. Update with learnings - The skill improves through use
  5. Keep scripts reusable - Design tools to work in similar future tasks