analyze-plugin
Plugin & Skill Analyzer
Perform deep structural and content analysis on agent plugins and skills. Extract reusable patterns that feed the virtuous cycle of continuous improvement.
Two Analysis Modes
Single Plugin Mode
Deep-dive into one plugin. Use when you want to fully understand a plugin's architecture.
Comparative Mode
Analyze multiple plugins side-by-side. Use when looking for common patterns across a collection.
Analysis Framework
Execute these six phases sequentially. Do not skip phases.
Phase 1: Inventory
Run the deterministic inventory script first:
```bash
python3 "plugins/agent-plugin-analyzer/skills/analyze-plugin/scripts/inventory_plugin.py" --path <plugin-dir> --format json
```

If the script is unavailable, manually enumerate:

- Walk the directory tree
- Classify every file by type:
  - `SKILL.md` -> Skill definition
  - `commands/*.md` -> Command definition
  - `references/*.md` -> Reference material (progressive disclosure)
  - `scripts/*.py` -> Executable scripts (must live inside the skill folder, not at plugin level)
  - `README.md` -> Plugin documentation
  - `CONNECTORS.md` -> Connector abstractions
  - `plugin.json` -> Plugin manifest
  - `*.json` -> Configuration (MCP, hooks, etc.)
  - `*.yaml` / `*.yml` -> Pipeline/config data
  - `*.html` -> Artifact templates
  - `*.mmd` -> Architecture diagrams
  - Other -> Assets/misc
- Record for each file: path, type, line count, byte size
- Output a structured inventory as a markdown checklist with one checkbox per file
Phase 2: Structure Analysis
Evaluate the plugin's architectural decisions:
| Dimension | What to Look For |
|---|---|
| Layout | How are skills/commands/references organized? Flat vs nested? |
| Progressive Disclosure | Is SKILL.md lean (<500 lines) with depth in `references/`? |
| Component Ratios | Skills vs commands vs scripts — what's the balance? |
| Naming Patterns | Are names descriptive? Follow kebab-case? Use gerund form? |
| README Quality | Does it have a file tree? Usage examples? Architecture diagram? |
| CONNECTORS.md | Does it use connector abstractions? |
| Standalone vs Supercharged | Can it work without MCP tools? What's enhanced with them? |
Phase 3: Content Analysis
For each file, load the appropriate question set from `references/analysis-questions-by-type.md` and work through every checkbox. See the process diagram in `analyze-plugin-flow.mmd` for the full pipeline visualization.
For each SKILL.md, evaluate:
Frontmatter Quality:
- Is the `description` written in third person?
- Does it include specific trigger phrases?
- Is it under 1024 characters?
- Does it clearly state WHEN to trigger?
Body Structure:
- Does it have a clear execution flow (numbered phases/steps)?
- Are there decision trees or branching logic?
- Does it use tables for structured information?
- Are there output templates or format specifications?
- Does it link to `references/` for deep content?
Interaction Design:
- Does it use guided discovery interviews before execution?
- What question types are used? (open-ended, numbered options, yes/no, table-based comparisons)
- Does it present smart defaults with override options?
- Are there confirmation gates before expensive/irreversible operations?
- Does it use recap-before-execute to verify understanding?
- Does it offer numbered next-action menus after completion?
- Does it negotiate output format with the user?
- Are there inline progress indicators during multi-step workflows?
For Commands, evaluate:
- Are they written as instructions FOR the agent (not documentation for users)?
- Do they specify required arguments?
- Do they reference MCP tools with full namespaces?
For Reference Files, evaluate:
- Do they contain domain-specific deep knowledge?
- Are they organized by topic/domain?
- Do files >100 lines have a table of contents?
For Scripts, evaluate:
- Are they Python-only (no .sh/.ps1)?
- Do they have `--help` documentation?
- Do they handle errors gracefully?
- Are they cross-platform compatible?
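A minimal script skeleton that satisfies this checklist might look like the following; the names and output shape are illustrative, not taken from any real plugin script:

```python
import argparse
import json
import sys
from pathlib import Path

def main(argv=None) -> int:
    # argparse gives --help documentation for free.
    parser = argparse.ArgumentParser(
        description="Illustrative analyzer script skeleton."
    )
    parser.add_argument("--path", required=True, help="plugin directory to analyze")
    parser.add_argument("--format", choices=["json", "text"], default="json")
    args = parser.parse_args(argv)

    target = Path(args.path)  # pathlib keeps path handling cross-platform
    if not target.is_dir():
        # Graceful failure: a clear message and a non-zero exit, no traceback.
        print(f"error: not a directory: {target}", file=sys.stderr)
        return 1

    result = {"path": str(target),
              "files": sum(1 for p in target.rglob("*") if p.is_file())}
    if args.format == "json":
        print(json.dumps(result))
    else:
        print(f"{result['path']}: {result['files']} files")
    return 0

# Entry point omitted here; in a real script, add
# `if __name__ == "__main__": sys.exit(main())`.
```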
Phase 4: Pattern Extraction
Identify instances of known patterns from `references/pattern-catalog.md`. Also watch for novel patterns not yet cataloged.
For each pattern found, document:
Pattern: [name]
Plugin: [where found]
File: [specific file]
Description: [how it's used here]
Quality: [exemplary / good / basic]
Reusability: [high / medium / low]
Confidence: [high (≥3 plugins) / medium (2) / low (1)]
Lifecycle: [proposed / validated / canonical / deprecated]
Before adding a new pattern, check the catalog's deduplication rules. If an existing pattern covers ≥80% of the behavior, update its frequency instead.
Key pattern categories to search for:
- Architectural Patterns — Standalone/supercharged, connector abstraction, meta-skills
- Execution Patterns — Phase-based workflows, decision trees, bootstrap/iteration modes
- Content Patterns — Severity frameworks, confidence scoring, priority tiers, checklists
- Output Patterns — HTML artifacts, structured tables, ASCII diagrams, template systems
- Knowledge Patterns — Progressive disclosure, dialect tables, domain references, tribal knowledge extraction
- Interaction Design Patterns — Discovery interviews, option menus, confirmation gates, smart defaults, recap-before-execute, output format negotiation, progress indicators
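The deduplication rule can be sketched as set overlap between behavior tags. The tag representation below is an assumption; only the ≥80% threshold and the ≥3/2/1-plugin confidence tiers come from this document:

```python
from dataclasses import dataclass

@dataclass
class Pattern:
    name: str
    behaviors: set[str]          # assumed: behaviors tagged as short strings
    frequency: int = 1           # number of plugins exhibiting the pattern
    lifecycle: str = "proposed"  # proposed / validated / canonical / deprecated

def confidence(frequency: int) -> str:
    # high (>=3 plugins) / medium (2) / low (1), per the record template above
    return "high" if frequency >= 3 else "medium" if frequency == 2 else "low"

def register(catalog: list[Pattern], candidate: Pattern) -> Pattern:
    """Apply the >=80% deduplication rule before adding a new pattern."""
    for existing in catalog:
        overlap = len(existing.behaviors & candidate.behaviors) / len(candidate.behaviors)
        if overlap >= 0.8:
            existing.frequency += 1  # already covered: bump frequency instead
            return existing
    catalog.append(candidate)
    return candidate
```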
Phase 5: Anti-Pattern & Security Detection
Load the full check tables from `references/security-checks.md`.
Execution order:
- Run security checks FIRST (P0 — Critical severity items)
- Then run structural anti-pattern checks
- Apply contextual severity based on plugin type/complexity
- Flag any LLM-native attack vectors (skill impersonation, context poisoning, injection via references)
If `inventory_plugin.py` was run with `--security`, use its deterministic findings as ground truth.
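Consuming those deterministic findings might look like the sketch below; the `--security` flag and the critical-first ordering come from this phase, while the JSON field names (`security_findings`, `severity`) are assumptions:

```python
import json
import subprocess

SCRIPT = "plugins/agent-plugin-analyzer/skills/analyze-plugin/scripts/inventory_plugin.py"

def by_severity(findings: list[dict]) -> list[dict]:
    """Order findings so P0/Critical items surface first."""
    order = {"critical": 0, "high": 1, "medium": 2, "low": 3}
    return sorted(findings, key=lambda f: order.get(f.get("severity", "low"), 3))

def deterministic_findings(plugin_dir: str) -> list[dict]:
    proc = subprocess.run(
        ["python3", SCRIPT, "--path", plugin_dir, "--format", "json", "--security"],
        capture_output=True, text=True, check=True,
    )
    report = json.loads(proc.stdout)
    # Assumed shape: the report exposes a list under "security_findings".
    return by_severity(report.get("security_findings", []))
```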
Phase 6: Synthesis & Scoring
Load the maturity model and scoring rubric from `references/maturity-model.md`.
Steps:
- Assign maturity level (L1-L5)
- Score each of the 6 dimensions (1-5) using the weighted rubric
- Calculate overall score (weighted average, Scoring v2.0)
- Generate the summary report using the template
- For comparative mode, generate the Ecosystem Scorecard
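The weighted average in step 3 can be sketched as below; the dimension names and weights are placeholders, not the actual Scoring v2.0 rubric from `references/maturity-model.md`:

```python
def overall_score(scores: dict[str, int], weights: dict[str, float]) -> float:
    """Weighted average of per-dimension scores, each on a 1-5 scale."""
    assert set(scores) == set(weights), "score every weighted dimension"
    return sum(scores[d] * weights[d] for d in scores) / sum(weights.values())

# Placeholder dimensions and weights; substitute the real rubric.
WEIGHTS = {
    "structure": 0.20, "content": 0.20, "patterns": 0.15,
    "security": 0.20, "interaction": 0.15, "documentation": 0.10,
}
```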
Output
Generate a structured markdown report. For single plugins, output inline. For collections, create an artifact file with the full analysis.
Iteration Directory Isolation: All analysis reports must be saved into explicitly versioned and isolated outputs (e.g. `analysis-reports/target-run-1/`) to prevent destructive overrides on re-runs.
Asynchronous Benchmark Metric Capture: Once the audit run completes, immediately log the resulting `total_tokens` and `duration_ms` to a `timing.json` file to calculate the cost of the deep-dive analysis.
Always end with Virtuous Cycle Recommendations: specific, actionable improvements for `agent-plugin-analyzer` (this plugin), `agent-scaffolders`, and `agent-skill-open-specifications` based on patterns discovered.
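The benchmark capture can be sketched as below; the field names `total_tokens` and `duration_ms` and the `timing.json` filename come from this section, while the directory convention and measurement approach are assumptions:

```python
import json
import time
from pathlib import Path

def capture_benchmark(run_dir: str, total_tokens: int, started_at: float) -> Path:
    """Log token count and wall-clock duration for the completed audit run."""
    out_dir = Path(run_dir)  # an isolated, versioned run directory
    out_dir.mkdir(parents=True, exist_ok=True)
    record = {
        "total_tokens": total_tokens,
        "duration_ms": int((time.monotonic() - started_at) * 1000),
    }
    path = out_dir / "timing.json"
    path.write_text(json.dumps(record, indent=2))
    return path
```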