# Dual Axis Skill Reviewer
Run the dual-axis reviewer script and save reports to `reports/`. The script supports:

- Random or fixed skill selection
- Auto-axis scoring with optional test execution
- LLM prompt generation
- LLM JSON review merge with weighted final score
- Cross-project review via `--project-root`
## When to Use
- Need reproducible scoring for one skill in `skills/*/SKILL.md`.
- Need improvement items when the final score is below 90.
- Need both deterministic checks and qualitative LLM code/content review.
- Need to review skills in a different project from the command line.
## Prerequisites
- Python 3.9+
- `uv` (recommended; auto-resolves the `pyyaml` dependency via inline metadata)
- For tests: `uv sync --extra dev` or equivalent in the target project
- For LLM-axis merge: a JSON file that follows the LLM review schema (see Resources)
## Workflow
Determine the correct script path based on your context:

- Same project: `skills/dual-axis-skill-reviewer/scripts/run_dual_axis_review.py`
- Global install: `~/.claude/skills/dual-axis-skill-reviewer/scripts/run_dual_axis_review.py`

The examples below use `REVIEWER` as a placeholder. Set it once.
If reviewing from the same project:
```bash
REVIEWER=skills/dual-axis-skill-reviewer/scripts/run_dual_axis_review.py
```
If reviewing another project (global install):
```bash
REVIEWER=~/.claude/skills/dual-axis-skill-reviewer/scripts/run_dual_axis_review.py
```
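If you run reviews often, the two documented locations can be chosen automatically. A minimal sketch (not part of the tool) that prefers the in-repo copy and falls back to the global install:

```shell
# Sketch only: prefer the in-repo script, fall back to the global install.
# Both paths are the documented locations; nothing here is required by the tool.
LOCAL=skills/dual-axis-skill-reviewer/scripts/run_dual_axis_review.py
GLOBAL="$HOME/.claude/skills/dual-axis-skill-reviewer/scripts/run_dual_axis_review.py"
if [ -f "$LOCAL" ]; then
  REVIEWER="$LOCAL"
else
  REVIEWER="$GLOBAL"
fi
echo "$REVIEWER"
```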
### Step 1: Run Auto Axis + Generate LLM Prompt
```bash
uv run "$REVIEWER" \
  --project-root . \
  --emit-llm-prompt \
  --output-dir reports/
```

When reviewing a different project, point `--project-root` to it:

```bash
uv run "$REVIEWER" \
  --project-root /path/to/other/project \
  --emit-llm-prompt \
  --output-dir reports/
```
### Step 2: Run LLM Review
- Use the generated prompt file in `reports/skill_review_prompt_<skill>_<timestamp>.md`.
- Ask the LLM to return strict JSON output.
- When running inside Claude Code, let Claude act as orchestrator: read the generated prompt, produce the LLM review JSON, and save it for the merge step.
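Before moving to the merge step, it can help to confirm the saved review really is strict JSON. A minimal check (the file name and contents below are hypothetical examples, not the script's schema):

```shell
# Hypothetical example file; a real review must follow the LLM review schema
# in references/llm_review_schema.md.
cat > /tmp/llm_review_example.json <<'EOF'
{"llm_score": 88, "findings": []}
EOF
# python3 -m json.tool exits non-zero on invalid JSON.
if python3 -m json.tool /tmp/llm_review_example.json >/dev/null 2>&1; then
  status="valid JSON"
else
  status="invalid JSON"
fi
echo "$status"
```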
### Step 3: Merge Auto + LLM Axes
```bash
uv run "$REVIEWER" \
  --project-root . \
  --skill <skill-name> \
  --llm-review-json <path-to-llm-review.json> \
  --auto-weight 0.5 \
  --llm-weight 0.5 \
  --output-dir reports/
```
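The final score is the weighted average of the two axes, so with the 0.5/0.5 weights above it is simply the mean. An illustrative calculation (example scores, not output from the script):

```shell
# Example axis scores (illustrative, not script output).
auto_score=92
llm_score=84
# With --auto-weight 0.5 --llm-weight 0.5: final = 92*0.5 + 84*0.5 = 88.0
final=$(awk -v a="$auto_score" -v l="$llm_score" \
  'BEGIN { printf "%.1f", a * 0.5 + l * 0.5 }')
echo "final=$final"
```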
### Step 4: Optional Controls
- Fix selection for reproducibility: `--skill <name>` or `--seed <int>`
- Review all skills at once: `--all`
- Skip tests for quick triage: `--skip-tests`
- Change report location: `--output-dir <dir>`
- Increase `--auto-weight` for stricter deterministic gating.
- Increase `--llm-weight` when qualitative/code-review depth is prioritized.
## Output
- `reports/skill_review_<skill>_<timestamp>.json`
- `reports/skill_review_<skill>_<timestamp>.md`
- `reports/skill_review_prompt_<skill>_<timestamp>.md` (when `--emit-llm-prompt` is enabled)
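The JSON report can feed other tooling, for example to gate on the 90 threshold. A sketch assuming a hypothetical `final_score` field (check an actual report for the real key names):

```shell
# Hypothetical report shape; the real key names may differ.
cat > /tmp/skill_review_example.json <<'EOF'
{"skill": "example-skill", "final_score": 87.5}
EOF
verdict=$(python3 -c '
import json
report = json.load(open("/tmp/skill_review_example.json"))
# Scores below 90 require improvement items per the rubric.
print("needs improvement" if report["final_score"] < 90 else "pass")
')
echo "$verdict"
```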
## Installation (Global)
To use this skill from any project, symlink it into `~/.claude/skills/`:

```bash
ln -sfn /path/to/claude-trading-skills/skills/dual-axis-skill-reviewer \
  ~/.claude/skills/dual-axis-skill-reviewer
```

After this, Claude Code will discover the skill in all projects, and the script is accessible at `~/.claude/skills/dual-axis-skill-reviewer/scripts/run_dual_axis_review.py`.
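`ln -sfn` makes a symbolic link (`-s`), replaces an existing link (`-f`), and treats an existing directory-link as the link itself rather than descending into it (`-n`), so rerunning the command is safe. A self-contained demonstration in a scratch directory:

```shell
# Demonstrates ln -sfn behavior in a throwaway directory.
tmp=$(mktemp -d)
mkdir -p "$tmp/src/dual-axis-skill-reviewer"
# Create the link, then run the same command again to show it is idempotent.
ln -sfn "$tmp/src/dual-axis-skill-reviewer" "$tmp/dual-axis-skill-reviewer"
ln -sfn "$tmp/src/dual-axis-skill-reviewer" "$tmp/dual-axis-skill-reviewer"
target=$(readlink "$tmp/dual-axis-skill-reviewer")
echo "$target"
```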
## Resources
- Auto axis scores metadata, workflow coverage, execution safety, artifact presence, and test health.
- Auto axis detects `knowledge_only` skills and adjusts script/test expectations to avoid unfair penalties.
- LLM axis scores deep content quality (correctness, risk, missing logic, maintainability).
- Final score is a weighted average.
- If the final score is below 90, improvement items are required and listed in the markdown report.
- Script: `skills/dual-axis-skill-reviewer/scripts/run_dual_axis_review.py`
- LLM schema: `skills/dual-axis-skill-reviewer/references/llm_review_schema.md`
- Rubric detail: `skills/dual-axis-skill-reviewer/references/scoring_rubric.md`