Review skills in any project using a dual-axis method: (1) deterministic code-based checks (structure, scripts, tests, execution safety) and (2) LLM deep review findings. Use when you need reproducible quality scoring for `skills/*/SKILL.md`, want to gate merges with a score threshold (for example 90+), or need concrete improvement items for low-scoring skills. Works across projects via `--project-root`.
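The final score blends the two axes. As a rough sketch, assuming a plain weighted average (the actual formula is implemented in `run_dual_axis_review.py`; the weights correspond to the `--auto-weight` / `--llm-weight` flags):

```python
def combined_score(auto_score: float, llm_score: float,
                   auto_weight: float = 0.5, llm_weight: float = 0.5) -> float:
    """Blend deterministic-check and LLM-review scores (0-100 scale assumed).

    A weighted average is an assumption here; see the script for the
    authoritative formula.
    """
    total = auto_weight + llm_weight
    return (auto_weight * auto_score + llm_weight * llm_score) / total


# Gate a merge on the 90+ threshold mentioned above:
score = combined_score(auto_score=95.0, llm_score=88.0)
print(score >= 90.0)  # True: 0.5*95 + 0.5*88 = 91.5
```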
## Installation

```shell
npx skill4agent add tradermonty/claude-trading-skills dual-axis-skill-reviewer
```

Review reports are written to `reports/`. The reviewer scans `skills/*/SKILL.md` in the target project selected via `--project-root`.

## Requirements

`uv` and `pyyaml`. Install the dev dependencies with:

```shell
uv sync --extra dev
```

## Usage

The review script lives at `skills/dual-axis-skill-reviewer/scripts/run_dual_axis_review.py` inside this repository, or at `~/.claude/skills/dual-axis-skill-reviewer/scripts/run_dual_axis_review.py` after a global install. Point a `REVIEWER` variable at the right copy:

```shell
# If reviewing from the same project:
REVIEWER=skills/dual-axis-skill-reviewer/scripts/run_dual_axis_review.py

# If reviewing another project (global install):
REVIEWER=~/.claude/skills/dual-axis-skill-reviewer/scripts/run_dual_axis_review.py
```

Run the deterministic checks and emit the LLM review prompt:

```shell
uv run "$REVIEWER" \
  --project-root . \
  --emit-llm-prompt \
  --output-dir reports/
```

To review another project, change `--project-root`:

```shell
uv run "$REVIEWER" \
  --project-root /path/to/other/project \
  --emit-llm-prompt \
  --output-dir reports/
```

The LLM prompt is written to `reports/skill_review_prompt_<skill>_<timestamp>.md`. Once the LLM review has been saved as JSON, combine both axes into a final score:

```shell
uv run "$REVIEWER" \
  --project-root . \
  --skill <skill-name> \
  --llm-review-json <path-to-llm-review.json> \
  --auto-weight 0.5 \
  --llm-weight 0.5 \
  --output-dir reports/
```

## Options

- `--skill <name>`
- `--seed <int>`
- `--all`
- `--skip-tests`
- `--output-dir <dir>`
- `--auto-weight` / `--llm-weight`

## Outputs

- `reports/skill_review_<skill>_<timestamp>.json`
- `reports/skill_review_<skill>_<timestamp>.md`
- `reports/skill_review_prompt_<skill>_<timestamp>.md` (with `--emit-llm-prompt`)

## Global install

Symlink the skill into `~/.claude/skills/`:

```shell
ln -sfn /path/to/claude-trading-skills/skills/dual-axis-skill-reviewer \
  ~/.claude/skills/dual-axis-skill-reviewer
```

The script is then available at `~/.claude/skills/dual-axis-skill-reviewer/scripts/run_dual_axis_review.py`. `knowledge_only` skills are also supported.

## Files

- `skills/dual-axis-skill-reviewer/scripts/run_dual_axis_review.py`
- `skills/dual-axis-skill-reviewer/references/llm_review_schema.md`
- `skills/dual-axis-skill-reviewer/references/scoring_rubric.md`
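The JSON report can back a merge gate in CI. A minimal sketch, assuming the report exposes a top-level combined score field (`"combined_score"` is a hypothetical key name; check the actual `skill_review_<skill>_<timestamp>.json` against `references/llm_review_schema.md`):

```python
import json
import sys
from pathlib import Path

THRESHOLD = 90.0  # merge-gate threshold from the skill description


def gate_report(path: str, threshold: float = THRESHOLD) -> bool:
    """Return True when the report's score meets the threshold.

    "combined_score" is an assumed key; adjust it to match the real
    skill_review_<skill>_<timestamp>.json schema.
    """
    report = json.loads(Path(path).read_text())
    return float(report.get("combined_score", 0.0)) >= threshold


if __name__ == "__main__":
    # e.g. python gate.py reports/skill_review_<skill>_<timestamp>.json
    sys.exit(0 if gate_report(sys.argv[1]) else 1)
```

A nonzero exit code fails the CI job, so low-scoring skills block the merge until their improvement items are addressed.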