Skill-Scan — Security Auditor for Agent Skills
Multi-layered security scanner for OpenClaw skill packages. Detects malicious code, evasion techniques, prompt injection, and misaligned behavior through static analysis and optional LLM-powered deep inspection. Run this BEFORE installing or enabling any untrusted skill.
Features
- 6 analysis layers — pattern matching, AST/evasion, prompt injection, LLM deep analysis, alignment verification, meta-analysis
- 60+ detection rules — execution threats, credential theft, data exfiltration, obfuscation, behavioral signatures
- Context-aware scoring — reduces false positives for legitimate API skills
- ClawHub integration — scan skills directly from the registry by slug
- Multiple output modes — text report (default), `--json`, `--compact`, `--quiet`
- Exit codes — 0 for safe, 1 for risky (easy scripting integration)
When to Use
MANDATORY before installing or enabling:
- Skills from ClawHub (any skill not authored by you)
- Skills shared by other users or teams
- Skills from public repositories
- Any skill package you haven't personally reviewed
RECOMMENDED for periodic audits of already-installed skills.
Quick Start
```bash
# Scan a local skill directory
skill-scan scan /path/to/skill

# Scan a skill from ClawHub before installing it
skill-scan scan-hub some-skill-slug

# Batch scan all installed skills
skill-scan batch /path/to/skills-directory

# JSON output for programmatic use
skill-scan scan-hub some-skill-slug --json

# Quiet mode (just score + verdict)
skill-scan scan-hub some-skill-slug --quiet
```
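For the `--json` mode, a consumer can gate installs on the parsed report. The sketch below is illustrative only — the field names `risk` and `score` are assumptions, not documented output of skill-scan:

```python
import json

def decide(report: dict) -> str:
    """Map a parsed scan report to an action.

    Assumes hypothetical field names ("risk", "score"); check the real
    output of `skill-scan ... --json` before relying on them.
    """
    risk = report.get("risk", "CRITICAL")  # fail closed if the field is missing
    if risk == "LOW":
        return "install"
    if risk == "MEDIUM":
        return "ask-user"
    return "block"

# Hand-written example report (not real skill-scan output):
sample = json.loads('{"skill": "some-skill-slug", "score": 85, "risk": "LOW"}')
print(decide(sample))  # -> install
```

Failing closed (treating a missing field as CRITICAL) keeps a malformed report from slipping through.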
Risk Scoring
| Risk | Score | Action |
|---|---|---|
| LOW | 80-100 | Safe to install |
| MEDIUM | 50-79 | Review findings before installing |
| HIGH | 20-49 | Do NOT install — serious threats detected |
| CRITICAL | 0-19 | Do NOT install — multiple critical threats |
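The mapping in the table can be expressed as a small function; this is an illustrative sketch, not skill-scan's internal implementation:

```python
def risk_level(score: int) -> str:
    """Map a 0-100 scan score to a risk band, per the table above."""
    if score >= 80:
        return "LOW"
    if score >= 50:
        return "MEDIUM"
    if score >= 20:
        return "HIGH"
    return "CRITICAL"

print(risk_level(85))  # -> LOW
print(risk_level(35))  # -> HIGH
```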
Exit Codes
- `0` — LOW risk (safe to proceed)
- `1` — MEDIUM or higher (block installation)
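Because the exit code carries the verdict, a wrapper can gate installation on it. A hedged Python sketch (assumes `skill-scan` and `clawhub` are on PATH; the injectable `runner` exists purely so the gate is testable without the CLIs installed):

```python
import subprocess

def scan_ok(slug: str, runner=subprocess.run) -> bool:
    """Return True when skill-scan exits 0 (LOW risk)."""
    result = runner(["skill-scan", "scan-hub", slug, "--quiet"])
    return result.returncode == 0

def safe_install(slug: str, runner=subprocess.run) -> bool:
    """Install only if the scan passes; block otherwise."""
    if scan_ok(slug, runner):
        runner(["clawhub", "install", slug])
        return True
    return False
```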
LLM Options
```bash
skill-scan scan-hub slug --llm       # Always run deep analysis (layers 4-5)
skill-scan scan-hub slug --llm-auto  # LLM only if pattern scan finds MEDIUM+
skill-scan scan-hub slug --llm-only  # Skip patterns, LLM analysis only
```

Provider auto-detected from environment:

- `OPENAI_API_KEY` -> gpt-4o-mini
- `ANTHROPIC_API_KEY` -> claude-sonnet-4-5
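The environment detection can be pictured as a simple lookup. Note that the OpenAI-first precedence in this sketch is an assumption — the list above does not state which key wins when both are set:

```python
import os

def pick_provider(env=os.environ):
    """Resolve the LLM provider/model from environment variables (sketch)."""
    if env.get("OPENAI_API_KEY"):
        return ("openai", "gpt-4o-mini")
    if env.get("ANTHROPIC_API_KEY"):
        return ("anthropic", "claude-sonnet-4-5")
    return None  # no key: static analysis only
```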
For the Agent
When the user asks to install a skill from ClawHub, follow this workflow:
- Scan first — run `skill-scan scan-hub <slug>` BEFORE installing
- Check the result:
  - If LOW risk (exit code 0): proceed with installation
  - If MEDIUM risk: show findings to the user and ask whether to proceed
  - If HIGH or CRITICAL risk: block installation and show the report
- Report — always show the user the scan summary (score, risk level, finding count)
Alert Format
When a scan finds MEDIUM or higher risk:
```
🛡️ Skill-Scan Alert: {RISK_LEVEL} ({SCORE}/100)
Skill: {slug}
Findings: {count} issue(s) detected
Top threat: {highest severity finding title}
Action: Installation blocked pending review.
```
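Rendering the alert is plain string interpolation; the parameter names in this sketch are hypothetical, chosen to mirror the template's placeholders:

```python
def format_alert(risk_level, score, slug, count, top_threat):
    """Render the alert template above from scan results (names hypothetical)."""
    return (
        f"🛡️ Skill-Scan Alert: {risk_level} ({score}/100)\n"
        f"Skill: {slug}\n"
        f"Findings: {count} issue(s) detected\n"
        f"Top threat: {top_threat}\n"
        f"Action: Installation blocked pending review."
    )

print(format_alert("HIGH", 32, "some-skill-slug", 4, "Credential theft: wallet file access"))
```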
AGENTS.md Integration
To integrate skill-scan into your agent's workflow, add ONE of the following templates to your `AGENTS.md` (or equivalent agent instructions file).

Option A: Automatic Scanning (Recommended)
All ClawHub skill installs are automatically scanned before installation. No manual step needed.
Skill-Scan — Automatic Pre-Install Security Scanning
BEFORE installing any skill from ClawHub, you MUST scan it with skill-scan first. This is non-negotiable — never skip the scan.
Workflow
- When the user asks to install a skill (e.g. `clawhub install <slug>`), run the scan FIRST:

  ```bash
  skill-scan scan-hub <slug>
  ```

- Check the exit code and output:
  - Exit code 0 (LOW risk) — proceed with `clawhub install <slug>`
  - Exit code 1 (MEDIUM risk) — show findings to the user, ask whether to proceed
  - Exit code 1 (HIGH/CRITICAL risk) — BLOCK installation, show the full report
- Never install a HIGH or CRITICAL risk skill without explicit user override
- Always show the scan summary to the user before proceeding with installation
Periodic Audits
When asked to audit installed skills, batch scan the skills directory:
```bash
skill-scan batch {skillsDir}
```

LLM Deep Analysis
For high-stakes installs or when pattern scan returns MEDIUM, escalate to LLM analysis:
```bash
skill-scan scan-hub <slug> --llm-auto
```

Option B: Manual On-Demand Scanning
Skills are only scanned when the user explicitly asks. Installation proceeds without automatic checks.
Skill-Scan — On-Demand Skill Security Scanning
skill-scan is available to audit skill packages for security threats. Use it when the user requests a security scan.
Commands
- Scan a ClawHub skill: `skill-scan scan-hub <slug>`
- Scan a local skill: `skill-scan scan /path/to/skill`
- Batch scan installed skills: `skill-scan batch {skillsDir}`
- Deep analysis with LLM: add `--llm` or `--llm-auto` to any scan command
Risk Levels
- LOW (80-100) — safe, no significant threats
- MEDIUM (50-79) — review findings, use caution
- HIGH (20-49) — serious threats, do not install
- CRITICAL (0-19) — multiple critical threats, do not install
When the User Asks to Scan
- Run the appropriate scan command
- Show the scan summary (score, risk level, finding count)
- If MEDIUM+, list the top findings with severity and description
- Let the user decide what to do
Install-Time Setup
When this skill is first installed, ask the user:
How should skill-scan integrate with your workflow?

- Automatic (Recommended) — Every `clawhub install` is scanned first. HIGH/CRITICAL installs are blocked automatically.
- Manual — Scanning only happens when you explicitly ask for it.

Based on their choice, add the corresponding template (Option A or Option B above) to the project's `AGENTS.md`.
Detection Categories
- Execution threats — `eval()`, `exec()`, `child_process`, dynamic imports
- Credential theft — `.env` access, API keys, tokens, private keys, wallet files
- Data exfiltration — `fetch()`, `axios`, `requests`, sockets, webhooks
- Filesystem manipulation — write/delete/rename operations
- Obfuscation — Base64, hex, unicode encoding, string construction
- Prompt injection — jailbreaks, invisible characters, homoglyphs, roleplay framing, encoded instructions
- Behavioral signatures — compound patterns: data exfiltration, trojan skills, evasive malware, persistent backdoors
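To make the pattern-matching layer concrete, here is what a few rules could look like. These regexes are illustrative stand-ins, not skill-scan's actual 60+ rule set, and real rules need context-aware scoring to avoid false positives:

```python
import re

# Illustrative rules only; the real rule set is larger and context-aware.
RULES = {
    "execution-threat": re.compile(r"\b(eval|exec)\s*\(|child_process"),
    "credential-theft": re.compile(r"\.env\b|API[_ ]?KEY|PRIVATE KEY", re.IGNORECASE),
    "exfiltration": re.compile(r"\bfetch\s*\(|\baxios\b|\brequests\.(get|post)\b"),
}

def match_rules(source: str):
    """Return the rule categories that fire on a piece of skill source code."""
    return [name for name, pattern in RULES.items() if pattern.search(source)]

print(match_rules("data = eval(user_input)"))  # -> ['execution-threat']
```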
Requirements
- Python 3.10+
- `httpx>=0.27` (for LLM API calls only)
- API key only needed for `--llm` modes (static analysis is self-contained)
Related Skills
- input-guard — External input scanning
- memory-scan — Agent memory security
- guardrails — Security policy configuration