cron-job-auditor


Cron Job Auditor Skill


Operator Context


This skill operates as an operator for cron script auditing workflows, configuring Claude's behavior for deterministic, checklist-driven static analysis. It implements the Systematic Inspection architectural pattern -- discover scripts, audit against best practices, report findings -- with Domain Intelligence embedded in cron-specific reliability patterns.

Hardcoded Behaviors (Always Apply)


  • Read-Only: Only read and analyze script files; never execute them
  • CLAUDE.md Compliance: Read and follow repository CLAUDE.md before auditing
  • Pattern-Based Detection: Use regex for reliable, reproducible checks
  • Structured Output: Produce machine-parseable PASS/FAIL/WARN results
  • Severity Classification: Every finding gets CRITICAL, HIGH, MEDIUM, or LOW
  • No Auto-Fix: Report problems with recommendations; do not modify scripts

Default Behaviors (ON unless disabled)


  • Full Checklist: Run all 9 best-practice checks on every script
  • Actionable Recommendations: Provide specific code fixes for every failure
  • Score Calculation: Report pass/total as percentage
  • Recursive Discovery: Search `scripts/`, `cron/`, `jobs/` directories for `.sh` files
  • Shebang Validation: Verify scripts start with `#!/bin/bash` or equivalent

Optional Behaviors (OFF unless enabled)


  • Strict Mode: Treat MEDIUM/LOW findings as failures (raise exit code)
  • Custom Patterns: Add project-specific checks beyond the standard 9
  • Crontab Schedule Analysis: Parse crontab entries for scheduling conflicts
  • JSON Output: Emit results as JSON instead of human-readable report
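Strict mode's effect on the exit code could be sketched as below. This is a hedged illustration only: the variable names and the specific code values (0/1/2) are assumptions, not part of the skill specification.

```shell
#!/bin/bash
# Illustrative exit-code policy (assumption): CRITICAL/HIGH failures always
# fail the audit; strict mode additionally raises the code for MEDIUM/LOW.
strict=1       # optional behavior: OFF (0) unless enabled
criticals=0    # count of CRITICAL/HIGH failures
warns=2        # count of MEDIUM/LOW findings

if [ "$criticals" -gt 0 ]; then
  exit_code=2                    # hard failures always fail the audit
elif [ "$strict" -eq 1 ] && [ "$warns" -gt 0 ]; then
  exit_code=1                    # strict mode: MEDIUM/LOW also raise the code
else
  exit_code=0
fi
echo "audit exit code: $exit_code"
```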

What This Skill CAN Do


  • Detect missing error handling, logging, lock files, and cleanup traps
  • Check for explicit PATH/environment setup (cron has minimal defaults)
  • Identify scripts vulnerable to concurrent execution
  • Verify log rotation prevents unbounded disk growth
  • Provide copy-paste code snippets to fix every finding
  • Audit multiple scripts in a single pass with aggregate scoring

What This Skill CANNOT Do


  • Execute scripts or validate runtime behavior
  • Parse crontab scheduling syntax (focus is script content)
  • Check external dependencies or verify services are running
  • Test notification delivery (email, webhook, Slack)
  • Analyze complex control flow beyond pattern matching
  • Replace a full shell linter (shellcheck) for syntax issues


Instructions


Phase 1: DISCOVER


Goal: Locate all cron/scheduled scripts to audit.
Step 1: Identify target scripts
If the user provides specific paths, use those. Otherwise search:
scripts/*.sh, cron/*.sh, jobs/*.sh, bin/*.sh
Also check for scripts referenced in crontab files, Makefiles, or CI configs.
Step 2: Validate targets
For each discovered file:
  • Confirm it exists and is readable
  • Check it has a shell shebang (`#!/bin/bash`, `#!/bin/sh`, `#!/usr/bin/env bash`)
  • Skip non-shell files (Python cron jobs, etc.) with a note
Step 3: Log discovery results

Scripts Found

找到的脚本

  1. scripts/daily_backup.sh (bash, 45 lines)
  2. cron/cleanup.sh (bash, 22 lines)
  3. jobs/sync_data.sh (SKIPPED: Python script)

**Gate**: At least one auditable shell script identified. Proceed only when gate passes.
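The DISCOVER phase above can be sketched as one pass that globs the conventional directories and validates shebangs. The `discover` function name and the exact shebang list are illustrative assumptions.

```shell
#!/bin/bash
# Sketch of DISCOVER: glob conventional locations, validate shebang, skip
# non-shell files with a note. Directories mirror the defaults listed above.
discover() {
  for f in scripts/*.sh cron/*.sh jobs/*.sh bin/*.sh; do
    [ -f "$f" ] || continue                 # unmatched globs stay literal
    case "$(head -n 1 "$f")" in
      '#!/bin/bash'*|'#!/bin/sh'*|'#!/usr/bin/env bash'*)
        echo "AUDIT $f" ;;
      *)
        echo "SKIP $f (no shell shebang)" ;;
    esac
  done
}

# Usage: run from the repository root, e.g. `discover`.
```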

Phase 2: AUDIT


Goal: Run every check against every script. No shortcuts.
Step 1: Read each script fully
Read the entire file content. Do not sample or skip sections.
Step 2: Run the 9-point checklist
| # | Check | Patterns | Severity |
|---|-------|----------|----------|
| 1 | Error handling | `set -e`, `set -o errexit`, `\|\| exit` | CRITICAL |
| 2 | Exit code checking | `$?`, `if [ $? -eq`, `&& ... \|\|` | HIGH |
| 3 | Logging with timestamps | `>> *.log`, `$(date)`, `date +` | HIGH |
| 4 | Log rotation | `find -mtime -delete`, `logrotate`, `tail -n` | MEDIUM |
| 5 | Working directory | `cd "$(dirname"`, `SCRIPT_DIR=`, absolute paths | HIGH |
| 6 | PATH environment | `PATH=`, `export PATH`, `source *env` | MEDIUM |
| 7 | Lock file / concurrency | `.lock`, `flock`, `.pid`, lock file check | HIGH |
| 8 | Cleanup on exit | `trap ... EXIT`, `trap ... cleanup`, `rm -rf *tmp` | MEDIUM |
| 9 | Failure notification | `mail -s`, `curl *webhook`, `notify`, `alert` | LOW |
For each check, record:
  • PASS with line number where pattern found, OR
  • FAIL/WARN with specific recommendation
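A single checklist item can be sketched as a reproducible grep check that reports the matching line number, as the recording rules above require. The `check_error_handling` helper and its exact pattern set are illustrative assumptions, not a fixed implementation.

```shell
#!/bin/bash
# Sketch of check #1 (error handling) as a pattern-based, reproducible check.
# Reports PASS with the first matching line number, or FAIL with a fix.
check_error_handling() {
  local script="$1"
  local pattern='set -e|set -o errexit|\|\| exit'
  local match
  match=$(grep -En "$pattern" "$script" | head -n 1)
  if [ -n "$match" ]; then
    echo "[PASS] Error handling (line ${match%%:*})"
  else
    echo "[FAIL] Error handling: add 'set -euo pipefail' near the top"
  fi
}

# Usage: check_error_handling scripts/daily_backup.sh
```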
Step 3: Calculate score
Score = passed / total_checks * 100
Classify scripts: 90-100% Excellent, 70-89% Good, 50-69% Needs Work, <50% Critical.
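The scoring step can be sketched with shell integer math. The rounding trick is an added assumption; the 7/9 input mirrors the sample report in Phase 3.

```shell
#!/bin/bash
# Score = passed / total * 100, rounded to the nearest integer, then
# classified into the bands from Step 3.
passed=7
total=9
score=$(( (passed * 100 + total / 2) / total ))   # 7/9 -> 78
if   [ "$score" -ge 90 ]; then label="Excellent"
elif [ "$score" -ge 70 ]; then label="Good"
elif [ "$score" -ge 50 ]; then label="Needs Work"
else                           label="Critical"
fi
echo "SCORE: $passed/$total (${score}%) - $label"
```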
Gate: All 9 checks run against every script. No checks skipped. Proceed only when gate passes.

Phase 3: REPORT


Goal: Produce structured, actionable audit output.
Step 1: Format per-script results
```
CRON JOB AUDIT: scripts/daily_backup.sh
==================================================
  [PASS] Error handling (line 3)
  [PASS] Logging with timestamps (line 12)
  [FAIL] Lock file: No concurrent run prevention
  [WARN] PATH environment: PATH not explicitly set

SCORE: 7/9 (78%) - Good
```
Step 2: Provide recommendations
For every FAIL and WARN, provide a specific code snippet the user can paste:

Recommendation: Add lock file


```bash
LOCK_FILE="/tmp/daily_backup.lock"
exec 200>"$LOCK_FILE"
flock -n 200 || { echo "Already running"; exit 0; }
trap "rm -f $LOCK_FILE" EXIT
```

**Step 3: Produce aggregate summary**

If auditing multiple scripts:

```
AGGREGATE SUMMARY
Scripts audited: 4
Average score: 72%
Critical issues: 2 (missing error handling)
Most common gap: Lock files (3/4 scripts missing)
```

**Gate**: Every finding has a recommendation. Report is complete. Audit is done.
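The aggregate average can be computed with plain shell arithmetic. The four per-script scores below are hypothetical inputs chosen to reproduce the 72% average in the sample summary.

```shell
#!/bin/bash
# Sketch: average score across audited scripts using integer arithmetic.
# The individual scores are illustrative placeholders.
scores="85 60 78 65"
sum=0
n=0
for s in $scores; do
  sum=$((sum + s))
  n=$((n + 1))
done
echo "Average score: $((sum / n))%"
```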

---

Examples


Example 1: Single Script Audit


User says: "Audit the backup cron script"
Actions:
  1. Read `scripts/backup.sh`, verify shebang (DISCOVER)
  2. Run 9-point checklist, record PASS/FAIL per check (AUDIT)
  3. Format report with score and recommendations (REPORT)
Result: Structured report with actionable fixes

Example 2: Repository-Wide Audit


User says: "Check all our cron jobs for best practices"
Actions:
  1. Glob for `.sh` files in `scripts/`, `cron/`, `jobs/` (DISCOVER)
  2. Audit each script against full checklist (AUDIT)
  3. Per-script reports plus aggregate summary (REPORT)
Result: Comprehensive audit with prioritized remediation list


Error Handling


Error: "No Shell Scripts Found"


Cause: Scripts in unexpected locations, or cron jobs written in Python/Ruby
Solution:
  1. Ask user for explicit paths
  2. Search broader: `**/*.sh` across the entire repository
  3. Check crontab entries for referenced file paths
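The broadened fallback search in step 2 might look like the following; skipping VCS internals is an added assumption.

```shell
#!/bin/bash
# Sketch: repository-wide fallback search for shell scripts,
# excluding .git internals.
find . -type f -name '*.sh' -not -path './.git/*'
```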

Error: "Script Has No Shebang"


Cause: Script relies on default shell interpreter
Solution:
  1. Still audit the script (treat as bash)
  2. Add finding: "Missing shebang line" as MEDIUM severity
  3. Recommend adding `#!/bin/bash` or `#!/usr/bin/env bash`

Error: "Regex Produces False Positive"


Cause: Pattern matches in comments, strings, or unrelated context
Solution:
  1. Verify match by reading surrounding lines for context
  2. Check if match is inside a comment (`# ...`) and exclude
  3. Report the finding but note reduced confidence
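Comment exclusion (step 2) can be sketched as a second grep pass over the numbered matches; the `grep_code` helper name is hypothetical, and this only catches whole-line comments.

```shell
#!/bin/bash
# Sketch: keep only numbered matches whose line does not start with '#'.
# Filters out whole-line comments; inline comments would need more context.
grep_code() {
  grep -En "$2" "$1" | grep -Ev '^[0-9]+:[[:space:]]*#' || true
}

# Usage: grep_code scripts/daily_backup.sh 'flock'
```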

Error: "Script Uses Non-Standard Patterns"


Cause: Custom error handling, logging frameworks, or wrapper functions
Solution:
  1. Check if script sources a common library file
  2. Read the sourced file for the missing patterns
  3. If patterns exist in sourced libraries, mark as PASS with note
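Collecting sourced files (steps 1-2) can be sketched with grep and awk. The `list_sourced` helper is hypothetical and only handles simple `source`/`.` lines with a literal path as the first argument.

```shell
#!/bin/bash
# Sketch: list files pulled in via `source` or `.` so their contents can
# be scanned for the missing patterns too.
list_sourced() {
  grep -E '^[[:space:]]*(source|\.)[[:space:]]+' "$1" | awk '{print $2}'
}

# Usage: list_sourced scripts/daily_backup.sh
```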


Anti-Patterns


Anti-Pattern 1: Executing Scripts to Test Them


What it looks like: Running the cron script to see if it "works"
Why wrong: Cron scripts may delete data, send emails, or modify production state
Do instead: Static analysis only. Read the file, match patterns, report.

Anti-Pattern 2: Skipping Checks Because Script Is "Simple"


What it looks like: "This is just a 5-line script, no need for lock files"
Why wrong: Simple scripts grow. Missing basics cause production incidents.
Do instead: Run all 9 checks regardless of script size.

Anti-Pattern 3: Recommending Over-Engineering


What it looks like: Suggesting Prometheus alerting for a log cleanup script
Why wrong: Recommendations should match script scope and complexity
Do instead: Provide proportional fixes. Lock file yes, monitoring framework no.

Anti-Pattern 4: Ignoring Sourced Dependencies


What it looks like: Marking FAIL because `set -e` is in a sourced common.sh
Why wrong: Many teams use shared library files sourced at script start
Do instead: Check `source` and `.` commands, read sourced files for patterns.

Anti-Pattern 5: Reporting Without Recommendations


What it looks like: "FAIL: No error handling" with no suggested fix
Why wrong: Findings without fixes create work without guidance
Do instead: Every FAIL/WARN must include a paste-ready code snippet.

References


This skill uses these shared patterns:
  • Anti-Rationalization - Prevents shortcut rationalizations
  • Verification Checklist - Pre-completion checks

Domain-Specific Anti-Rationalization


| Rationalization | Why It's Wrong | Required Action |
|---|---|---|
| "Script is too simple to audit" | Simple scripts cause outages too | Run full 9-point checklist |
| "It works in production already" | Working ≠ reliable under failure | Audit for failure-mode handling |
| "Lock files are overkill" | Concurrent cron runs cause data corruption | Always check for concurrency safety |
| "Logging slows things down" | Debugging blind cron failures wastes hours | Verify logging with timestamps |

Best Practices Reference


```bash
#!/bin/bash
set -euo pipefail                               # Error handling
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
cd "$SCRIPT_DIR"                                # Working directory
PATH=/usr/local/bin:/usr/bin:/bin               # Explicit PATH
LOCK="/tmp/$(basename "$0").lock"               # Lock file
exec 200>"$LOCK"
flock -n 200 || { echo "Already running"; exit 0; }
LOG="logs/$(basename "$0" .sh)_$(date +%Y%m%d).log"
exec > >(tee -a "$LOG") 2>&1                    # Logging
echo "$(date): Starting"
trap 'rm -f "$LOCK" /tmp/mytmp_*' EXIT          # Cleanup
find logs -name "*.log" -mtime +30 -delete      # Log rotation

# ... actual work ...

if [ $? -ne 0 ]; then                           # Failure notification
    echo "FAILED" | mail -s "Cron Alert" admin@example.com
fi
```