bash-script-generator

Bash Script Generator


Overview


This skill provides a comprehensive workflow for generating production-ready bash scripts with best practices built-in. Generate scripts for system administration, text processing, API clients, automation workflows, and more with robust error handling, logging, argument parsing, and validation.

When to Use This Skill


Invoke this skill when:
  • Creating new bash scripts from scratch
  • Implementing shell automation or system administration tasks
  • Building command-line tools and utilities
  • Creating text processing workflows (log analysis, data transformation, etc.)
  • Converting manual command sequences into reusable scripts
  • Implementing deployment or build automation scripts
  • Creating cron jobs or scheduled tasks
  • The user asks to "create", "generate", "build", or "write" a bash script
  • Implementing scripts that use grep, awk, sed, or other text processing tools

Mandatory Steps (DO NOT SKIP)


The following steps are REQUIRED for every script generation. Skipping these steps will result in scripts that don't meet user needs or fail validation.


MANDATORY: Pre-Generation Requirements


BEFORE generating any script, you MUST complete these steps:

1. Clarify Requirements with User (REQUIRED)


Use AskUserQuestion tool if ANY of the following are unclear or unspecified:

| Ambiguity | Question to Ask |
|---|---|
| Log/data format unknown | "What format is the input data? (e.g., nginx combined log, JSON, CSV, custom)" |
| Large file handling | "Will this process large files (>100MB)? Should we optimize for memory/performance?" |
| Error handling preference | "How should errors be handled? (fail fast, continue with warnings, retry)" |
| Security context | "Will this script handle sensitive data or run with elevated privileges?" |
| Portability needs | "Does this need to run on different systems? (bash-specific vs POSIX sh)" |
| Output format unclear | "What format should the output be? (human-readable, JSON, CSV, etc.)" |

DO NOT skip this step. Assumptions lead to scripts that don't meet user needs.

2. Explain Your Approach (REQUIRED)


Before writing code, explain to the user:
  1. Script Architecture: What components and functions you'll create
  2. Tool Selection: Why you're choosing grep/awk/sed/etc. for specific tasks
    • Reference: docs/text-processing-guide.md for tool selection rationale
  3. Key Design Decisions: Any tradeoffs or assumptions you're making
  4. Customization Points: What the user might want to modify
Example pre-generation explanation:
I'll create a log analyzer with these components:

**Architecture:**
- Main function with argument parsing (getopts pattern)
- 4 analysis functions: analyze_ips(), analyze_status_codes(), etc.
- Logging functions for INFO/DEBUG/ERROR output
- Cleanup trap for signal handling

**Tool Selection:**
- awk for status code analysis (single-pass counting, as recommended in
  docs/text-processing-guide.md)
- grep for date filtering (simple pattern matching)
- sort | uniq -c for frequency counting

**Customization Points:**
- LOG_FORMAT variable for different nginx configurations
- TOP_N constant for number of results to show

Shall I proceed with this design?

3. Consider Using the Template (RECOMMENDED)


When to use the template:
  • Use template for quick starts on standard scripts with typical structure (CLI tools, automation scripts)
  • Manual generation is better for highly customized scripts with unique requirements or non-standard architectures
For standard scripts, invoke the template generator:

```bash
bash scripts/generate_script_template.sh standard output-script.sh
```
This ensures consistent structure with all required components pre-configured. The template includes:
  • Proper shebang and strict mode
  • Logging functions (debug, info, warn, error)
  • Error handling (die, check_command, validate_file)
  • Argument parsing boilerplate
  • Cleanup trap handlers
Then customize the generated template for your specific use case.


Script Generation Workflow


Follow this workflow when generating bash scripts. Adapt based on user needs:

Stage 1: Understand Requirements


Gather information about what the script needs to do:
  1. Script purpose:
    • What problem does it solve?
    • What tasks does it automate?
    • Who will use it (developers, ops, cron, CI/CD)?
  2. Functionality requirements:
    • Input sources (files, stdin, arguments, APIs)
    • Processing steps (text manipulation, system operations, etc.)
    • Output destinations (stdout, files, logs, APIs)
    • Expected data formats
  3. Shell type:
    • Bash-specific (modern systems, can use arrays, associative arrays, etc.)
    • POSIX sh (maximum portability, limited features)
    • Default to bash unless portability is explicitly required
  4. Argument parsing:
    • Command-line options needed
    • Required vs optional arguments
    • Help/usage text requirements
  5. Error handling requirements:
    • How should errors be handled? (fail fast, retry, graceful degradation)
    • Logging verbosity levels needed?
    • Exit codes required?
  6. Performance considerations:
    • Large file processing requirements
    • Parallel processing needs
    • Resource constraints
  7. Security requirements:
    • Input validation needs
    • Credential handling
    • Privilege requirements
Use AskUserQuestion if information is missing or unclear.

Stage 2: Architecture Planning


Plan the script structure based on requirements:
  1. Determine script components:
    • Functions needed
    • Configuration management approach
    • Logging strategy
    • Error handling approach
  2. Select appropriate tools:
    • grep for pattern matching and filtering
    • awk for structured text processing (CSV, logs, columnar data)
    • sed for stream editing and substitution
    • find for file system operations
    • curl/wget for HTTP operations
    • Built-in bash features when possible
  3. Plan error handling:
    • Use `set -euo pipefail` for strict mode (recommended)
    • Define error handling functions
    • Plan cleanup procedures (trap for EXIT, ERR signals)
  4. Plan logging:
    • Log levels needed (DEBUG, INFO, WARN, ERROR)
    • Output destinations (stderr for logs, stdout for data)
    • Log formatting

Stage 3: Generate Script Structure


Create the basic script structure:

1. **Shebang and header:**

```bash
#!/usr/bin/env bash
#
# Script Name: script-name.sh
# Description: Brief description of what the script does
# Author: Your Name
# Created: YYYY-MM-DD
```

2. **Strict mode (recommended for all scripts):**

```bash
set -euo pipefail
IFS=$'\n\t'
```

Explanation:
  • `set -e`: Exit on error
  • `set -u`: Exit on undefined variable
  • `set -o pipefail`: Exit if any command in pipeline fails
  • `IFS`: Set safe Internal Field Separator

3. **Script-level variables and constants:**

```bash
# Script directory and name
readonly SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
readonly SCRIPT_NAME="$(basename "${BASH_SOURCE[0]}")"

# Other constants
readonly DEFAULT_CONFIG_FILE="${SCRIPT_DIR}/config.conf"
readonly LOG_FILE="/var/log/myscript.log"
```

4. **Signal handlers for cleanup:**

```bash
# Cleanup function
cleanup() {
    local exit_code=$?
    # Add cleanup logic here
    # Remove temp files, release locks, etc.
    exit "${exit_code}"
}

# Set trap for cleanup
trap cleanup EXIT ERR INT TERM
```

Stage 4: Generate Core Functions


Generate essential functions based on requirements:

Logging Functions


```bash
# Logging functions
log_debug() {
    if [[ "${LOG_LEVEL:-INFO}" == "DEBUG" ]]; then
        echo "[DEBUG] $(date '+%Y-%m-%d %H:%M:%S') - $*" >&2
    fi
}

log_info() {
    echo "[INFO] $(date '+%Y-%m-%d %H:%M:%S') - $*" >&2
}

log_warn() {
    echo "[WARN] $(date '+%Y-%m-%d %H:%M:%S') - $*" >&2
}

log_error() {
    echo "[ERROR] $(date '+%Y-%m-%d %H:%M:%S') - $*" >&2
}

log_fatal() {
    echo "[FATAL] $(date '+%Y-%m-%d %H:%M:%S') - $*" >&2
    exit 1
}
```

Error Handling Functions


```bash
# Error handling
die() {
    log_error "$@"
    exit 1
}

# Check if command exists
check_command() {
    local cmd="$1"
    if ! command -v "${cmd}" &> /dev/null; then
        die "Required command '${cmd}' not found. Please install it and try again."
    fi
}

# Validate file exists and is readable
validate_file() {
    local file="$1"
    [[ -f "${file}" ]] || die "File not found: ${file}"
    [[ -r "${file}" ]] || die "File not readable: ${file}"
}
```

Argument Parsing Function


Using getopts for simple options:

```bash
# Parse command-line arguments
parse_args() {
    while getopts ":hvf:o:d" opt; do
        case ${opt} in
            h)
                usage
                exit 0
                ;;
            v)
                VERBOSE=true
                ;;
            f)
                INPUT_FILE="${OPTARG}"
                ;;
            o)
                OUTPUT_FILE="${OPTARG}"
                ;;
            d)
                LOG_LEVEL="DEBUG"
                ;;
            \?)
                echo "Invalid option: -${OPTARG}" >&2
                usage
                exit 1
                ;;
            :)
                echo "Option -${OPTARG} requires an argument" >&2
                usage
                exit 1
                ;;
        esac
    done
    shift $((OPTIND - 1))
}
```

Usage/Help Function


```bash
# Display usage information
usage() {
    cat << EOF
Usage: ${SCRIPT_NAME} [OPTIONS] [ARGUMENTS]

Description: Brief description of what the script does

Options:
    -h        Show this help message and exit
    -v        Enable verbose output
    -f FILE   Input file path
    -o FILE   Output file path
    -d        Enable debug logging

Examples:
    ${SCRIPT_NAME} -f input.txt -o output.txt
    ${SCRIPT_NAME} -v -f data.log
EOF
}
```

Stage 5: Generate Business Logic


Implement the core functionality based on requirements:
  1. For text processing tasks, use appropriate tools:
    • grep: Pattern matching, line filtering
    • awk: Field extraction, calculations, formatted output
    • sed: Stream editing, substitutions, deletions
  2. For system administration, include:
    • Validation of prerequisites
    • Backup procedures
    • Rollback capabilities
    • Progress indicators
  3. For API clients, include:
    • HTTP error handling
    • Retry logic
    • Authentication handling
    • Response parsing
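The retry requirement above can be sketched with a small helper. This is an illustrative sketch, not part of the skill's template: the `retry` name, the backoff doubling, and the `curl -fsS` flags in the usage comment are assumptions you would adapt per script.

```shell
# Hypothetical retry helper with exponential backoff (illustrative sketch)
retry() {
    local max_attempts="$1"; shift
    local attempt=1 delay=1
    until "$@"; do
        if (( attempt >= max_attempts )); then
            echo "Command failed after ${attempt} attempts: $*" >&2
            return 1
        fi
        echo "Attempt ${attempt} failed; retrying in ${delay}s..." >&2
        sleep "${delay}"
        delay=$(( delay * 2 ))
        attempt=$(( attempt + 1 ))
    done
}

# Usage sketch: -f makes curl fail on HTTP errors so retry() re-runs the call
# retry 3 curl -fsS "https://api.example.com/health"
```

Passing the command as arguments (`"$@"`) rather than a string avoids re-parsing and quoting pitfalls.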

Stage 6: Generate Main Function


Create the main execution flow:

```bash
# Main function
main() {
    # Parse arguments
    parse_args "$@"

    # Validate prerequisites
    check_command "grep"
    check_command "awk"

    # Validate inputs
    [[ -n "${INPUT_FILE:-}" ]] || die "Input file not specified. Use -f option."
    validate_file "${INPUT_FILE}"

    log_info "Starting processing..."

    # Main logic here
    # ...

    log_info "Processing completed successfully"
}

# Execute main function
main "$@"
```

Stage 7: Add Documentation and Comments


Add comprehensive comments:
  1. Function documentation:

```bash
#######################################
# Brief description of what function does
# Globals:
#   VARIABLE_NAME
# Arguments:
#   $1 - Description of first argument
#   $2 - Description of second argument
# Outputs:
#   Writes results to stdout
# Returns:
#   0 if successful, non-zero on error
#######################################
function_name() {
    # Implementation
}
```

  2. **Inline comments** for complex logic
  3. **Usage examples** in the header or usage function

Stage 8: Validate Generated Script


ALWAYS validate the generated script using the devops-skills:bash-script-validator skill:
Steps:
1. Generate the bash script
2. Invoke devops-skills:bash-script-validator skill with the script file
3. Review validation results
4. Fix any issues identified (syntax, security, best practices, portability)
5. Re-validate until all checks pass
6. Provide summary of generated script and validation status
Validation checklist:
  • Syntax is correct (bash -n passes)
  • ShellCheck warnings addressed
  • Security issues resolved (no command injection, eval with variables, etc.)
  • Variables properly quoted
  • Error handling implemented
  • Functions follow single responsibility principle
  • Script follows best practices from documentation
If validation fails, fix issues and re-validate until all checks pass.
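The first two checklist items can be exercised locally before invoking the validator skill. A minimal sketch, assuming `shellcheck` may or may not be on PATH; the `/tmp/demo-script.sh` file name is illustrative:

```shell
# Write a minimal script to demonstrate the checks (illustrative file name)
cat > /tmp/demo-script.sh << 'EOF'
#!/usr/bin/env bash
set -euo pipefail
echo "hello"
EOF

# Syntax-only check: bash parses the script without executing it
bash -n /tmp/demo-script.sh && echo "syntax OK"

# Static analysis with ShellCheck, if installed
if command -v shellcheck > /dev/null; then
    shellcheck /tmp/demo-script.sh
fi
```

`bash -n` catches parse errors only; ShellCheck is what flags quoting, word-splitting, and portability issues.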

Text Processing Tool Selection Guide


Choose the right tool for the job:

Use grep when:


  • Searching for patterns in files
  • Filtering lines that match/don't match patterns
  • Counting matches
  • Simple line extraction

Examples:

```bash
# Find error lines in log file
grep "ERROR" application.log

# Find lines NOT containing pattern
grep -v "DEBUG" application.log

# Case-insensitive search with line numbers
grep -in "warning" *.log

# Extended regex pattern
grep -E "(error|fail|critical)" app.log
```

Use awk when:


  • Processing structured data (CSV, TSV, logs with fields)
  • Performing calculations on data
  • Extracting specific fields
  • Generating formatted reports
  • Complex conditional logic

Examples:

```bash
# Extract specific fields (e.g., 2nd and 4th columns)
awk '{print $2, $4}' data.txt

# Sum values in a column
awk '{sum += $3} END {print sum}' numbers.txt

# Process CSV with custom delimiter
awk -F',' '{print $1, $3}' data.csv

# Conditional processing
awk '$3 > 100 {print $1, $3}' data.txt

# Formatted output
awk '{printf "Name: %-20s Age: %d\n", $1, $2}' people.txt
```

Use sed when:


  • Performing substitutions
  • Deleting lines matching patterns
  • In-place file editing
  • Stream editing
  • Simple transformations

Examples:

```bash
# Simple substitution (first occurrence per line)
sed 's/old/new/' file.txt

# Global substitution (all occurrences)
sed 's/old/new/g' file.txt

# In-place editing
sed -i 's/old/new/g' file.txt

# Delete lines matching pattern
sed '/pattern/d' file.txt

# Replace only on lines matching pattern
sed '/ERROR/s/old/new/g' log.txt

# Multiple commands
sed -e 's/foo/bar/g' -e 's/baz/qux/g' file.txt
```

Combining tools in pipelines:


```bash
# grep to filter, awk to extract
grep "ERROR" app.log | awk '{print $1, $5}'

# sed to clean, awk to process
sed 's/[^[:print:]]//g' data.txt | awk '{sum += $2} END {print sum}'

# Multiple stages: strip everything through the request field,
# leaving the status code as $1, then count 2xx responses
grep "GET" access.log \
    | sed 's/.*HTTP\/[0-9.]*" //' \
    | awk '$1 >= 200 && $1 < 300 {count++} END {print count}'
```

Best Practices for Generated Scripts


Security


  1. **Always quote variables:**

```bash
# Good
rm "${file}"
grep "${pattern}" "${input_file}"

# Bad - prone to word splitting and globbing
rm $file
grep $pattern $input_file
```

  2. **Validate all inputs:**

```bash
# Validate file paths
[[ "${input_file}" =~ ^[a-zA-Z0-9/_.-]+$ ]] || die "Invalid file path"

# Validate numeric inputs
[[ "${count}" =~ ^[0-9]+$ ]] || die "Count must be numeric"
```

  3. **Avoid eval with user input:**

```bash
# Never do this
eval "${user_input}"

# Instead, use case statements or validated inputs
case "${command}" in
    start) do_start ;;
    stop)  do_stop ;;
    *)     die "Invalid command" ;;
esac
```

  4. **Use $() instead of backticks:**

```bash
# Good - more readable, can nest
result=$(command)
outer=$(inner $(another_command))

# Bad - hard to read, can't nest
result=`command`
```

Performance


  1. **Use built-ins when possible:**

```bash
# Good - bash [[ ]] keyword: no word splitting, supports pattern matching
if [[ -f "${file}" ]]; then

# Works, but [ is the legacy test command with fewer features
# and more quoting pitfalls
if [ -f "${file}" ]; then
```

  2. **Avoid useless use of cat (UUOC):**

```bash
# Good
grep "pattern" file.txt
awk '{print $1}' file.txt

# Bad - unnecessary cat
cat file.txt | grep "pattern"
cat file.txt | awk '{print $1}'
```

  3. **Process in a single pass when possible:**

```bash
# Good - single awk call
awk '/ERROR/ {errors++} /WARN/ {warns++} END {print errors, warns}' log.txt

# Less efficient - multiple greps
errors=$(grep -c "ERROR" log.txt)
warns=$(grep -c "WARN" log.txt)
```

Maintainability


  1. Use functions for reusable code
  2. Keep functions focused (single responsibility)
  3. Use meaningful variable names
  4. Add comments for complex logic
  5. Group related functionality
  6. Use readonly for constants
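The readonly and single-responsibility points above can be sketched briefly; the names and the `/var/lib/myapp` path here are illustrative assumptions, not conventions mandated by this skill:

```shell
# Constants marked readonly fail loudly on accidental reassignment
readonly MAX_RETRIES=3
readonly DATA_DIR="/var/lib/myapp"   # illustrative path

# One focused function per task, with a descriptive name
count_error_lines() {
    local file="$1"
    grep -c "ERROR" "${file}" || true   # grep -c exits 1 on zero matches
}
```

With `set -e` in effect, the `|| true` keeps a zero-match count from aborting the script, since `grep -c` still prints `0` in that case.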

Portability (when targeting POSIX sh)


  1. **Avoid bash-specific features:**

```bash
# Bash-specific (arrays)
arr=(one two three)

# POSIX alternative
set -- one two three

# Bash-specific ([[ ]])
if [[ -f "${file}" ]]; then

# POSIX alternative
if [ -f "${file}" ]; then
```

  2. **Test with sh:**

```bash
sh -n script.sh  # Syntax check
```

Common Script Patterns


Pattern 1: Simple Command-Line Tool


```bash
#!/usr/bin/env bash
set -euo pipefail

usage() {
    cat << EOF
Usage: ${0##*/} [OPTIONS] FILE

Process FILE and output results.

Options:
    -h        Show this help
    -v        Verbose output
    -o FILE   Output file (default: stdout)
EOF
}

main() {
    local verbose=false
    local output_file=""
    local input_file=""

    while getopts ":hvo:" opt; do
        case ${opt} in
            h) usage; exit 0 ;;
            v) verbose=true ;;
            o) output_file="${OPTARG}" ;;
            *) echo "Invalid option: -${OPTARG}" >&2; usage; exit 1 ;;
        esac
    done
    shift $((OPTIND - 1))

    input_file="${1:-}"
    [[ -n "${input_file}" ]] || { echo "Error: FILE required" >&2; usage; exit 1; }
    [[ -f "${input_file}" ]] || { echo "Error: File not found: ${input_file}" >&2; exit 1; }

    # Process file
    if [[ -n "${output_file}" ]]; then
        process_file "${input_file}" > "${output_file}"
    else
        process_file "${input_file}"
    fi
}

process_file() {
    local file="$1"
    # Processing logic here
    cat "${file}"
}

main "$@"
```

Pattern 2: Text Processing Script


```bash
#!/usr/bin/env bash
set -euo pipefail

# Process log file: extract errors, count by type
process_log() {
    local log_file="$1"

    echo "Error Summary:"
    echo "=============="

    # Extract errors and count by type
    grep "ERROR" "${log_file}" \
        | sed 's/.*ERROR: //' \
        | sed 's/ -.*//' \
        | sort \
        | uniq -c \
        | sort -rn \
        | awk '{printf "  %-30s %d\n", $2, $1}'

    echo ""
    echo "Total errors: $(grep -c "ERROR" "${log_file}")"
}

main() {
    [[ $# -eq 1 ]] || { echo "Usage: $0 LOG_FILE" >&2; exit 1; }
    [[ -f "$1" ]] || { echo "Error: File not found: $1" >&2; exit 1; }

    process_log "$1"
}

main "$@"
```

Pattern 3: Batch Processing with Parallel Execution


```bash
#!/usr/bin/env bash
set -euo pipefail

# Process a single file
process_file() {
    local file="$1"
    local output="${file}.processed"

    # Processing logic
    sed 's/old/new/g' "${file}" > "${output}"
    echo "Processed: ${file} -> ${output}"
}

# Export function for parallel execution
export -f process_file

main() {
    local input_dir="${1:-.}"
    local max_jobs="${2:-4}"

    # Find all files and process in parallel
    find "${input_dir}" -type f -name "*.txt" -print0 \
        | xargs -0 -P "${max_jobs}" -I {} bash -c 'process_file "$@"' _ {}
}

main "$@"
```

Helper Scripts


The `scripts/` directory contains automation tools to assist with generation:

generate_script_template.sh


Generates a bash script from the standard template with proper structure, error handling, and logging.

Usage:

```bash
bash scripts/generate_script_template.sh standard [SCRIPT_NAME]

# Example:
bash scripts/generate_script_template.sh standard myscript.sh
```

The script will copy the standard template and make it executable. You can then customize it for your specific needs.

Documentation Resources


Core Bash Scripting


docs/bash-scripting-guide.md


  • Comprehensive bash scripting guide
  • Bash vs POSIX sh differences
  • Strict mode and error handling strategies
  • Functions, scope, and variable handling
  • Arrays and associative arrays
  • Parameter expansion techniques
  • Process substitution and command substitution
  • Best practices and modern patterns

docs/script-patterns.md


  • Common bash script patterns and templates
  • Argument parsing patterns (getopts, manual)
  • Configuration file handling
  • Logging frameworks and approaches
  • Parallel processing patterns
  • Lock file management
  • Signal handling and cleanup
  • Retry logic and backoff strategies

docs/generation-best-practices.md


  • Guidelines for generating quality scripts
  • Code organization principles
  • Naming conventions
  • Documentation standards
  • Testing approaches for bash scripts
  • Portability considerations
  • Security best practices
  • Performance optimization techniques
  • 生成高质量脚本的指南
  • 代码组织原则
  • 命名规范
  • 文档标准
  • bash脚本的测试方法
  • 可移植性考虑
  • 安全最佳实践
  • 性能优化技巧

Text Processing Tools

文本处理工具

docs/text-processing-guide.md

docs/text-processing-guide.md

  • When to use grep vs awk vs sed
  • Combining tools effectively in pipelines
  • Performance optimization for large files
  • Common text processing patterns
  • Real-world examples and use cases
  • 何时使用grep vs awk vs sed
  • 在管道中有效组合工具
  • 大文件处理的性能优化
  • 常见文本处理模式
  • 真实场景示例和用例
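To illustrate combining the three tools in one pipeline, the sketch below filters hypothetical access-log lines (format assumed here: IP METHOD PATH STATUS) with grep, aggregates per-IP error counts with awk, and decorates the output with sed:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Sample data standing in for an access log (hypothetical format).
log_data='10.0.0.1 GET /index.html 200
10.0.0.2 GET /missing 404
10.0.0.1 POST /api 500
10.0.0.3 GET /index.html 200'

# grep narrows to 4xx/5xx responses, awk counts per IP in a single pass,
# sed prefixes each summary line.
printf '%s\n' "${log_data}" \
    | grep -E ' (4|5)[0-9]{2}$' \
    | awk '{count[$1]++} END {for (ip in count) print count[ip], ip}' \
    | sed 's/^/errors: /'
```

Each tool does the one job it is best at; pushing the filtering into grep keeps the awk program small, and awk's single pass avoids re-reading the input.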

Tool-Specific References

工具特定参考

Note: The following references are available in the devops-skills:bash-script-validator skill and are referenced by this skill:
  • bash-reference.md - Bash features and syntax
  • grep-reference.md - grep patterns and usage
  • awk-reference.md - AWK text processing
  • sed-reference.md - sed stream editing
  • regex-reference.md - Regular expressions (BRE vs ERE)
注意:以下参考文档在devops-skills:bash-script-validator技能中可用,本技能会引用这些文档:
  • bash-reference.md - Bash特性和语法
  • grep-reference.md - grep模式和用法
  • awk-reference.md - AWK文本处理
  • sed-reference.md - sed流编辑
  • regex-reference.md - 正则表达式(BRE vs ERE)

Example Scripts

示例脚本

Located in the examples/ directory:
位于 examples/ 目录:

log-analyzer.sh

log-analyzer.sh

Demonstrates grep, awk, and sed usage for log file analysis. Shows pattern matching, field extraction, and statistical analysis. This example illustrates:
  • Using grep to filter log entries
  • Using awk for field extraction and formatting
  • Using sed for text transformations
  • Generating summary reports with proper error handling
展示了grep、awk和sed在日志文件分析中的用法。包含模式匹配、字段提取和统计分析。该示例说明了:
  • 使用grep过滤日志条目
  • 使用awk进行字段提取和格式化
  • 使用sed进行文本转换
  • 生成带正确错误处理的摘要报告
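A stripped-down version of that structure might look like the following sketch. The sample log format and function names here are illustrative, not taken verbatim from examples/log-analyzer.sh:

```shell
#!/usr/bin/env bash
set -euo pipefail

# validate_file: fail fast if the log file is missing or unreadable.
validate_file() {
    [[ -f $1 && -r $1 ]] || { echo "cannot read: $1" >&2; return 1; }
}

# summarize: single-pass awk report of total lines and per-status counts.
summarize() {
    awk '{status[$NF]++; total++}
         END {
             print "total requests:", total
             for (s in status) print "status " s ": " status[s]
         }' "$1"
}

log_file="$(mktemp)"
printf '%s\n' 'GET / 200' 'GET /x 404' 'GET / 200' > "${log_file}"
validate_file "${log_file}" && summarize "${log_file}"
rm -f "${log_file}"
```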

Template

模板

Located in the assets/templates/ directory:
位于 assets/templates/ 目录:

standard-template.sh

standard-template.sh

Production-ready template with comprehensive features:
  • Proper shebang (#!/usr/bin/env bash) and strict mode
  • Logging functions (debug, info, warn, error)
  • Error handling (die, check_command, validate_file)
  • Argument parsing with getopts
  • Cleanup trap handlers
  • Usage documentation
Use this template as a starting point and customize based on your specific requirements.
生产可用的模板,具备全面功能:
  • 正确的shebang(#!/usr/bin/env bash)和严格模式
  • 日志函数(debug、info、warn、error)
  • 错误处理(die、check_command、validate_file)
  • 使用getopts的参数解析
  • 清理陷阱处理器
  • 使用文档
以此模板为起点,根据具体需求进行自定义。
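A condensed sketch of that shape is shown below. The real assets/templates/standard-template.sh is more complete; the option letters and log format here are assumptions for illustration:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Leveled logging to stderr, so stdout stays clean for data.
log()  { printf '[%s] %s\n' "$1" "${*:2}" >&2; }
info() { log INFO "$@"; }
warn() { log WARN "$@"; }
die()  { log ERROR "$@"; exit 1; }

# Cleanup runs on every exit path (success, error, or signal).
cleanup() { :; }   # release temp files, locks, etc.
trap cleanup EXIT

usage() { echo "Usage: ${0##*/} [-v] [-o FILE] ARG..."; }

main() {
    # Local OPTIND makes main safe to call more than once.
    local verbose=0 out_file="" OPTIND=1
    while getopts ':vo:h' opt; do
        case "${opt}" in
            v) verbose=1 ;;
            o) out_file=${OPTARG} ;;
            h) usage; exit 0 ;;
            *) usage >&2; exit 2 ;;
        esac
    done
    shift $((OPTIND - 1))
    info "verbose=${verbose} out_file=${out_file} args=$*"
}

main "$@"
```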

Integration with devops-skills:bash-script-validator

与devops-skills:bash-script-validator的集成

After generating any bash script, automatically invoke the devops-skills:bash-script-validator skill to ensure quality:
Steps:
1. Generate the bash script following the workflow above
2. Invoke devops-skills:bash-script-validator skill with the script file
3. Review validation results (syntax, ShellCheck, security, performance)
4. Fix any issues identified
5. Re-validate until all checks pass
6. Provide summary of generated script and validation status
This ensures all generated scripts:
  • Have correct syntax
  • Follow bash best practices
  • Avoid common security issues
  • Are optimized for performance
  • Include proper error handling
  • Are well-documented
生成任何bash脚本后,自动调用devops-skills:bash-script-validator技能以确保质量:
步骤:
1. 按照上述工作流生成bash脚本
2. 调用devops-skills:bash-script-validator技能并传入脚本文件
3. 查看验证结果(语法、ShellCheck、安全、性能)
4. 修复发现的所有问题
5. 重新验证直到所有检查通过
6. 提供生成脚本的摘要和验证状态
这确保所有生成的脚本:
  • 语法正确
  • 遵循bash最佳实践
  • 避免常见安全问题
  • 性能优化
  • 包含正确的错误处理
  • 文档完善
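Steps 2–5 of the loop above can be approximated locally with `bash -n` for syntax checking and ShellCheck for linting. The sketch below assumes shellcheck may or may not be installed and skips it gracefully when absent:

```shell
#!/usr/bin/env bash
set -euo pipefail

script="${1:-}"
# Fall back to a tiny self-test script when no argument is given.
if [[ -z ${script} ]]; then
    script="$(mktemp)"
    printf '%s\n' '#!/usr/bin/env bash' 'echo ok' > "${script}"
fi

# Step 1: syntax check without executing the script.
bash -n "${script}" && echo "syntax: OK"

# Step 2: lint with ShellCheck when available.
if command -v shellcheck >/dev/null 2>&1; then
    shellcheck "${script}" && echo "shellcheck: OK"
else
    echo "shellcheck: not installed, skipped"
fi
```

In the skill workflow the validator runs automatically; this local loop is only a stand-in for understanding what the checks cover.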

Communication Guidelines (MANDATORY)

沟通指南(强制)

These are NOT optional. Follow these guidelines for every script generation:
这些要求不是可选的。每次生成脚本时都必须遵循以下指南:

Before Generation (see "MANDATORY: Pre-Generation Requirements" above)

生成前(见上方“强制:生成前准备要求”)

  1. ✅ Ask clarifying questions using AskUserQuestion
  2. ✅ Explain your approach and get user confirmation
  3. ✅ Consider using the standard template
  1. ✅ 使用AskUserQuestion工具询问澄清问题
  2. ✅ 说明您的实现方案并获得用户确认
  3. ✅ 考虑使用标准模板

During Generation (REQUIRED)

生成期间(必须执行)

You MUST explicitly cite documentation when using patterns or making tool selections. This helps users understand the rationale and learn best practices.
  1. In the approach explanation, cite documentation for:
    • Tool selection rationale: "Using awk for field extraction (recommended in docs/text-processing-guide.md for structured data)"
    • Pattern choices: "Using the getopts pattern from docs/script-patterns.md"
    • Best practices: "Following strict mode guidelines from docs/bash-scripting-guide.md"
  2. In generated code comments, reference documentation:
    # Using single-pass awk processing (per docs/text-processing-guide.md)
    awk '{ip[$1]++} END {for (i in ip) print ip[i], i}' "${log_file}"
  3. Minimum citation requirement: at least two documentation references must appear in:
    • The approach explanation (before code generation)
    • The Post-Generation Summary
在使用模式或选择工具时,您必须明确引用文档。这有助于用户理解决策依据并学习最佳实践。
  1. 在方案说明中,引用文档说明:
    • 工具选择依据:“使用awk进行字段提取(docs/text-processing-guide.md中推荐用于结构化数据)”
    • 模式选择:“使用docs/script-patterns.md中的getopts模式”
    • 最佳实践:“遵循docs/bash-scripting-guide.md中的严格模式指南”
  2. 在生成的代码注释中,引用文档:
    # 使用单次遍历awk处理(参考docs/text-processing-guide.md)
    awk '{ip[$1]++} END {for (i in ip) print ip[i], i}' "${log_file}"
  3. 最低引用要求:至少在以下位置出现2处文档引用:
    • 方案说明(代码生成前)
    • 生成后摘要

After Generation (REQUIRED)

生成后(必须执行)

Provide a Post-Generation Summary that includes:
提供生成后摘要,包含以下内容:

Generated Script Summary

生成脚本摘要

File: path/to/script.sh
Architecture:
  • [List main functions and their purposes]
Tool Selection:
  • grep: [why used]
  • awk: [why used]
  • sed: [why used, or "not needed"]
Key Features:
  • [Feature 1]
  • [Feature 2]
Customization Points:
  • VARIABLE_NAME: [what to change]
  • function_name(): [when to modify]
Usage Examples:
./script.sh --help
./script.sh -v input.log
./script.sh -o report.txt input.log
Validation Status: ✅ Passed ShellCheck / ❌ Issues found (fixing...)
Documentation References:
  • docs/text-processing-guide.md (tool selection)
  • docs/script-patterns.md (argument parsing)

This summary ensures users understand what was generated and how to use it.
文件: path/to/script.sh
架构:
  • [列出主要函数及其用途]
工具选择:
  • grep: [使用原因]
  • awk: [使用原因]
  • sed: [使用原因,或“不需要”]
关键特性:
  • [特性1]
  • [特性2]
自定义点:
  • VARIABLE_NAME: [修改内容]
  • function_name(): [修改场景]
使用示例:
./script.sh --help
./script.sh -v input.log
./script.sh -o report.txt input.log
验证状态: ✅ 通过ShellCheck / ❌ 发现问题(正在修复...)
文档参考:
  • docs/text-processing-guide.md(工具选择)
  • docs/script-patterns.md(参数解析)

此摘要确保用户理解生成的脚本及其使用方法。

Resources

资源

Official Documentation

官方文档

Best Practices Guides

最佳实践指南

Internal References

内部参考

All documentation is included in the docs/ directory for offline reference and context loading.

Note: This skill automatically validates generated scripts using the devops-skills:bash-script-validator skill, providing Claude with comprehensive feedback to ensure high-quality, production-ready bash scripts.
所有文档都包含在 docs/ 目录中,支持离线参考和上下文加载。

注意:本技能会自动使用devops-skills:bash-script-validator技能验证生成的脚本,为Claude提供全面反馈,确保生成高质量、生产可用的bash脚本。