Comprehensive toolkit for generating best-practice bash scripts that follow current standards and conventions. Use this skill when creating new bash scripts, implementing shell automation or text-processing workflows, or building production-ready command-line tools.
```bash
npx skill4agent add akin-ozer/cc-devops-skills bash-script-generator
```

| Ambiguity | Question to Ask |
|---|---|
| Log/data format unknown | "What format is the input data? (e.g., nginx combined log, JSON, CSV, custom)" |
| Large file handling | "Will this process large files (>100MB)? Should we optimize for memory/performance?" |
| Error handling preference | "How should errors be handled? (fail fast, continue with warnings, retry)" |
| Security context | "Will this script handle sensitive data or run with elevated privileges?" |
| Portability needs | "Does this need to run on different systems? (bash-specific vs POSIX sh)" |
| Output format unclear | "What format should the output be? (human-readable, JSON, CSV, etc.)" |
Consult `docs/text-processing-guide.md` for tool selection. Example design proposal:

I'll create a log analyzer with these components:
**Architecture:**
- Main function with argument parsing (getopts pattern)
- 4 analysis functions: analyze_ips(), analyze_status_codes(), etc.
- Logging functions for INFO/DEBUG/ERROR output
- Cleanup trap for signal handling
**Tool Selection:**
- awk for status code analysis (single-pass counting, as recommended in
docs/text-processing-guide.md)
- grep for date filtering (simple pattern matching)
- sort | uniq -c for frequency counting
**Customization Points:**
- LOG_FORMAT variable for different nginx configurations
- TOP_N constant for number of results to show
Shall I proceed with this design?

```bash
bash scripts/generate_script_template.sh standard output-script.sh
```

Generated scripts start with strict mode (`set -euo pipefail`) and a standard header:

```bash
#!/usr/bin/env bash
#
# Script Name: script-name.sh
# Description: Brief description of what the script does
# Author: Your Name
# Created: YYYY-MM-DD
#

set -euo pipefail   # -e: exit on error, -u: unset variables are errors, -o pipefail: fail whole pipeline
IFS=$'\n\t'         # word-split only on newline and tab

# Script directory and name
readonly SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
readonly SCRIPT_NAME="$(basename "${BASH_SOURCE[0]}")"
# Other constants
readonly DEFAULT_CONFIG_FILE="${SCRIPT_DIR}/config.conf"
readonly LOG_FILE="/var/log/myscript.log"

# Cleanup function
cleanup() {
local exit_code=$?
# Add cleanup logic here
# Remove temp files, release locks, etc.
exit "${exit_code}"
}
# Set trap for cleanup
trap cleanup EXIT ERR INT TERM

# Logging functions
log_debug() {
if [[ "${LOG_LEVEL:-INFO}" == "DEBUG" ]]; then
echo "[DEBUG] $(date '+%Y-%m-%d %H:%M:%S') - $*" >&2
fi
}
log_info() {
echo "[INFO] $(date '+%Y-%m-%d %H:%M:%S') - $*" >&2
}
log_warn() {
echo "[WARN] $(date '+%Y-%m-%d %H:%M:%S') - $*" >&2
}
log_error() {
echo "[ERROR] $(date '+%Y-%m-%d %H:%M:%S') - $*" >&2
}
log_fatal() {
echo "[FATAL] $(date '+%Y-%m-%d %H:%M:%S') - $*" >&2
exit 1
}

# Error handling
die() {
log_error "$@"
exit 1
}
# Check if command exists
check_command() {
local cmd="$1"
if ! command -v "${cmd}" &> /dev/null; then
die "Required command '${cmd}' not found. Please install it and try again."
fi
}
# Validate file exists and is readable
validate_file() {
local file="$1"
[[ -f "${file}" ]] || die "File not found: ${file}"
[[ -r "${file}" ]] || die "File not readable: ${file}"
}

# Parse command-line arguments
parse_args() {
while getopts ":hvf:o:d" opt; do
case ${opt} in
h )
usage
exit 0
;;
v )
VERBOSE=true
;;
f )
INPUT_FILE="${OPTARG}"
;;
o )
OUTPUT_FILE="${OPTARG}"
;;
d )
LOG_LEVEL="DEBUG"
;;
\? )
echo "Invalid option: -${OPTARG}" >&2
usage
exit 1
;;
: )
echo "Option -${OPTARG} requires an argument" >&2
usage
exit 1
;;
esac
done
shift $((OPTIND -1))
}

# Display usage information
usage() {
cat << EOF
Usage: ${SCRIPT_NAME} [OPTIONS] [ARGUMENTS]
Description:
Brief description of what the script does
Options:
-h Show this help message and exit
-v Enable verbose output
-f FILE Input file path
-o FILE Output file path
-d Enable debug logging
Examples:
${SCRIPT_NAME} -f input.txt -o output.txt
${SCRIPT_NAME} -v -f data.log
EOF
}

# Main function
main() {
# Parse arguments
parse_args "$@"
# Validate prerequisites
check_command "grep"
check_command "awk"
# Validate inputs
[[ -n "${INPUT_FILE:-}" ]] || die "Input file not specified. Use -f option."
validate_file "${INPUT_FILE}"
log_info "Starting processing..."
# Main logic here
# ...
log_info "Processing completed successfully"
}
# Execute main function
main "$@"
```
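The cleanup trap in the template above pairs naturally with temporary files. A minimal sketch (the `demo` function and its messages are illustrative, not part of the template):

```bash
#!/usr/bin/env bash
set -euo pipefail

# demo() runs in a subshell, so its EXIT trap fires when the subshell ends,
# removing the temp file before the caller continues.
demo() (
  tmp_file="$(mktemp)"
  cleanup() { rm -f "${tmp_file}"; }
  trap cleanup EXIT
  echo "working in ${tmp_file}" >&2
  echo "${tmp_file}"        # hand the path back to the caller
)

tmp_path="$(demo)"
# By the time $(demo) returns, the trap has already deleted the file
[[ ! -e "${tmp_path}" ]] && echo "temp file already cleaned up"
```

The same pattern scales to lock files and background jobs: register the cleanup before creating the resource, so an early failure still triggers it.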
Document each function with a structured header comment:

```bash
#######################################
# Brief description of what function does
# Globals:
# VARIABLE_NAME
# Arguments:
# $1 - Description of first argument
# $2 - Description of second argument
# Outputs:
# Writes results to stdout
# Returns:
# 0 if successful, non-zero on error
#######################################
function_name() {
# Implementation
}
```

Steps:
1. Generate the bash script
2. Invoke devops-skills:bash-script-validator skill with the script file
3. Review validation results
4. Fix any issues identified (syntax, security, best practices, portability)
5. Re-validate until all checks pass
6. Provide summary of generated script and validation status
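Steps 2-5 above can be driven by a small wrapper. A sketch, assuming ShellCheck may or may not be installed (`validate_script` is a hypothetical helper, not part of the validator skill):

```bash
#!/usr/bin/env bash
set -euo pipefail

# Validate a script: bash syntax check, then optional static analysis.
validate_script() {
  local script="$1"
  bash -n "${script}" || { echo "syntax check failed" >&2; return 1; }
  # Run shellcheck only if it is installed (assumed optional)
  if command -v shellcheck &> /dev/null; then
    shellcheck "${script}" || { echo "shellcheck failed" >&2; return 1; }
  fi
  echo "validation passed: ${script}"
}

# Demo: validate a minimal known-good script
demo_script="$(mktemp)"
printf '#!/usr/bin/env bash\necho ok\n' > "${demo_script}"
validate_script "${demo_script}"
rm -f "${demo_script}"
```

Re-run the wrapper after each fix until it prints the success line.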
```bash
# Find error lines in log file
grep "ERROR" application.log
# Find lines NOT containing pattern
grep -v "DEBUG" application.log
# Case-insensitive search with line numbers
grep -in "warning" *.log
# Extended regex pattern
grep -E "(error|fail|critical)" app.log
```

```bash
# Extract specific fields (e.g., 2nd and 4th columns)
awk '{print $2, $4}' data.txt
# Sum values in a column
awk '{sum += $3} END {print sum}' numbers.txt
# Process CSV with custom delimiter
awk -F',' '{print $1, $3}' data.csv
# Conditional processing
awk '$3 > 100 {print $1, $3}' data.txt
# Formatted output
awk '{printf "Name: %-20s Age: %d\n", $1, $2}' people.txt
```

```bash
# Simple substitution (first occurrence)
sed 's/old/new/' file.txt
# Global substitution (all occurrences)
sed 's/old/new/g' file.txt
# In-place editing
sed -i 's/old/new/g' file.txt
# Delete lines matching pattern
sed '/pattern/d' file.txt
# Replace only on lines matching pattern
sed '/ERROR/s/old/new/g' log.txt
# Multiple commands
sed -e 's/foo/bar/g' -e 's/baz/qux/g' file.txt
```

```bash
# grep to filter, awk to extract
grep "ERROR" app.log | awk '{print $1, $5}'
# sed to clean, awk to process
sed 's/[^[:print:]]//g' data.txt | awk '{sum += $2} END {print sum}'
# Multiple stages
cat access.log \
| grep "GET" \
| sed 's/.*HTTP\/[0-9.]*" //' \
| awk '$1 >= 200 && $1 < 300 {count++} END {print count}'
```
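A self-contained way to sanity-check the grep-to-awk pattern, generating sample data inline (the log lines are made up for the demo):

```bash
#!/usr/bin/env bash
set -euo pipefail

# Build a tiny sample log, then apply the grep | awk pattern above.
log="$(mktemp)"
cat > "${log}" << 'EOF'
2024-01-01 ERROR disk full on /dev/sda1
2024-01-01 INFO heartbeat ok
2024-01-02 ERROR timeout calling upstream
EOF

# grep filters the lines, awk extracts fields 1 and 3
# prints:
#   2024-01-01 disk
#   2024-01-02 timeout
grep "ERROR" "${log}" | awk '{print $1, $3}'
rm -f "${log}"
```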
```bash
# Good
rm "${file}"
grep "${pattern}" "${input_file}"
# Bad - prone to word splitting and globbing
rm $file
grep $pattern $input_file
```

```bash
# Validate file paths
[[ "${input_file}" =~ ^[a-zA-Z0-9/_.-]+$ ]] || die "Invalid file path"
# Validate numeric inputs
[[ "${count}" =~ ^[0-9]+$ ]] || die "Count must be numeric"
```

```bash
# Never do this
eval "${user_input}"
# Instead, use case statements or validated inputs
case "${command}" in
start) do_start ;;
stop) do_stop ;;
*) die "Invalid command" ;;
esac
```

```bash
# Good - more readable, can nest
result=$(command)
outer=$(inner $(another_command))
# Bad - hard to read, can't nest
result=`command`
```

```bash
# Good - uses bash built-in
if [[ -f "${file}" ]]; then
# Slower - spawns external process
if [ -f "${file}" ]; then
```

```bash
# Good
grep "pattern" file.txt
awk '{print $1}' file.txt
# Bad - unnecessary cat
cat file.txt | grep "pattern"
cat file.txt | awk '{print $1}'
```

```bash
# Good - single awk call
awk '/ERROR/ {errors++} /WARN/ {warns++} END {print errors, warns}' log.txt
# Less efficient - multiple greps
errors=$(grep -c "ERROR" log.txt)
warns=$(grep -c "WARN" log.txt)
```

```bash
# Bash-specific (arrays)
arr=(one two three)
# POSIX alternative
set -- one two three
# Bash-specific ([[ ]])
if [[ -f "${file}" ]]; then
# POSIX alternative
if [ -f "${file}" ]; then
```

Quick syntax check without executing:

```bash
sh -n script.sh   # Syntax check
```
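Combining the POSIX alternatives above into one strictly POSIX `sh` sketch (the file path and item names are illustrative):

```sh
#!/bin/sh
set -eu

# Positional parameters stand in for a bash array
set -- one two three
count=0
for item in "$@"; do
  count=$((count + 1))
done
printf 'items: %d, last: %s\n' "$count" "$item"

# Single-bracket test replaces [[ ]]
file="/dev/null"
if [ -e "$file" ]; then
  printf 'exists: %s\n' "$file"
fi
```

Everything here runs under `dash` or `busybox sh` as well as bash, which is the practical test for POSIX portability.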
```bash
#!/usr/bin/env bash
set -euo pipefail
usage() {
cat << EOF
Usage: ${0##*/} [OPTIONS] FILE
Process FILE and output results.
Options:
-h Show this help
-v Verbose output
-o FILE Output file (default: stdout)
EOF
}
main() {
local verbose=false
local output_file=""
local input_file=""
while getopts ":hvo:" opt; do
case ${opt} in
h) usage; exit 0 ;;
v) verbose=true ;;
o) output_file="${OPTARG}" ;;
*) echo "Invalid option: -${OPTARG}" >&2; usage; exit 1 ;;
esac
done
shift $((OPTIND - 1))
input_file="${1:-}"
[[ -n "${input_file}" ]] || { echo "Error: FILE required" >&2; usage; exit 1; }
[[ -f "${input_file}" ]] || { echo "Error: File not found: ${input_file}" >&2; exit 1; }
# Process file
if [[ -n "${output_file}" ]]; then
process_file "${input_file}" > "${output_file}"
else
process_file "${input_file}"
fi
}
process_file() {
local file="$1"
# Processing logic here
cat "${file}"
}
main "$@"
```

```bash
#!/usr/bin/env bash
set -euo pipefail
# Process log file: extract errors, count by type
process_log() {
local log_file="$1"
echo "Error Summary:"
echo "=============="
# Extract errors and count by type
grep "ERROR" "${log_file}" \
| sed 's/.*ERROR: //' \
| sed 's/ -.*//' \
| sort \
| uniq -c \
| sort -rn \
| awk '{printf " %-30s %d\n", $2, $1}'
echo ""
echo "Total errors: $(grep -c "ERROR" "${log_file}")"
}
main() {
[[ $# -eq 1 ]] || { echo "Usage: $0 LOG_FILE" >&2; exit 1; }
[[ -f "$1" ]] || { echo "Error: File not found: $1" >&2; exit 1; }
process_log "$1"
}
main "$@"
```

```bash
#!/usr/bin/env bash
set -euo pipefail
# Process a single file
process_file() {
local file="$1"
local output="${file}.processed"
# Processing logic
sed 's/old/new/g' "${file}" > "${output}"
echo "Processed: ${file} -> ${output}"
}
# Export function for parallel execution
export -f process_file
main() {
local input_dir="${1:-.}"
local max_jobs="${2:-4}"
# Find all files and process in parallel
find "${input_dir}" -type f -name "*.txt" -print0 \
| xargs -0 -P "${max_jobs}" -I {} bash -c 'process_file "$@"' _ {}
}
main "$@"
```

The bundled `scripts/` directory provides a template generator:

```bash
bash scripts/generate_script_template.sh standard [SCRIPT_NAME]
```

Example:

```bash
bash scripts/generate_script_template.sh standard myscript.sh
```

Reference templates live in `examples/` and `assets/templates/`; every template begins with `#!/usr/bin/env bash`.

Steps:
1. Generate the bash script following the workflow above
2. Invoke devops-skills:bash-script-validator skill with the script file
3. Review validation results (syntax, ShellCheck, security, performance)
4. Fix any issues identified
5. Re-validate until all checks pass
6. Provide summary of generated script and validation status

Reference the bundled guides (`docs/text-processing-guide.md`, `docs/script-patterns.md`, `docs/bash-scripting-guide.md`) when explaining tool choices, for example:

```bash
# Using single-pass awk processing (per docs/text-processing-guide.md)
awk '{ip[$1]++} END {for (i in ip) print ip[i], i}' "${log_file}"
```

## Generated Script Summary
**File:** path/to/script.sh
**Architecture:**
- [List main functions and their purposes]
**Tool Selection:**
- grep: [why used]
- awk: [why used]
- sed: [why used, or "not needed"]
**Key Features:**
- [Feature 1]
- [Feature 2]
**Customization Points:**
- `VARIABLE_NAME`: [what to change]
- `function_name()`: [when to modify]
**Usage Examples:**
```bash
./script.sh --help
./script.sh -v input.log
./script.sh -o report.txt input.log
```

This summary ensures users understand what was generated and how to use it.
## Resources
### Official Documentation
- [GNU Bash Manual](https://www.gnu.org/software/bash/manual/bash.html)
- [GNU grep Manual](https://www.gnu.org/software/grep/manual/grep.html)
- [GNU awk Manual](https://www.gnu.org/software/gawk/manual/gawk.html)
- [GNU sed Manual](https://www.gnu.org/software/sed/manual/sed.html)
- [POSIX Shell Specification](https://pubs.opengroup.org/onlinepubs/9699919799/)
- [ShellCheck](https://www.shellcheck.net/)
### Best Practices Guides
- [Google Shell Style Guide](https://google.github.io/styleguide/shellguide.html)
- [Bash Best Practices](https://bertvv.github.io/cheat-sheets/Bash.html)
- [Minimal Safe Bash Script Template](https://betterdev.blog/minimal-safe-bash-script-template/)
### Internal References
All documentation is included in the `docs/` directory for offline reference and context loading.
---
**Note**: This skill automatically validates generated scripts using the devops-skills:bash-script-validator skill, providing Claude with comprehensive feedback to ensure high-quality, production-ready bash scripts.