log-analysis


Purpose


This skill analyzes system and application logs to identify anomalies, security threats, and patterns indicative of breaches, supporting blue-team operations by providing actionable insights for incident response.

When to Use


Use this skill during forensic investigations, real-time monitoring, or post-incident reviews. Apply it when logs show unusual activity, such as repeated failed logins or unexpected network traffic, or for routine security audits in environments like cloud servers or on-premise systems.

Key Capabilities


  • Detect anomalies using machine learning-based pattern recognition on log data.
  • Identify threats like SQL injection attempts or malware indicators via signature matching.
  • Parse multiple log formats (e.g., syslog, Apache access logs) and extract metadata for correlation.
  • Generate reports with severity levels and recommendations for mitigation.
  • Filter logs by time, IP, or user ID to narrow down investigations.
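The parsing and filtering capabilities above can be sketched in plain Python. This is an illustrative stand-in, not the skill's actual implementation; the regexes and metadata field names are assumptions:

```python
import re

# Hypothetical sketch: extract metadata from syslog-style lines for correlation.
LOG_PATTERN = re.compile(
    r"(?P<month>\w{3}) +(?P<day>\d+) (?P<time>[\d:]+) (?P<host>\S+) (?P<msg>.*)"
)
IP_PATTERN = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")

def parse_line(line):
    """Return a metadata dict for one syslog line, or None if it doesn't match."""
    m = LOG_PATTERN.match(line)
    if not m:
        return None
    meta = m.groupdict()
    ip = IP_PATTERN.search(meta["msg"])
    meta["ip"] = ip.group(0) if ip else None
    return meta

def filter_by_ip(lines, ip):
    """Keep only entries mentioning the given IP, to narrow an investigation."""
    return [meta for meta in map(parse_line, lines) if meta and meta["ip"] == ip]

lines = [
    "Oct  3 12:00:01 web01 sshd[101]: Failed password from 192.168.1.1",
    "Oct  3 12:00:05 web01 sshd[102]: Accepted password from 10.0.0.7",
]
hits = filter_by_ip(lines, "192.168.1.1")
```

The same pattern extends to time- or user-based filters by matching on the other extracted fields.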

Usage Patterns


Invoke this skill via the CLI for quick analysis, or integrate it into scripts for automated workflows. Always provide log input paths or streams, and specify analysis parameters. For example, pipe logs from a file or API, then apply filters before running detection. Use environment variables for authentication, like $LOG_ANALYSIS_API_KEY, to secure API calls. Ensure logs are in plain text or JSON format for optimal parsing.
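A minimal automation wrapper for this usage pattern might look like the following; the flags come from this document, while the wrapper functions themselves are hypothetical:

```python
import os
import subprocess

def build_scan_command(log_path, threshold=0.8):
    """Assemble the CLI invocation described above (flags as documented)."""
    return [
        "log-analysis", "scan",
        "--file", log_path,
        "--anomaly", "true",
        "--threshold", str(threshold),
    ]

def run_scan(log_path, threshold=0.8):
    """Run the scan; $LOG_ANALYSIS_API_KEY is read from the environment,
    never hard-coded into the command line."""
    return subprocess.run(
        build_scan_command(log_path, threshold),
        env=dict(os.environ), capture_output=True, text=True,
    )

cmd = build_scan_command("/var/log/syslog", threshold=0.7)
```

Keeping command construction separate from execution makes the invocation easy to log and test before it ever runs.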

Common Commands/API


Use the CLI tool with commands like log-analysis scan for basic operations. For API integration, call endpoints like POST /api/v1/logs/analyze with a JSON payload.
  • CLI Example: Scan a local file for threats:
    log-analysis scan --file /var/log/syslog --anomaly true --output report.json
    This flags anomalies and saves the results to a file.
  • API Example: Send logs for analysis:
    curl -X POST https://api.openclaw.com/api/v1/logs/analyze \
      -H "Authorization: Bearer $LOG_ANALYSIS_API_KEY" \
      -d '{"logs": [{"line": "ERROR: Unauthorized access"}], "filters": {"ip": "192.168.1.1"}}'
    This analyzes the specified logs and returns JSON with any detected threats.
  • Common Flags: --file <path> for input, --anomaly for ML-based detection, --threshold 0.8 for the sensitivity level.
  • Config Format: Use JSON for custom rules, e.g.:
    {
      "rules": [
        {"pattern": "Failed login", "severity": "high"}
      ]
    }
    Load it via --config config.json.
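To make the config format concrete, here is a hedged sketch of how such a rules file could be applied. The real engine's matching semantics are not documented here, so this simply treats each pattern as a substring match:

```python
import json

# The same rules document shown above, inlined for the sketch.
CONFIG = """
{
  "rules": [
    {"pattern": "Failed login", "severity": "high"}
  ]
}
"""

def apply_rules(config_text, log_lines):
    """Match each rule's pattern against each line and tag hits with severity."""
    rules = json.loads(config_text)["rules"]
    findings = []
    for line in log_lines:
        for rule in rules:
            if rule["pattern"] in line:
                findings.append({"line": line, "severity": rule["severity"]})
    return findings

findings = apply_rules(CONFIG, [
    "Oct  3 12:00:01 web01 sshd: Failed login for root",
    "Oct  3 12:00:02 web01 cron: job finished",
])
```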

Integration Notes


Integrate with tools like the ELK Stack or Splunk by streaming logs via the API. Set $LOG_ANALYSIS_API_KEY as an environment variable for authentication. For example, in a Python script, import the SDK and authenticate:
    import os
    import log_analysis_sdk

    client = log_analysis_sdk.Client(api_key=os.environ['LOG_ANALYSIS_API_KEY'])
    response = client.analyze(logs_data)
Handle webhooks for real-time alerts by registering a callback URL. Ensure compatibility by matching log formats; convert non-standard logs using tools like jq.
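The jq-style conversion of non-standard logs can also be sketched in Python; the pipe-delimited input format and field layout below are hypothetical:

```python
import json

def to_json_lines(raw_lines, delimiter="|"):
    """Convert a hypothetical delimiter-separated log format into the JSON
    shape the analyzer expects ({"logs": [{"line": ...}]})."""
    entries = []
    for raw in raw_lines:
        stripped = raw.strip()
        # Keep the whole raw line; the split fields ride along as metadata.
        entries.append({"line": stripped, "fields": stripped.split(delimiter)})
    return json.dumps({"logs": entries})

payload = to_json_lines(["2024-01-01T00:00:00|web01|ERROR|Unauthorized access"])
```

The resulting payload can be posted to the analyze endpoint as-is.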

Error Handling


Check for errors like invalid log formats or authentication failures. Use try/except blocks in scripts:
    try:
        result = log_analysis.scan('--file invalid.log')
    except FileNotFoundError:
        print("Error: Log file not found. Verify the path and permissions.")
Common errors include 401 Unauthorized (a missing $LOG_ANALYSIS_API_KEY) and 400 Bad Request (malformed JSON). Log errors with timestamps, and retry transient issues up to 3 times with exponential backoff. Validate inputs before running commands, e.g., check that files exist with os.path.exists().
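The retry policy described above (up to 3 attempts with exponential backoff) could be implemented roughly like this; the flaky call below is a stand-in for any transient network failure:

```python
import time

def with_retries(call, max_attempts=3, base_delay=1.0):
    """Retry a transient-failure-prone call up to max_attempts times,
    sleeping base_delay * 2**attempt between tries (exponential backoff)."""
    for attempt in range(max_attempts):
        try:
            return call()
        except (ConnectionError, TimeoutError):
            if attempt == max_attempts - 1:
                raise  # exhausted: surface the error to the caller
            time.sleep(base_delay * (2 ** attempt))

# Simulated transient failure: succeeds on the third attempt.
attempts = []
def flaky():
    attempts.append(1)
    if len(attempts) < 3:
        raise ConnectionError("transient")
    return "ok"

result = with_retries(flaky, base_delay=0.01)
```

Permanent errors such as 401 Unauthorized should not be retried; only catch exception types that indicate transient conditions.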

Concrete Usage Examples


  1. Detect anomalies in server logs: Run
    log-analysis scan --file /var/log/auth.log --anomaly true
    to identify potential brute-force attacks. Review the output JSON for entries like {"event": "Failed login", "count": 5, "threat": "high"}, then correlate with firewall logs for further investigation.
  2. Integrate with a monitoring script: In a Bash script, fetch logs from a remote server and analyze:
    logs=$(curl http://server/logs.txt)
    echo "$logs" | log-analysis scan --stream --threshold 0.7
    This detects threats in real-time streams, outputting alerts for anomalies like unusual user activity, and pipes results to a SIEM for automated response.
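The correlation step in example 1 can be sketched as follows; note that the ip field in the report entries and the firewall log format are assumptions for illustration, not documented output:

```python
import json

# Hypothetical scan report: the documented fields plus an assumed "ip" field.
REPORT = json.dumps([
    {"event": "Failed login", "count": 5, "threat": "high", "ip": "192.168.1.1"},
])

# Hypothetical firewall log lines to correlate against.
FIREWALL_LOG = [
    "2024-01-01T00:00:01 DROP src=192.168.1.1 dst=10.0.0.5",
    "2024-01-01T00:00:02 ALLOW src=10.0.0.7 dst=10.0.0.5",
]

def correlate(report_json, fw_lines):
    """Return firewall lines that mention an IP flagged in the scan report."""
    flagged = {e["ip"] for e in json.loads(report_json) if e.get("ip")}
    return [line for line in fw_lines if any(ip in line for ip in flagged)]

matches = correlate(REPORT, FIREWALL_LOG)
```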

Graph Relationships


  • Related to: forensics (via shared tags for log parsing), security (for threat detection overlap), blue-team cluster (as a core component for operations).
  • Links: Connects to skills like "incident-response" for follow-up actions and "network-monitoring" for correlated data analysis.