log-coverage-analyzer

This skill performs comprehensive log coverage analysis for code repositories. It identifies logging deficiencies in call chains and detects high-frequency log risks that may impact performance.

Analysis Principles


Log Level Definitions


| Log Level | Behavior | Use Case |
| --- | --- | --- |
| LOGE | Always prints, never lost | Critical: must exist on all error return paths |
| LOGI | Always prints, may be lost if triggered at high frequency at the same location | Important: success paths, key operations |
| LOGD | Only prints when debug switch enabled | Optional: debug information only |
| LOGW | Only prints when debug switch enabled | Optional: warning information only |

Core Analysis Rules


  1. Log Coverage Rule: Each branch in a function call chain must have LOGE or LOGI level logging
  2. High-Frequency Risk Rule: Functions that may be called frequently (loops, timers, callbacks, event handlers) with LOGE/LOGI logs pose performance risks
  3. Error Path Rule: All error return paths MUST have LOGE logging
  4. Context Rule: All logs must include sufficient context information (IDs, states, parameters)
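To make the rules concrete, here is a minimal sketch (hypothetical Python, not part of the skill itself) that applies rules 1 and 3 to a single branch record produced by a parser:

```python
# Hypothetical branch record: the log levels seen in the branch, plus whether
# the branch ends in an error return.
def check_branch(levels, is_error_return):
    """Apply coverage rules 1 and 3 to one branch; return a list of findings."""
    findings = []
    # Rule 3: error return paths MUST carry LOGE.
    if is_error_return and "LOGE" not in levels:
        findings.append("error path missing LOGE (high priority)")
    # Rule 1: every branch needs at least one LOGE or LOGI.
    if not {"LOGE", "LOGI"} & set(levels):
        findings.append("branch missing LOGE/LOGI")
    return findings

print(check_branch(["LOGD"], is_error_return=True))
print(check_branch(["LOGI"], is_error_return=False))  # → []
```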

Execution Workflow


Step 00: Create Analysis Plan


Description: Create detailed analysis plan before starting
Actions:
  • Use `TodoWrite` tool to create task list
  • Plan analysis scope:
    • Source file discovery and filtering
    • Function call chain extraction
    • Log coverage analysis per branch
    • High-frequency risk detection
    • Report generation
Output: Analysis plan created via `TodoWrite` tool

Step 01: Discover Source Files


Description: Find all relevant source files in the repository
Actions:
  • Use `Glob` tool to find source files based on language patterns:
    • C/C++: `**/*.cpp`, `**/*.cc`, `**/*.cxx`, `**/*.h`, `**/*.hpp`
    • Java: `**/*.java`
    • Python: `**/*.py`
    • JavaScript/TypeScript: `**/*.js`, `**/*.ts`
    • Go: `**/*.go`
    • Rust: `**/*.rs`
  • Exclude test files if specified (patterns: `**/test/**`, `**/tests/**`, `**/*_test.*`)
  • Exclude build directories (patterns: `**/build/**`, `**/out/**`, `**/target/**`)
Output: List of source files to analyze
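The discovery and exclusion logic above can be sketched as follows (hypothetical Python; the skill itself uses the Glob tool, and `discover` and its pattern lists are illustrative assumptions):

```python
from fnmatch import fnmatch

# Illustrative patterns mirroring the step's exclude lists.
EXCLUDE_TESTS = ["*/test/*", "*/tests/*", "*_test.*"]
EXCLUDE_BUILD = ["*/build/*", "*/out/*", "*/target/*"]
CPP_SUFFIXES = (".cpp", ".cc", ".cxx", ".h", ".hpp")

def discover(paths, exclude_tests=True):
    """Filter repo-relative paths down to analyzable C/C++ sources."""
    excludes = EXCLUDE_BUILD + (EXCLUDE_TESTS if exclude_tests else [])
    kept = []
    for p in paths:
        norm = "./" + p  # leading separator so */dir/* also matches top-level dirs
        if p.endswith(CPP_SUFFIXES) and not any(fnmatch(norm, pat) for pat in excludes):
            kept.append(p)
    return kept

files = ["src/session.cpp", "src/tests/session_test.cpp",
         "build/gen.cpp", "include/session.h", "README.md"]
print(discover(files))
```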

Step 02: Scan Logging Patterns


Description: Scan all source files for logging macros/functions
Actions:
  • Use `Grep` tool to find log statements with patterns:
    • Case-insensitive search for: `LOG[DEIW]`, `HILOG[DEIW]`, `ALOG[DEIW]`
    • Also search for: `\.log[deiw]\(`, `Log\.[deiw]`, `LOG\.`
  • Count occurrences per file
  • Identify log level distribution
Output: Log statistics per file
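A minimal sketch of the scan, assuming the macro families listed above (hypothetical Python; the skill itself performs this with the Grep tool):

```python
import re
from collections import Counter

# Case-insensitive pattern covering the macro families listed above.
LOG_RE = re.compile(r"\b(?:HILOG|ALOG|LOG)([DEIW])\s*\(", re.IGNORECASE)

def scan(source):
    """Count log statements per level in one file's text."""
    return dict(Counter(m.group(1).upper() for m in LOG_RE.finditer(source)))

sample = '''
HILOGE("open failed: %{public}d", err);
HILOGI("session created");
LOGD("raw buffer: %p", buf);
'''
print(scan(sample))
```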

Step 03: Extract Functions and Call Chains


Description: Parse source files to extract function definitions and call relationships
Actions:
  • For each source file:
    • Identify function definitions (various patterns per language)
    • Extract function bodies including all branches
    • Identify function calls within each function
    • Build call chain graph
  • Map branch conditions (if/else, switch/case, try/catch, early returns)
Output: Function list with call relationships
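A rough line-based sketch of function extraction and call-graph building for C++ (hypothetical Python; a real pass would need a proper parser to handle templates, macros, and nested braces):

```python
import re

# Very rough C++ function-definition matcher: return-type words, name, args, '{'.
FUNC_RE = re.compile(r"^\s*(?:[\w:<>*&]+\s+)+(\w+)\s*\([^;{]*\)\s*\{", re.MULTILINE)
CALL_RE = re.compile(r"\b(\w+)\s*\(")

def call_graph(source, known):
    """Map each defined function to the known functions it calls."""
    graph = {}
    defs = list(FUNC_RE.finditer(source))
    for i, m in enumerate(defs):
        # Treat everything up to the next definition as this function's body.
        body_end = defs[i + 1].start() if i + 1 < len(defs) else len(source)
        body = source[m.end():body_end]
        graph[m.group(1)] = [c for c in CALL_RE.findall(body)
                             if c in known and c != m.group(1)]
    return graph

src = """
int Open(int id) {
    return Connect(id);
}
int Connect(int id) {
    return 0;
}
"""
print(call_graph(src, {"Open", "Connect"}))
```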

Step 04: Analyze Log Coverage per Branch


Description: Check each branch in call chains for required logging
Actions:
  • For each function in call chain:
    • Identify all branches:
      • Early returns
      • if/else branches
      • switch/case branches
      • try/catch blocks
      • loop exits with error conditions
    • Check each branch for LOGE or LOGI presence
    • Flag branches without required logging as Log Deficiency
    • Check error return paths specifically for LOGE
Deficiency Classification:
  • High Priority: Error return path without LOGE
  • Medium Priority: Success path without LOGI for multi-step operations
  • Low Priority: Branch without any logging (non-critical)
Output: List of log deficiencies with locations
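A simplified sketch of the error-path check (hypothetical Python; it only looks back to the nearest `{`, whereas a real analyzer would walk the AST):

```python
import re

ERROR_RETURN_RE = re.compile(r"return\s+(ERR_\w+|-\d+)")

def find_error_path_gaps(lines):
    """Flag error returns whose enclosing branch shows no LOGE call."""
    findings = []
    for i, line in enumerate(lines):
        if not ERROR_RETURN_RE.search(line):
            continue
        # Walk back to the enclosing '{'; any LOGE on the way covers this path.
        j, covered = i - 1, False
        while j >= 0 and "{" not in lines[j]:
            covered = covered or "LOGE" in lines[j]
            j -= 1
        covered = covered or (j >= 0 and "LOGE" in lines[j])
        if not covered:
            findings.append((i + 1, line.strip()))
    return findings

code = [
    "if (remote == nullptr) {",
    "    return ERR_NULL_OBJECT;",          # no LOGE → flagged
    "}",
    "if (fd < 0) {",
    '    HILOGE("open failed: %{public}d", fd);',
    "    return ERR_IO;",                    # covered
    "}",
]
print(find_error_path_gaps(code))
```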

Step 05: Detect High-Frequency Log Risks


Description: Identify functions with LOGE/LOGI that may be called frequently
Actions:
  • Identify high-frequency function patterns:
    • Network data callbacks (`OnDataReceived`, `OnPacketReceived`, `OnMessage`)
    • Timer callbacks (`OnTimer`, `Tick`, `Update`)
    • Event handlers (`HandleEvent`, `ProcessEvent`, `OnEvent`)
    • Loop operations (`ProcessItem`, `HandlePacket`, `SendData`)
    • Stream processing (`ProcessStream`, `HandleFrame`, `EncodeFrame`)
    • Message queue handlers (`OnMessage`, `Dispatch`)
  • For functions with LOGE/LOGI in these patterns:
    • Analyze call frequency potential
    • Classify risk level: High/Medium/Low
    • Flag as High-Frequency Risk
Risk Classification:
  • High Risk: Per-packet/per-frame LOGE/LOGI in data path
  • Medium Risk: Per-message LOGI in messaging path
  • Low Risk: Low-frequency callbacks with logs
Output: List of high-frequency risks with locations
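The name-based risk heuristic can be sketched as follows (hypothetical Python; the pattern lists mirror the function names above):

```python
import re

# Name heuristics mirroring the pattern lists above; data-path names rank highest.
RISK_PATTERNS = [
    ("High", re.compile(r"OnPacketReceived|OnDataReceived|ProcessStream|HandleFrame|EncodeFrame")),
    ("Medium", re.compile(r"OnMessage|Dispatch|HandleEvent|ProcessEvent|OnEvent")),
    ("Low", re.compile(r"OnTimer|Tick|Update")),
]

def classify_risk(func_name, levels):
    """Risk of keeping always-on LOGE/LOGI logging in a possibly hot function."""
    if not {"LOGE", "LOGI"} & set(levels):
        return None  # LOGD/LOGW are debug-gated, so no flooding risk
    for rank, pattern in RISK_PATTERNS:
        if pattern.search(func_name):
            return rank
    return None

print(classify_risk("OnPacketReceived", ["LOGI"]))  # per-packet data path
print(classify_risk("OnTimer", ["LOGD"]))           # debug-only level
```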

Step 06: Generate Analysis Report


Description: Create comprehensive analysis report with all findings
Actions:
  • Generate Markdown report with:
    1. Summary statistics
    2. Log deficiencies with fix suggestions
    3. High-frequency risks with fix suggestions
    4. Call chain analysis
    5. Detailed code examples
  • Use `Write` tool to save report to `log-coverage-report-YYYYMMDD-HHmmss.md`
Output: Complete analysis report

Language-Specific Patterns


C/C++


Function Definition Patterns:
```cpp
[return_type] [class::]function_name([parameters]) {
[\w\s:*,]*\{
```
Log Patterns:
  • `LOG[DEIW]\(`, `HILOG[DEIW]\(`, `ALOG[DEIW]\(`
  • `__android_log_print`
  • `OHOS::HiviewDFX::HiLog::[Error|Warn|Info|Debug]`
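A quick check (hypothetical Python) that the C/C++ log patterns above match a typical call site. Note that `LOG[DEIW]\(` also matches inside `HILOGE(...)`, so per-line matches should be de-duplicated when counting:

```python
import re

CPP_LOG_PATTERNS = [r"HILOG[DEIW]\(", r"ALOG[DEIW]\(", r"LOG[DEIW]\(", r"__android_log_print"]

def matching_patterns(line):
    return [p for p in CPP_LOG_PATTERNS if re.search(p, line)]

# Both HILOG[DEIW]\( and the substring pattern LOG[DEIW]\( fire on this line.
print(matching_patterns('HILOGE("remote is null");'))
```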

Java


Function Definition Patterns:
```java
(public|private|protected)?(\s+static)?\s+\w+\s+\w+\s*\(.*\)\s*(throws\s+[\w\s,]+)?\s*\{
```
Log Patterns:
  • `Log\.[deiw]\(`
  • `Logger\.(error|warn|info|debug)\(`
  • `Timber\.[deiw]\(`

Python


Function Definition Patterns:
```python
def\s+\w+\s*\(.*\)\s*->?\s*.*:
```
Log Patterns:
  • `logger\.(error|warning|info|debug)\(`
  • `logging\.(error|warning|info|debug)\(`
  • `print\(`

JavaScript/TypeScript


Function Definition Patterns:
```javascript
function\s+\w+\s*\(.*\)\s*\{
|\w+\s*\([^)]*\)\s*(=>|\{)
```
Log Patterns:
  • `console\.(error|warn|info|log)\(`
  • `logger\.(error|warn|info|debug)\(`

Fix Recommendations


For Log Deficiencies


Error Return Path (Must Fix):

```cpp
// BEFORE:
if (remote == nullptr) {
    return ERR_NULL_OBJECT;  // ❌ No LOGE
}

// AFTER:
if (remote == nullptr) {
    HILOGE("FunctionName: remote is null, context=%{public}d", context);
    return ERR_NULL_OBJECT;
}
```

Success Path (Should Fix):

```cpp
// BEFORE:
int32_t CreateSession(...) {
    // ... initialization code
    return ERR_OK;  // ❌ No LOGI on success
}

// AFTER:
int32_t CreateSession(...) {
    // ... initialization code
    HILOGI("CreateSession: success, sessionId=%{public}d, name=%{public}s", id, name);
    return ERR_OK;
}
```

For High-Frequency Risks


Remove High-Frequency LOGI:

```cpp
// BEFORE:
void OnPacketReceived(int socketId, const void* data, uint32_t len) {
    HILOGI("packet received: socket=%{public}d, len=%{public}u", socketId, len);  // ⚠️ Per-packet
    // ... process packet
}

// AFTER:
void OnPacketReceived(int socketId, const void* data, uint32_t len) {
    // Removed per-packet LOGI
    static std::atomic<uint64_t> packetCount{0};
    if (++packetCount % 1000 == 1) {
        HILOGI("Packet stats: count=%{public}llu, socket=%{public}d",
            packetCount.load(), socketId);
    }
    // ... process packet
}
```

Use Throttled LOGE for Errors:

```cpp
// BEFORE:
void ProcessPacket(const Packet* pkt) {
    if (pkt == nullptr) {
        HILOGE("packet is null");  // ⚠️ Per-packet error
        return;
    }
}

// AFTER:
void ProcessPacket(const Packet* pkt) {
    if (pkt == nullptr) {
        static std::atomic<uint32_t> errorCount{0};
        if (++errorCount % 100 == 1) {
            HILOGE("ProcessPacket: null packet, count=%{public}u", errorCount.load());
        }
        return;
    }
}
```
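The modulo throttle used in the fixes above (`++count % N == 1`) emits on calls 1, N+1, 2N+1, and so on; a quick simulation confirms the rate:

```python
def throttled_emissions(calls, every=1000):
    """Simulate the `++count % every == 1` throttle from the C++ fixes above."""
    count, emitted = 0, []
    for _ in range(calls):
        count += 1
        if count % every == 1:
            emitted.append(count)
    return emitted

# 3500 packets produce log lines only at packets 1, 1001, 2001, 3001.
print(throttled_emissions(3500))
```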

Output Format


================================================================================
Log Coverage Analysis Report
================================================================================

Repository: <repository_path>
Analysis Date: <timestamp>
Files Analyzed: <count>
================================================================================

Summary Statistics


| Metric | Count |
| --- | --- |
| Total Source Files | <number> |
| Total Functions | <number> |
| Functions Analyzed | <number> |
| Log Deficiencies | <number> |
| High-Frequency Risks | <number> |

Log Deficiencies


[High/Medium/Low] Priority


File: <file_path>
Function: <function_name>
Lines: <line_range>
Issue: <description>
Evidence:
```cpp
<code snippet>
```
Impact: <explain impact on debugging/troubleshooting>
Fix:
```cpp
<fixed code>
```


High-Frequency Risks


[High/Medium/Low] Risk


File: <file_path>
Function: <function_name>
Lines: <line_range>
Risk: <description>
Evidence:
```cpp
<code snippet>
```
Analysis: <explain why this is high-frequency>
Fix:
```cpp
<fixed code with throttling/statistics>
```


Call Chain Analysis


Call Chain: <chain_name>


<function_1>()
    ↓ [✓/✗/⚠️] <log_status>
<function_2>()
    ↓ [✓/✗/⚠️] <log_status>
<function_3>()
    ├─ [✓/✗/⚠️] branch_1
    ├─ [✓/✗/⚠️] branch_2
    └─ [✓/✗/⚠️] branch_3
Legend:
  • ✓ : Has required LOGE/LOGI
  • ✗ : Missing required logging
  • ⚠️ : Has high-frequency log risk


Recommendations


Immediate Actions (P0)


<list of critical items>

Follow-up Actions (P1)


<list of important items>

Long-term Improvements (P2)


<list of improvement suggestions>
================================================================================

Parameters


| Parameter | Required | Description |
| --- | --- | --- |
| `--path` | No | Repository path (default: current directory) |
| `--exclude-tests` | No | Exclude test files (default: true) |
| `--lang` | No | Language filter (cpp, java, python, js, all) |
| `--output` | No | Output report path |

Usage Examples



Analyze current directory


/log-coverage-analyzer

Analyze specific repository


/log-coverage-analyzer --path /path/to/repo

Analyze C++ code only


/log-coverage-analyzer --lang cpp --path /path/to/repo

Include test files in analysis


/log-coverage-analyzer --exclude-tests false

Custom output location


/log-coverage-analyzer --output /path/to/report.md

Tips


  • Use `Grep` with `output_mode: content` and `-B`/`-C` flags for context
  • Use `Read` tool with `offset` and `limit` for large files
  • For large repositories, focus on critical directories first
  • High-frequency risks should be prioritized over minor log deficiencies
  • Always include context (IDs, states, parameters) in LOGE messages