performance-profile
Performance Profiling & Bottleneck Detection
I'll profile your application to identify performance bottlenecks and provide optimization recommendations.
Supported Environments:
- Node.js profiling (--inspect, clinic.js)
- Browser performance (Chrome DevTools)
- Python profiling (cProfile, line_profiler)
- Bottleneck identification
- Memory leak detection
- Optimization recommendations
Arguments: $ARGUMENTS - optional: node|python|browser, or a specific file/route to profile

Token Optimization
This skill uses efficient patterns to minimize token consumption during performance profiling setup and analysis.
Optimization Strategies
1. Runtime Detection Caching (Saves 700 tokens per invocation)
Cache runtime environment and tool availability to avoid repeated detection:
```bash
CACHE_FILE=".claude/cache/performance-profile/runtime.json"
CACHE_TTL=86400  # 24 hours (tools don't change frequently)

mkdir -p .claude/cache/performance-profile

if [ -f "$CACHE_FILE" ]; then
  CACHE_AGE=$(($(date +%s) - $(stat -c %Y "$CACHE_FILE" 2>/dev/null || stat -f %m "$CACHE_FILE" 2>/dev/null)))
  if [ $CACHE_AGE -lt $CACHE_TTL ]; then
    # Use cached runtime info
    RUNTIME=$(jq -r '.runtime' "$CACHE_FILE")
    NODE_VERSION=$(jq -r '.node_version' "$CACHE_FILE")
    PYTHON_VERSION=$(jq -r '.python_version' "$CACHE_FILE")
    TOOLS_INSTALLED=$(jq -r '.tools_installed' "$CACHE_FILE")
    echo "Using cached runtime: $RUNTIME"
    echo "Tools available: $TOOLS_INSTALLED"
    SKIP_DETECTION="true"
  fi
fi

# First run: detect and cache
if [ "$SKIP_DETECTION" != "true" ]; then
  detect_runtime  # Expensive operation
  detect_tools    # Expensive operation

  # Cache results
  jq -n \
    --arg runtime "$RUNTIME" \
    --arg node "$NODE_VERSION" \
    --arg python "$PYTHON_VERSION" \
    --arg tools "$TOOLS_INSTALLED" \
    '{runtime: $runtime, node_version: $node, python_version: $python, tools_installed: $tools}' \
    > "$CACHE_FILE"
fi
```

**Savings:** 700 tokens (no repeated package.json reads, no command checks, no file system scans)
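For reference, the cache file written by the jq command above ends up shaped like this (values are illustrative):

```json
{
  "runtime": "nodejs",
  "node_version": "v20.11.0",
  "python_version": "",
  "tools_installed": "clinic,autocannon"
}
```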
2. Early Exit for Configured Environments (Saves 85%)
If profiling is already set up, provide a quick summary and exit:

```bash
if [ -f ".claude/profiling/profile-node.sh" ] && [ -f ".claude/profiling/profile-$TIMESTAMP.md" ]; then
  if [ "$RECONFIGURE" != "true" ]; then
    echo "Profiling already configured"
    echo ""
    echo "Available profiling scripts:"
    ls .claude/profiling/profile-*.sh 2>/dev/null
    echo ""
    echo "Last report: $(ls -t .claude/profiling/profile-*.md | head -1)"
    echo ""
    echo "Use --reconfigure to regenerate setup"
    exit 0
  fi
fi
```

**Savings:** 85% reduction when setup exists (4,000 → 600 tokens)
3. Script Generation on Demand (Saves 70%)
Only generate profiling scripts for the detected runtime:

```bash
# Default: only create scripts for the detected runtime
case "$RUNTIME" in
  nodejs)
    create_node_profiling_script   # 800 tokens
    # Skip Python and browser scripts
    ;;
  python)
    create_python_profiling_script # 700 tokens
    # Skip Node and browser scripts
    ;;
  browser)
    create_browser_profiling_guide # 600 tokens
    # Skip Node and Python scripts
    ;;
esac

# Only create all scripts if explicitly requested
if [ "$GENERATE_ALL" = "true" ]; then
  create_node_profiling_script
  create_python_profiling_script
  create_browser_profiling_guide
fi
```

**Savings:** 70% (generate 1 script vs 3 scripts)

4. Sample-Based Anti-Pattern Detection (Saves 80%)
Show only the first few issues, not an exhaustive list:

```bash
# Efficient anti-pattern detection with a head limit
echo "Scanning for performance anti-patterns..."

# Keep only the first 10 synchronous blocking operations
SYNC_BLOCKING=$(grep -r "readFileSync\|writeFileSync\|execSync" \
  --include="*.js" --include="*.ts" \
  --exclude-dir=node_modules --exclude-dir=dist \
  . 2>/dev/null | head -10)

if [ -n "$SYNC_BLOCKING" ]; then
  COUNT=$(echo "$SYNC_BLOCKING" | wc -l)
  echo "⚠️ Found synchronous blocking operations (showing first 5):"
  echo "$SYNC_BLOCKING" | head -5 | while read line; do
    echo "  - $line"
  done
  if [ $COUNT -gt 5 ]; then
    echo "  ... and $((COUNT - 5)) more"
  fi
fi
```

**Savings:** 80% (show samples vs exhaustive 500+ line output)

5. Grep-Based Pattern Detection (Saves 90%)
Use Grep for anti-pattern detection instead of reading all files:
```bash
# Efficient: grep returns only matching lines
detect_performance_issues() {
  local issue_count=0

  # Check for console.log (only in src directories)
  # Note: grep's --include takes a single glob, so repeat it per extension
  if grep -q -r "console\.log" --include="*.js" --include="*.ts" --include="*.jsx" --include="*.tsx" src/ 2>/dev/null; then
    issue_count=$((issue_count + 1))
    echo "⚠️ Issue $issue_count: console.log statements in source code"
  fi

  # Check for sync operations
  if grep -q -r "readFileSync\|writeFileSync" --include="*.js" --include="*.ts" src/ 2>/dev/null; then
    issue_count=$((issue_count + 1))
    echo "⚠️ Issue $issue_count: Synchronous file operations detected"
  fi

  # Show a count, not full lists (efficient summary)
  echo ""
  echo "Total issues detected: $issue_count"
}
```

**Savings:** 90% vs reading all files (grep returns a boolean or a count)

6. Bash-Based Tool Execution (Saves 70%)
All profiling operations use pure bash (no Task agent, no Read/Edit tools):
```bash
# All operations in bash
case "$PROFILE_TYPE" in
  node)
    # Direct bash execution
    clinic doctor -- node index.js
    ;;
  python)
    # Direct bash execution
    python -m cProfile -o profile.stats main.py
    ;;
  browser)
    # Direct bash execution
    lighthouse http://localhost:3000 --output html
    ;;
esac
```

**Savings:** 70% vs using Task agents for profiling operations

7. Minimal Report Generation (Saves 60%)
Generate a summary report by default, and a full report on demand:

```bash
REPORT_LEVEL="${REPORT_LEVEL:-summary}"

case "$REPORT_LEVEL" in
  summary)
    # Minimal report (400 tokens)
    cat > "$REPORT" << EOF
Performance Profile Summary

Runtime: $RUNTIME
Tools: $TOOLS_INSTALLED

Quick Start

Run profiling:
  $PROFILE_DIR/profile-$RUNTIME.sh

See full report: cat $REPORT --full
EOF
    ;;
  full)
    # Comprehensive report (2,000 tokens)
    generate_full_performance_report
    ;;
esac
```

**Savings:** 60% for default runs (summary 400 tokens vs full 2,000 tokens)

8. Tool Installation Guidance Only (Saves 95%)
Don't install tools automatically; provide installation commands instead:

```bash
check_tool_availability() {
  local tool="$1"
  if ! command -v "$tool" >/dev/null 2>&1; then
    echo "⚠️ $tool not installed"
    echo "  Install: $(get_install_command $tool)"
    TOOLS_MISSING="true"
  else
    echo "✓ $tool available"
  fi
}

# No actual npm install, just guidance
if [ "$TOOLS_MISSING" = "true" ]; then
  echo ""
  echo "Install missing tools to proceed with profiling"
  exit 0
fi
```

**Savings:** 95% (no npm install execution, which can take 1,000+ tokens)

Cache Invalidation
Caches are invalidated when:
- Runtime environment changes (different package.json, new tool installs)
- 24 hours have elapsed (time-based check for tool availability)
- User passes the --clear-cache flag
- A major version update is detected (Node.js 18 → 20)
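The major-version rule can be sketched in a few lines of bash, assuming the cache layout shown earlier (the sed extraction and the hard-coded "current" version are for illustration only):

```shell
CACHE_FILE="/tmp/perf-profile-runtime.json"

# Seed a cache entry as it would have been written at detection time
printf '{"runtime":"nodejs","node_version":"v18.19.0"}\n' > "$CACHE_FILE"

# Extract the cached major version (sed keeps the sketch jq-free)
CACHED_MAJOR=$(sed -n 's/.*"node_version":"v\([0-9]*\)\..*/\1/p' "$CACHE_FILE")

# Pretend the runtime was upgraded to Node.js 20
CURRENT_MAJOR=20

if [ -n "$CACHED_MAJOR" ] && [ "$CACHED_MAJOR" != "$CURRENT_MAJOR" ]; then
  rm -f "$CACHE_FILE"
  echo "cache invalidated: Node.js $CACHED_MAJOR -> $CURRENT_MAJOR"
fi
```

The same shape works for the --clear-cache flag: unconditionally `rm -f "$CACHE_FILE"` before the age check runs.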
Real-World Token Usage
Typical profiling workflow:

- First-time setup: 1,500-2,500 tokens
  - Runtime detection: 400 tokens
  - Tool checking: 300 tokens
  - Script generation (1 runtime): 800 tokens
  - Summary report: 400 tokens
- Cached environment: 400-800 tokens
  - Cached runtime: 100 tokens (85% savings)
  - Skip tool checks: 0 tokens
  - Skip script regeneration: 0 tokens
  - Quick summary: 300 tokens
- Profiling execution: 200-600 tokens
  - Execute profiling script: 200 tokens (bash operation)
  - Parse results summary: 400 tokens (with --results flag)
- Anti-pattern scan: 300-800 tokens
  - Grep-based detection: 200 tokens
  - Sample-based reporting: 100-600 tokens (show 5-10 examples)
- Full report generation: 1,500-2,000 tokens
  - Only when explicitly requested with --full flag

Average usage distribution:
- 70% of runs: Cached environment + quick summary (400-800 tokens) ✅ Most common
- 20% of runs: First-time setup (1,500-2,500 tokens)
- 10% of runs: Full analysis + report (2,000-3,000 tokens)

Expected token range: 400-2,500 tokens (60% reduction from 1,000-6,000 baseline)
Progressive Disclosure
Three levels of detail:

1. Default (summary): quick setup and guidance

```bash
claude "/performance-profile"
# Shows: runtime detected, tools available, profiling commands
# Tokens: 400-800
```

2. Verbose (detailed): anti-pattern scan + profiling scripts

```bash
claude "/performance-profile --verbose"
# Shows: full anti-pattern detection, all profiling options
# Tokens: 1,200-1,800
```

3. Full (exhaustive): complete analysis + all runtime scripts

```bash
claude "/performance-profile --full"
# Shows: detailed report, all runtimes, complete recommendations
# Tokens: 2,000-3,000
```
Implementation Notes
Key patterns applied:
- ✅ Runtime detection caching (700 token savings)
- ✅ Early exit for configured environments (85% reduction)
- ✅ Script generation on demand (70% savings)
- ✅ Sample-based anti-pattern detection (80% savings)
- ✅ Grep-based pattern detection (90% savings)
- ✅ Bash-based tool execution (70% savings)
- ✅ Minimal report generation (60% savings)
- ✅ Tool guidance only, no auto-install (95% savings)
Cache locations:
- .claude/cache/performance-profile/runtime.json - Runtime environment and tools
- .claude/cache/performance-profile/anti-patterns.txt - Recent scan results (1 hour TTL)
Flags:
- --reconfigure - Force regeneration of profiling scripts
- --verbose - Show anti-pattern scan + detailed options
- --full - Generate comprehensive report for all runtimes
- --all-runtimes - Create scripts for Node.js + Python + Browser
- --clear-cache - Force cache invalidation
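These flags fold into a small parser; a minimal sketch, reusing the RECONFIGURE, REPORT_LEVEL, and GENERATE_ALL variables from the earlier snippets (the `set --` line stands in for real command-line arguments):

```shell
# Demo: pretend the user invoked the skill with these flags
set -- --full --reconfigure

# Defaults match the behavior described above
RECONFIGURE="false"
REPORT_LEVEL="summary"
GENERATE_ALL="false"

for arg in "$@"; do
  case "$arg" in
    --reconfigure)  RECONFIGURE="true" ;;
    --verbose)      REPORT_LEVEL="verbose" ;;
    --full)         REPORT_LEVEL="full" ;;
    --all-runtimes) GENERATE_ALL="true" ;;
    --clear-cache)  rm -rf .claude/cache/performance-profile ;;
  esac
done

echo "report=$REPORT_LEVEL reconfigure=$RECONFIGURE all=$GENERATE_ALL"
```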
Environment-specific optimizations:
- Node.js: Focus on async patterns, readFileSync detection, package.json analysis
- Python: Focus on list comprehensions, database queries, cProfile setup
- Browser: Focus on bundle size, console.log removal, image optimization
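As a concrete instance of the Node.js focus, the grep approach from strategy 5 can count synchronous fs calls; this sketch runs against a throwaway demo directory (paths and file contents are illustrative):

```shell
# Node.js focus: count synchronous fs calls outside node_modules
node_focus_scan() {
  local dir="${1:-.}"
  grep -rn "readFileSync\|writeFileSync" \
    --include="*.js" --include="*.ts" \
    --exclude-dir=node_modules "$dir" 2>/dev/null | wc -l
}

# Demo on a throwaway file containing one readFileSync call
mkdir -p /tmp/node-focus-demo
printf 'const fs = require("fs");\nconst d = fs.readFileSync("x");\n' > /tmp/node-focus-demo/app.js

node_focus_scan /tmp/node-focus-demo  # prints the match count
```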
Extended Thinking for Performance Analysis
<think>
Performance profiling requires understanding:
- CPU-bound vs I/O-bound operations
- Synchronous vs asynchronous bottlenecks
- Memory allocation patterns
- Database query performance
- Network latency impacts
- Bundle size and loading performance
Complex scenarios:
- Multi-threaded applications
- Distributed systems
- Microservice latency
- Real-time applications
- Large dataset processing
- Memory leaks over time
</think>
Phase 1: Environment Detection
I'll detect the runtime environment and available profiling tools:

```bash
#!/bin/bash
# Performance Profiling - Environment Detection

echo "=== Performance Profiling Setup ==="
echo ""

# Create profiling directory
mkdir -p .claude/profiling
PROFILE_DIR=".claude/profiling"
TIMESTAMP=$(date +%Y%m%d-%H%M%S)
REPORT="$PROFILE_DIR/profile-$TIMESTAMP.md"

detect_runtime() {
  local runtime=""

  # Check for Node.js
  if [ -f "package.json" ]; then
    if command -v node >/dev/null 2>&1; then
      runtime="nodejs"
      NODE_VERSION=$(node --version)
      echo "✓ Node.js detected: $NODE_VERSION" >&2
    fi
  fi

  # Check for Python
  if [ -f "requirements.txt" ] || [ -f "setup.py" ] || [ -f "pyproject.toml" ]; then
    if command -v python >/dev/null 2>&1; then
      runtime="python"
      PYTHON_VERSION=$(python --version)
      echo "✓ Python detected: $PYTHON_VERSION" >&2
    elif command -v python3 >/dev/null 2>&1; then
      runtime="python"
      PYTHON_VERSION=$(python3 --version)
      echo "✓ Python detected: $PYTHON_VERSION" >&2
    fi
  fi

  # Check for a browser-based app
  if [ -f "package.json" ]; then
    if grep -q "\"react\"\|\"vue\"\|\"angular\"\|\"svelte\"" package.json; then
      if [ -z "$runtime" ]; then
        runtime="browser"
      else
        runtime="$runtime,browser"
      fi
      echo "✓ Frontend framework detected - browser profiling available" >&2
    fi
  fi

  # Status messages go to stderr above so only the runtime id is captured
  echo "$runtime"
}

RUNTIME=$(detect_runtime)

if [ -z "$RUNTIME" ]; then
  echo "❌ Unable to detect runtime environment"
  echo ""
  echo "Supported environments:"
  echo "  - Node.js applications"
  echo "  - Python applications"
  echo "  - Browser-based applications (React, Vue, Angular, Svelte)"
  exit 1
fi

echo ""
echo "Environment: $RUNTIME"
```

Phase 2: Install Profiling Tools
I'll check and install necessary profiling tools:
```bash
echo ""
echo "=== Installing Profiling Tools ==="

install_profiling_tools() {
  case "$RUNTIME" in
    *nodejs*)
      echo "Setting up Node.js profiling tools..."

      # Check for clinic.js
      if ! npm list -g clinic >/dev/null 2>&1; then
        echo "Installing clinic.js (performance profiling suite)..."
        echo "  npm install -g clinic"
        echo ""
        echo "Clinic.js tools:"
        echo "  - clinic doctor       - Diagnose performance issues"
        echo "  - clinic flame        - CPU flame graphs"
        echo "  - clinic bubbleprof   - Async operations"
        echo "  - clinic heapprofiler - Memory profiling"
      else
        echo "✓ clinic.js already installed"
      fi

      # Check for autocannon (benchmarking)
      if ! npm list -g autocannon >/dev/null 2>&1; then
        echo ""
        echo "Installing autocannon (HTTP benchmarking)..."
        echo "  npm install -g autocannon"
      else
        echo "✓ autocannon already installed"
      fi

      echo ""
      echo "Node.js profiling methods available:"
      echo "  1. Built-in --inspect flag"
      echo "  2. clinic.js suite"
      echo "  3. Chrome DevTools"
      ;;
    *python*)
      echo "Setting up Python profiling tools..."

      # Check for profiling packages
      if ! python -c "import cProfile" 2>/dev/null; then
        echo "cProfile not available (should be in stdlib)"
      else
        echo "✓ cProfile available (built-in)"
      fi

      # Check for line_profiler
      if ! python -c "import line_profiler" 2>/dev/null; then
        echo ""
        echo "Install line_profiler for line-by-line profiling:"
        echo "  pip install line_profiler"
      else
        echo "✓ line_profiler available"
      fi

      # Check for memory_profiler
      if ! python -c "import memory_profiler" 2>/dev/null; then
        echo ""
        echo "Install memory_profiler for memory analysis:"
        echo "  pip install memory_profiler"
      else
        echo "✓ memory_profiler available"
      fi

      echo ""
      echo "Python profiling methods available:"
      echo "  1. cProfile (CPU profiling)"
      echo "  2. line_profiler (line-by-line)"
      echo "  3. memory_profiler (memory usage)"
      ;;
    *browser*)
      echo "Browser profiling setup..."
      echo ""
      echo "Browser profiling tools:"
      echo "  1. Chrome DevTools Performance tab"
      echo "  2. Lighthouse CI"
      echo "  3. webpack-bundle-analyzer"
      echo "  4. React DevTools Profiler"

      # Check for bundle analyzer
      if [ -f "package.json" ]; then
        if ! grep -q "webpack-bundle-analyzer" package.json; then
          echo ""
          echo "Install webpack-bundle-analyzer:"
          echo "  npm install --save-dev webpack-bundle-analyzer"
        fi
      fi
      ;;
  esac
}

install_profiling_tools
```

Phase 3: Node.js Profiling
For Node.js applications, I'll set up and run profiling:
```bash
if [[ "$RUNTIME" == *"nodejs"* ]]; then
  echo ""
  echo "=== Node.js Performance Profiling ==="

  # Create profiling scripts
  cat > "$PROFILE_DIR/profile-node.sh" << 'NODEPROFILE'
#!/bin/bash
# Node.js Performance Profiling

echo "Node.js Profiling Options:"
echo ""
echo "1. CPU Profiling (clinic doctor)"
echo "2. Flame Graph (clinic flame)"
echo "3. Async Operations (clinic bubbleprof)"
echo "4. Memory Profiling (clinic heapprofiler)"
echo "5. Built-in V8 Profiler (--prof)"
echo "6. HTTP Load Testing (autocannon)"
echo ""

# Option 1: Clinic Doctor (comprehensive diagnosis)
clinic_doctor() {
  echo "Running clinic doctor..."
  echo "This will start your app and collect performance data"
  echo ""
  # Find entry point
  ENTRY=$(node -p "require('./package.json').main || 'index.js'")
  clinic doctor -- node "$ENTRY"
  echo ""
  echo "✓ Profile complete! Opening report in browser..."
}

# Option 2: Flame Graph
clinic_flame() {
  echo "Running clinic flame..."
  echo "Generating CPU flame graph"
  echo ""
  ENTRY=$(node -p "require('./package.json').main || 'index.js'")
  clinic flame -- node "$ENTRY"
  echo ""
  echo "✓ Flame graph generated!"
}

# Option 3: Async Operations
clinic_bubble() {
  echo "Running clinic bubbleprof..."
  echo "Analyzing async operations"
  echo ""
  ENTRY=$(node -p "require('./package.json').main || 'index.js'")
  clinic bubbleprof -- node "$ENTRY"
  echo ""
  echo "✓ Async analysis complete!"
}

# Option 5: V8 Profiler
v8_profiler() {
  echo "Running V8 profiler..."
  echo ""
  ENTRY=$(node -p "require('./package.json').main || 'index.js'")
  node --prof "$ENTRY"
  # Process the profile
  PROFILE=$(ls isolate-*.log | head -1)
  if [ -n "$PROFILE" ]; then
    node --prof-process "$PROFILE" > processed-profile.txt
    echo "✓ Profile processed: processed-profile.txt"
  fi
}

# Option 6: HTTP Load Test
http_load_test() {
  echo "HTTP Load Testing with autocannon"
  echo ""
  read -p "Enter URL to test (e.g., http://localhost:3000): " URL
  read -p "Duration in seconds (default: 10): " DURATION
  DURATION=${DURATION:-10}
  echo ""
  echo "Running load test..."
  autocannon -d $DURATION "$URL"
}

# Main menu
case "${1:-1}" in
  1) clinic_doctor ;;
  2) clinic_flame ;;
  3) clinic_bubble ;;
  4) clinic heapprofiler -- node $(node -p "require('./package.json').main || 'index.js'") ;;
  5) v8_profiler ;;
  6) http_load_test ;;
  *) echo "Invalid option" ;;
esac
NODEPROFILE

  chmod +x "$PROFILE_DIR/profile-node.sh"
  echo "✓ Created Node.js profiling script: $PROFILE_DIR/profile-node.sh"

  # Identify potential bottlenecks in code
  echo ""
  echo "Scanning for potential performance issues..."

  # Check for synchronous blocking operations
  SYNC_BLOCKING=$(grep -r "readFileSync\|writeFileSync\|execSync" \
    --include="*.js" --include="*.ts" \
    --exclude-dir=node_modules \
    --exclude-dir=dist \
    . 2>/dev/null | wc -l)

  if [ "$SYNC_BLOCKING" -gt 0 ]; then
    echo "⚠️ Found $SYNC_BLOCKING synchronous blocking operations"
    echo "   Consider using async versions (readFile, writeFile, exec)"
  fi

  # Check for missing await
  MISSING_AWAIT=$(grep -r "^\s*async.*function" \
    --include="*.js" --include="*.ts" \
    --exclude-dir=node_modules \
    . 2>/dev/null | wc -l)

  echo "💡 Async functions found: $MISSING_AWAIT"
  echo "   Verify all async calls use await or proper promise handling"

  # Check for large dependencies
  echo ""
  echo "Analyzing bundle size..."
  if [ -f "package.json" ]; then
    DEPS_COUNT=$(jq '.dependencies | length' package.json 2>/dev/null || echo "N/A")
    echo "  Dependencies: $DEPS_COUNT"
    # Suggest bundle analyzer
    echo ""
    echo "To analyze bundle size:"
    echo "  npm install --save-dev webpack-bundle-analyzer"
    echo "  # Add to webpack config and run build"
  fi
fi
```

Phase 4: Python Profiling
For Python applications, I'll set up profiling:
bash
if [[ "$RUNTIME" == *"python"* ]]; then
echo ""
echo "=== Python Performance Profiling ==="
cat > "$PROFILE_DIR/profile-python.sh" << 'PYTHONPROFILE'
#!/bin/bash
# Python Performance Profiling
echo "Python Profiling Options:"
echo ""
echo "1. cProfile (CPU profiling)"
echo "2. line_profiler (line-by-line)"
echo "3. memory_profiler (memory usage)"
echo ""
# Option 1: cProfile
cpu_profile() {
echo "Running cProfile..."
echo ""
# Find the main Python file
if [ -f "main.py" ]; then
ENTRY="main.py"
elif [ -f "app.py" ]; then
ENTRY="app.py"
else
read -p "Enter Python file to profile: " ENTRY
fi
python -m cProfile -o profile.stats "$ENTRY"
# Analyze the results
python << 'ANALYZE'
import pstats
from pstats import SortKey
p = pstats.Stats('profile.stats')
print("\n=== Top 20 Functions by Cumulative Time ===")
p.sort_stats(SortKey.CUMULATIVE).print_stats(20)
print("\n=== Top 20 Functions by Total Time ===")
p.sort_stats(SortKey.TIME).print_stats(20)
ANALYZE
echo ""
echo "✓ Profile saved to: profile.stats"
}
# Option 2: line_profiler
line_profile() {
echo "Using line_profiler..."
echo ""
echo "Add @profile decorator to functions you want to profile"
echo "Example:"
echo " @profile"
echo " def slow_function():"
echo " pass"
echo ""
read -p "Enter Python file with @profile decorators: " ENTRY
kernprof -l -v "$ENTRY"
}
# Option 3: memory_profiler
memory_profile() {
echo "Using memory_profiler..."
echo ""
echo "Add @profile decorator to functions you want to profile"
echo ""
read -p "Enter Python file with @profile decorators: " ENTRY
python -m memory_profiler "$ENTRY"
}
case "${1:-1}" in
1) cpu_profile ;;
2) line_profile ;;
3) memory_profile ;;
*) echo "Invalid option" ;;
esac
PYTHONPROFILE
chmod +x "$PROFILE_DIR/profile-python.sh"
echo "✓ Created Python profiling script: $PROFILE_DIR/profile-python.sh"
# Identify potential bottlenecks
echo ""
echo "Scanning for potential performance issues..."
# Check for list comprehensions that could be generators
LIST_COMP=$(grep -r "\[.*for .* in .*\]" \
--include="*.py" \
--exclude-dir=venv \
--exclude-dir=env \
. 2>/dev/null | wc -l)
echo "💡 List comprehensions found: $LIST_COMP"
echo " Consider using generators for large datasets"
# Check for database queries without indexing hints
DB_QUERIES=$(grep -r "SELECT.*FROM\|\.filter(\|\.get(" \
--include="*.py" \
. 2>/dev/null | wc -l)
if [ "$DB_QUERIES" -gt 0 ]; then
echo "💡 Database queries found: $DB_QUERIES"
echo " Ensure proper indexing and use select_related/prefetch_related"
fi
fi
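As a quick end-to-end check of the cProfile flow the generated script wraps, you can profile a throwaway file and read the stats back; a minimal sketch (all /tmp paths and the demo function are made up for illustration):

```shell
# Profile a tiny synthetic script and list the top functions (illustrative paths)
cat > /tmp/demo_profile.py <<'PY'
def hot():
    return sum(i * i for i in range(100000))

print(hot())
PY
# Write stats to a file, same as the generated script does
python3 -m cProfile -o /tmp/profile.stats /tmp/demo_profile.py > /dev/null
# Read the stats back with pstats, sorted by cumulative time
python3 - <<'PY'
import pstats
from pstats import SortKey
p = pstats.Stats('/tmp/profile.stats')
p.sort_stats(SortKey.CUMULATIVE).print_stats(5)
PY
```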
Phase 5: Browser Profiling
For frontend applications, I'll set up browser profiling:
bash
if [[ "$RUNTIME" == *"browser"* ]]; then
echo ""
echo "=== Browser Performance Profiling ==="
cat > "$PROFILE_DIR/profile-browser.md" << 'BROWSERPROFILE'
Browser Performance Profiling Guide
Chrome DevTools Performance Profiling
—
1. Open Chrome DevTools
   - F12 or Right-click → Inspect
   - Go to "Performance" tab
2. Record Performance
   - Click Record button (circle)
   - Interact with your app (navigate, click, scroll)
   - Click Stop
3. Analyze Results
   - Look for long tasks (> 50ms)
   - Identify JavaScript execution time
   - Check layout/reflow operations
   - Analyze network waterfall
4. Key Metrics
   - FCP (First Contentful Paint) - < 1.8s
   - LCP (Largest Contentful Paint) - < 2.5s
   - TBT (Total Blocking Time) - < 200ms
   - CLS (Cumulative Layout Shift) - < 0.1
Lighthouse Audit
—
Install Lighthouse CLI
bash
npm install -g lighthouse
Run audit
lighthouse http://localhost:3000 --view
Or programmatically
npx lighthouse http://localhost:3000 --output html --output-path ./lighthouse-report.html
Bundle Analysis
—
Webpack Bundle Analyzer
bash
npm install --save-dev webpack-bundle-analyzer
Add to webpack.config.js
const BundleAnalyzerPlugin = require('webpack-bundle-analyzer').BundleAnalyzerPlugin;
module.exports = {
  plugins: [
    new BundleAnalyzerPlugin()
  ]
}
Run build
npm run build
Vite Bundle Analysis
bash
npm install --save-dev rollup-plugin-visualizer
Add to vite.config.js
import { visualizer } from 'rollup-plugin-visualizer';
export default {
  plugins: [
    visualizer({ open: true })
  ]
}
Run build
npm run build
React Profiler
jsx
import { Profiler } from 'react';
function onRenderCallback(
  id, phase, actualDuration, baseDuration, startTime, commitTime
) {
  console.log(`${id} took ${actualDuration}ms to render`);
}
<Profiler id="MyComponent" onRender={onRenderCallback}>
  <MyComponent />
</Profiler>
Common Performance Issues
1. Unnecessary Re-renders
—
- Use React.memo() for expensive components
- Implement shouldComponentUpdate or use useMemo
- Avoid inline function definitions in JSX
2. Large Bundle Size
- Code splitting with React.lazy()
- Tree shaking unused code
- Analyze and remove large dependencies
3. Unoptimized Images
- Use WebP format
- Implement lazy loading
- Add proper width/height attributes
4. Too Many Network Requests
- Bundle similar resources
- Use HTTP/2 multiplexing
- Implement resource hints (preload, prefetch)
5. Blocking JavaScript
- Defer non-critical scripts
- Use async attribute
- Move scripts to bottom of body
Optimization Checklist
—
- Bundle size < 200KB (gzipped)
- First load < 3 seconds
- Images optimized and lazy-loaded
- Code split by route
- Critical CSS inlined
- Font loading optimized
- Service worker for caching
- Compression enabled (gzip/brotli)
BROWSERPROFILE
echo "✓ Created browser profiling guide: $PROFILE_DIR/profile-browser.md"
# Check for performance anti-patterns
echo ""
echo "Scanning for performance anti-patterns..."
# Check for console.log in production
CONSOLE_LOGS=$(grep -r "console.log\|console.debug" \
  --include="*.js" --include="*.jsx" --include="*.ts" --include="*.tsx" \
  --exclude-dir=node_modules \
  . 2>/dev/null | wc -l)
if [ "$CONSOLE_LOGS" -gt 10 ]; then
  echo "⚠️ Found $CONSOLE_LOGS console.log statements"
  echo "   Remove or use conditional logging for production"
fi
# Check for large images
if [ -d "public" ] || [ -d "static" ] || [ -d "assets" ]; then
  echo ""
  echo "Checking for large images..."
  find public static assets -type f \( -name "*.jpg" -o -name "*.png" -o -name "*.jpeg" \) -size +500k 2>/dev/null | while read img; do
    SIZE=$(du -h "$img" | cut -f1)
    echo "  ⚠️ Large image: $img ($SIZE)"
  done
fi
fi
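The `-size +500k` scan above can be exercised against synthetic files before pointing it at a real asset directory; a small sketch (the /tmp directory and file names are made up):

```shell
# Exercise the +500k image scan with synthetic files (illustrative /tmp paths)
mkdir -p /tmp/assets-demo
head -c 600000 /dev/zero > /tmp/assets-demo/hero.png   # ~586k, should match
head -c 10000  /dev/zero > /tmp/assets-demo/icon.png   # ~10k, should not match
find /tmp/assets-demo -type f -name "*.png" -size +500k | while read -r img; do
  SIZE=$(du -h "$img" | cut -f1)
  echo "  ⚠️ Large image: $img ($SIZE)"
done
```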
Phase 6: Generate Performance Report
I'll create a comprehensive performance analysis report:
bash
echo ""
echo "=== Generating Performance Report ==="
cat > "$REPORT" << EOF
Performance Profile Report
Generated: $(date)
Runtime: $RUNTIME
Project: $(basename $(pwd))
Profiling Setup
Environment
—
- Runtime: $RUNTIME
- Node Version: ${NODE_VERSION:-N/A}
- Python Version: ${PYTHON_VERSION:-N/A}
Tools Available
EOF
# Add tool-specific sections
if [[ "$RUNTIME" == "nodejs" ]]; then
cat >> "$REPORT" << 'EOF'
Node.js Profiling Tools
—
- clinic.js suite (doctor, flame, bubbleprof, heapprofiler)
- V8 built-in profiler (--prof)
- Chrome DevTools inspector
- autocannon (HTTP benchmarking)
Run profiling:
bash
.claude/profiling/profile-node.sh 1  # Clinic doctor
.claude/profiling/profile-node.sh 2  # Flame graph
.claude/profiling/profile-node.sh 6  # Load test
EOF
fi
if [[ "$RUNTIME" == "python" ]]; then
cat >> "$REPORT" << 'EOF'
Python Profiling Tools
—
- cProfile (CPU profiling)
- line_profiler (line-by-line analysis)
- memory_profiler (memory usage)
Run profiling:
bash
.claude/profiling/profile-python.sh 1  # cProfile
.claude/profiling/profile-python.sh 2  # line_profiler
EOF
fi
if [[ "$RUNTIME" == "browser" ]]; then
cat >> "$REPORT" << 'EOF'
Browser Profiling Tools
- Chrome DevTools Performance
- Lighthouse CI
- webpack-bundle-analyzer
- React DevTools Profiler
See guide:
bash
cat .claude/profiling/profile-browser.md
EOF
fi
cat >> "$REPORT" << 'EOF'
Quick Wins
Immediate Optimizations
—
1. Enable Compression
   - gzip/brotli compression
   - Reduce bundle size by 60-80%
2. Code Splitting
   - Split by route
   - Lazy load heavy components
   - Reduce initial load time
3. Optimize Images
   - Use WebP format
   - Lazy loading
   - Proper sizing
4. Remove Unnecessary Dependencies
   - Audit package.json
   - Replace heavy libraries with lighter alternatives
   - Tree shake unused code
5. Database Query Optimization
   - Add proper indexes
   - Use connection pooling
   - Implement caching
Performance Monitoring
Continuous Monitoring
—
- Set up Lighthouse CI in your pipeline
- Monitor Core Web Vitals
- Track bundle size over time
- Profile in production
Recommended Tools
Next Steps
- Run profiling session
- Identify top 3 bottlenecks
- Implement optimizations
- Measure improvements
- Add performance budgets to CI
- Set up continuous monitoring
Resources
—
EOF
echo "✓ Created performance report: $REPORT"
—
Summary
—
bash
echo ""
echo "=== ✓ Performance Profiling Setup Complete ==="
echo ""
echo "🎯 Runtime Environment: $RUNTIME"
echo ""
echo "📁 Generated files:"
echo " - $REPORT"
if [[ "$RUNTIME" == *"nodejs"* ]]; then
echo " - $PROFILE_DIR/profile-node.sh"
fi
if [[ "$RUNTIME" == *"python"* ]]; then
echo " - $PROFILE_DIR/profile-python.sh"
fi
if [[ "$RUNTIME" == *"browser"* ]]; then
echo " - $PROFILE_DIR/profile-browser.md"
fi
echo ""
echo "🚀 Quick Start:"
echo ""
if [[ "$RUNTIME" == *"nodejs"* ]]; then
echo "Node.js Profiling:"
echo " $PROFILE_DIR/profile-node.sh 1 # Comprehensive diagnosis"
echo " $PROFILE_DIR/profile-node.sh 2 # CPU flame graph"
echo " $PROFILE_DIR/profile-node.sh 6 # HTTP load test"
echo ""
fi
if [[ "$RUNTIME" == *"python"* ]]; then
echo "Python Profiling:"
echo " $PROFILE_DIR/profile-python.sh 1 # CPU profiling"
echo " $PROFILE_DIR/profile-python.sh 2 # Line-by-line"
echo ""
fi
if [[ "$RUNTIME" == *"browser"* ]]; then
echo "Browser Profiling:"
echo " lighthouse http://localhost:3000 --view"
echo " cat $PROFILE_DIR/profile-browser.md # Full guide"
echo ""
fi
echo "📊 Performance Analysis Workflow:"
echo ""
echo "1. Establish baseline performance"
echo " - Run profiling on current code"
echo " - Record key metrics (response time, memory, CPU)"
echo ""
echo "2. Identify bottlenecks"
echo " - Find slowest functions/routes"
echo " - Check database queries"
echo " - Analyze bundle size"
echo ""
echo "3. Optimize iteratively"
echo " - Fix highest-impact issues first"
echo " - Re-profile after each change"
echo " - Measure improvement"
echo ""
echo "4. Set performance budgets"
echo " - Bundle size limits"
echo " - Response time targets"
echo " - Memory usage constraints"
echo ""
echo "💡 Common Bottlenecks:"
echo " - Synchronous blocking operations"
echo " - N+1 database queries"
echo " - Large bundle sizes"
echo " - Unoptimized images"
echo " - Missing caching"
echo " - Inefficient algorithms"
echo ""
echo "🔗 Integration Points:"
echo " - /debug-root-cause - Investigate performance issues"
echo " - /test - Add performance tests"
echo " - /ci-setup - Add performance budgets to CI"
echo ""
echo "View full report: cat $REPORT"
—
Best Practices
—
Profiling Strategy:
- Profile in production-like environment
- Use representative workloads
- Run multiple samples for accuracy
- Focus on 80/20 rule (biggest bottlenecks first)
- Measure before and after optimizations
Common Optimizations:
- Cache frequently accessed data
- Use connection pooling for databases
- Implement lazy loading
- Optimize database queries with indexes
- Use CDN for static assets
- Enable compression
- Minimize bundle size
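To see why compression is usually the first win, compare raw and gzipped sizes of a text asset; a rough sketch with a synthetic, highly compressible file (real bundles compress less dramatically, and the /tmp path is illustrative):

```shell
# Compare raw vs gzipped size of a synthetic text asset (illustrative path)
head -c 100000 /dev/zero | tr '\0' 'a' > /tmp/bundle-demo.js
RAW=$(wc -c < /tmp/bundle-demo.js | tr -d ' ')
GZ=$(gzip -c /tmp/bundle-demo.js | wc -c | tr -d ' ')
echo "raw=${RAW}B gzipped=${GZ}B"
```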
Performance Budgets:
- Set and enforce limits on bundle size
- Target response times for critical paths
- Monitor memory usage patterns
- Track Core Web Vitals (LCP, FID, CLS)
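A bundle-size budget like the one above can be enforced with a few lines of shell in CI; a minimal sketch (the 200 KB figure mirrors the checklist earlier, and the function name and /tmp path are assumptions for illustration):

```shell
# Fail the build when the gzipped size of an asset exceeds a KB budget (sketch)
check_budget() {
  local file="$1" budget_kb="$2"
  # Gzipped size in whole KB, since budgets are usually set on compressed size
  local gz_kb=$(( $(gzip -c "$file" | wc -c) / 1024 ))
  if [ "$gz_kb" -gt "$budget_kb" ]; then
    echo "FAIL: ${gz_kb}KB gzipped exceeds ${budget_kb}KB budget"
    return 1
  fi
  echo "OK: ${gz_kb}KB gzipped within ${budget_kb}KB budget"
}

# Example against a synthetic, compressible file (illustrative path)
head -c 50000 /dev/zero | tr '\0' 'x' > /tmp/app-demo.js
check_budget /tmp/app-demo.js 200
```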
Credits: Performance profiling methodology based on Node.js profiling guide, Python performance documentation, Chrome DevTools documentation, and Web.dev performance best practices. Clinic.js integration patterns from NearForm. Lighthouse CI guidance from Google Chrome team.
—