perf-optimizer

Performance Optimizer

Systematic workflow for diagnosing and fixing performance issues in the OneKey mobile app using the perf-ci infrastructure and performance-server tooling.

Overview

This skill provides a structured iterative approach to:
  • Establish performance baselines from existing sessions
  • Run controlled perf measurements (3 runs, median aggregation)
  • Analyze session data to identify bottlenecks
  • Make targeted code changes
  • Verify improvements against thresholds
  • Document all changes and results
Key Metrics:
  • tokensStartMs: Time when Home tokens refresh starts (lower is better)
  • tokensSpanMs: Duration of Home tokens refresh (lower is better)
  • functionCallCount: Total function calls during session (lower is better)
Success Criteria:
  • SUCCESS: Time metrics improve by ≥10%
  • 🌟 MINOR_IMPROVEMENT: Time unchanged but function calls reduce by ≥20% (safe, small-scope changes)
  • NO_IMPROVEMENT: Neither threshold met → revert changes

Workflow

Phase 1: Setup and Baseline

Step 1.1: Select Baseline Session

Ask user to choose a baseline session or help them select one:

```bash
# List recent sessions with key metrics
cat ~/perf-sessions/sessions.overview.jsonl |
  jq -r '[.sessionId, .createdAt, .marks["Home:refresh:done:tokens"]] | @tsv' |
  tail -20
```

User can specify:
- A known good session (for regression fixes)
- A recent session (for improvement work)
- Let you choose a representative session

Step 1.2: Analyze Baseline

Extract baseline metrics from the session:

```bash
# Get detailed analysis
node development/performance-server/cli/derive-session.js <baseline-sessionId> \
  --pretty \
  --output /tmp/perf-baseline-derived.json
```

Read baseline metrics from `~/perf-sessions/<sessionId>/mark.log`:

```bash
# Extract tokensStartMs (timestamp of Home:refresh:start:tokens)
grep "Home:refresh:start:tokens" ~/perf-sessions/<sessionId>/mark.log | jq '.timestamp'

# Extract tokensSpanMs (done - start)
grep "Home:refresh:done:tokens" ~/perf-sessions/<sessionId>/mark.log | jq '.timestamp'

# Count function calls
wc -l < ~/perf-sessions/<sessionId>/function_call.log
```

Create baseline metrics JSON for comparison:

```bash
echo '{"tokensStartMs": <start>, "tokensSpanMs": <span>, "functionCallCount": <count>}' > /tmp/baseline-metrics.json
```
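A minimal end-to-end sketch of this step, using a mock session directory so it is self-contained. In the real workflow the directory is `~/perf-sessions/<sessionId>` and the log contents come from the app; the `name` field in the mock mark.log lines is an assumption for illustration (only the `timestamp` extraction mirrors the commands above):

```bash
# Mock stand-in for ~/perf-sessions/<sessionId> with two marks and 3 calls
SESSION_DIR=$(mktemp -d)
printf '%s\n' \
  '{"name":"Home:refresh:start:tokens","timestamp":1200}' \
  '{"name":"Home:refresh:done:tokens","timestamp":4700}' > "$SESSION_DIR/mark.log"
printf 'call\ncall\ncall\n' > "$SESSION_DIR/function_call.log"

# Same extraction commands as above, against the mock logs
START=$(grep "Home:refresh:start:tokens" "$SESSION_DIR/mark.log" | jq '.timestamp')
DONE=$(grep "Home:refresh:done:tokens" "$SESSION_DIR/mark.log" | jq '.timestamp')
COUNT=$(wc -l < "$SESSION_DIR/function_call.log")

# tokensSpanMs = done - start; write the comparison JSON next to the mock data
jq -n --argjson start "$START" --argjson done "$DONE" --argjson count "$COUNT" \
  '{tokensStartMs: $start, tokensSpanMs: ($done - $start), functionCallCount: $count}' \
  > "$SESSION_DIR/baseline-metrics.json"
cat "$SESSION_DIR/baseline-metrics.json"
```

With these mock timestamps the resulting JSON has `tokensSpanMs` of 3500 (4700 − 1200) and a `functionCallCount` of 3.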

Step 1.3: Initialize Documentation

Create a session document at development/output/perf-optimization-<timestamp>.md using the template from references/template.md. Fill in:
  • Current date/time
  • Baseline session ID
  • Baseline metrics
  • Current branch name
  • Target (regression fix or improvement)

Phase 2: Iterative Optimization Loop

Maximum iterations: 10
For each iteration (run in a sub-agent):

Step 2.1: Run Performance Tests

The perf script automatically runs 3 times and aggregates results:

```bash
node development/perf-ci/run-ios-perf-detox-release.js
```

Output location: development/perf-ci/output/<jobId>/
  • report.json - Contains aggregated results in the agg field
  • detox/runs.json - Contains individual run sessionIds

Extract current metrics from report.json:

```bash
# Read aggregated metrics directly
cat development/perf-ci/output/<jobId>/report.json | jq '{ tokensStartMs: .agg.tokensStartMs, tokensSpanMs: .agg.tokensSpanMs, functionCallCount: .agg.functionCallCount }' > /tmp/current-metrics.json
```

Step 2.2: Analyze Current Performance

For deeper analysis, run derive-session on individual sessions:

```bash
# Get sessionIds from the run
SESSIONS=$(cat development/perf-ci/output/<jobId>/detox/runs.json | jq -r '.runs[].sessionId')

# Analyze each session
for sid in $SESSIONS; do
  node development/performance-server/cli/derive-session.js $sid \
    --pretty \
    --output /tmp/perf-derived-$sid.json
done
```

Focus on these sections in the derived output:
- **slowFunctions**: Functions taking the most cumulative time
- **homeRefreshTokens**: What's consuming time in the critical refresh window
- **jsblock**: Main thread blocks causing delays
- **repeatedCalls**: Thrashing patterns or excessive re-renders
- **keyMarks**: Critical milestone timing

Identify the top 1-3 bottlenecks that are:
- Taking significant time
- Potentially optimizable
- Within the critical path (Home refresh flow)

Step 2.3: Determine Action

Compare current metrics to baseline:

```bash
# Quick comparison
cat /tmp/baseline-metrics.json
cat /tmp/current-metrics.json

# Calculate deltas manually or use the script in the skill directory
```


**Decision tree:**

If current metrics show improvement over baseline:
- ✅ **SUCCESS** (≥10% time improvement) → **STOP**, document success
- 🌟 **MINOR_IMPROVEMENT** (≥20% function call reduction, time stable) → Create branch, commit, return to main branch, continue

If no improvement yet:
- Continue to Step 2.4 (make changes)

If iteration count reaches 10:
- Document findings and stop
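One way to script the comparison above is with jq. The 10% and 20% thresholds mirror the success criteria; the mock metric values and the 2% "time stable" tolerance in the MINOR_IMPROVEMENT branch are illustrative assumptions, not part of the tooling:

```bash
# Sketch of the delta/verdict calculation. Metric values are mocks; in the
# real workflow the two files come from Step 1.2 and Step 2.1.
BASE=$(mktemp)
CUR=$(mktemp)
echo '{"tokensStartMs": 1200, "tokensSpanMs": 4000, "functionCallCount": 50000}' > "$BASE"
echo '{"tokensStartMs": 1100, "tokensSpanMs": 3400, "functionCallCount": 38000}' > "$CUR"

VERDICT=$(jq -n -r --slurpfile b "$BASE" --slurpfile c "$CUR" '
  $b[0] as $base | $c[0] as $cur |
  (($base.tokensSpanMs - $cur.tokensSpanMs) / $base.tokensSpanMs) as $timeGain |
  (($base.functionCallCount - $cur.functionCallCount) / $base.functionCallCount) as $callGain |
  if $timeGain >= 0.10 then "SUCCESS"
  elif $timeGain > -0.02 and $callGain >= 0.20 then "MINOR_IMPROVEMENT"
  else "NO_IMPROVEMENT" end')
echo "$VERDICT"
```

Here the mock span drops from 4000ms to 3400ms (a 15% gain), so the sketch prints SUCCESS.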


Step 2.4: Make Code Changes

Based on analysis, make ONE targeted change per iteration:
Change types:
  1. Optimization: Remove redundant work, cache results, reduce allocations
  2. Add perfMark: Add marks to understand unclear bottlenecks better
  3. Both: Add marks + optimize in same area
Guidelines:
  • One change at a time (unless analysis proves multiple changes must work together)
  • Small, focused changes
  • Safe changes only (never break functionality)
  • Document rationale clearly
Adding perfMarks:
Use the performance utilities in packages/shared/src/performance/:

```typescript
import { perfMark } from '@onekeyhq/shared/src/performance/perfMark';

// Add mark at a specific point
perfMark('MyComponent:operation:start');
// ... operation ...
perfMark('MyComponent:operation:done');
```

Naming convention: <Component>:<action>:<phase> (e.g., Home:refresh:start:tokens)
If adding perfMarks for investigation:
  1. Add marks around suspected bottleneck
  2. Run one perf cycle with marks
  3. Analyze new data with marks visible
  4. Then make code optimization
  5. Verify with another perf cycle

Step 2.5: Document Iteration

Update the session document with:
  • Analysis: Job ID, session IDs, median metrics, key findings from derive-session
  • Code Changes: File, location, change type, description, rationale
  • Verification Results: New job ID, metrics, deltas vs previous/baseline, verdict, action taken

Phase 3: Finalization

Step 3.1: Handle MINOR_IMPROVEMENT Branch

If any iterations resulted in MINOR_IMPROVEMENT:
```bash
git checkout -b perf/minor-<description>
git add <changed-files>
git commit -m "perf: <description>

Reduces function call count by X% while maintaining time metrics.

Reason: <brief explanation>
"
git checkout <original-branch>
git restore .
```

Document the branch name in the session document.

Step 3.2: Complete Documentation

Fill in the Summary section:
  • Total iterations run
  • Final result (SUCCESS with % improvement, or still investigating)
  • List all effective changes
  • List all ineffective changes with reasons
  • List any branches created
  • Next steps if incomplete

Key Files and Paths

Perf Infrastructure:
  • development/perf-ci/run-ios-perf-detox-release.js - Main perf runner
  • development/perf-ci/output/<jobId>/ - Job output directory
  • development/performance-server/cli/derive-session.js - Session analyzer
  • ~/perf-sessions/ - Session data storage (default)
  • ~/perf-sessions/sessions.overview.jsonl - Session index
Thresholds:
  • development/perf-ci/thresholds/ios.release.json - Release mode thresholds
Performance Utilities:
  • packages/shared/src/performance/perfMark.ts - Performance marking utility

References

  • references/template.md: Session documentation template
  • references/perf_tool_guide.md: Detailed guide to derive-session and analysis tools

Important Notes

  1. Run each optimization loop in a sub-agent to avoid context bloat
  2. Never commit changes unless SUCCESS or MINOR_IMPROVEMENT
  3. Always document failed attempts - helps avoid repeating ineffective changes
  4. Trust the data - if metrics don't improve, revert even if change "should" help
  5. Be patient - each perf run takes significant time (build + 3 runs); rushing leads to mistakes
  6. Focus on the critical path - Home screen tokens refresh is the key metric
  7. Watch for trade-offs - some optimizations might reduce one metric but increase another