accelint-ts-performance
TypeScript Performance Optimization
Systematic performance optimization for JavaScript/TypeScript codebases. Combines audit workflow with expert-level optimization patterns for runtime performance.
NEVER Do When Optimizing Performance
Note: For general best practices (type safety without `any`, avoiding `enum` and `null`, not mutating parameters), use the `accelint-ts-best-practices` skill instead. This section focuses exclusively on performance-specific anti-patterns.
- NEVER assume code is cold path - Utility functions, formatters, parsers, and validators appear simple but are frequently called in loops, rendering pipelines, or real-time systems. Always audit ALL code for performance anti-patterns. Do not make assumptions about usage frequency or skip auditing based on perceived simplicity.
- NEVER apply all optimizations blindly - Performance patterns have trade-offs. Balance optimization gains against code complexity. When conducting audits, identify ALL anti-patterns through systematic analysis and report them with expected gains. Let users decide which optimizations to apply based on their specific context.
- NEVER ignore algorithmic complexity - Optimizing O(n²) code with micro-optimizations is futile. For n=1000, an algorithmic fix (O(n²) → O(n)) yields 1000x speedup; micro-optimizations yield 1.1-2x at best. Fix the algorithm first: use Maps/Sets for O(1) lookups, eliminate nested iterations, choose appropriate data structures.
- NEVER sacrifice correctness for speed - Performance bugs are still bugs. Optimizations frequently break edge cases: off-by-one errors in manual loops, wrong behavior for empty arrays, null handling issues. Verify behavior matches before and after. Add comprehensive tests covering edge cases before optimizing—catching bugs in production costs far more than any performance gain.
- NEVER optimize code you don't own - Optimizing shared utilities, library internals, or code actively developed by others creates merge conflicts, duplicates effort, and confuses ownership. Performance changes affect all callers; coordinate with owners or defer optimization until code stabilizes.
- NEVER ignore memory vs CPU trade-offs - Caching trades memory for speed. Unbounded memoization causes memory leaks in long-running applications. A 2x CPU speedup that increases memory 10x can trigger OOM crashes or frequent GC pauses (worse than the original slowness). Profile memory usage alongside CPU; set cache size limits; use WeakMap for lifecycle-bound caches.
- NEVER assume performance across environments - V8 optimizations differ between Node.js versions (v18 vs v20), browsers (Chrome vs Safari), and architectures (x64 vs ARM). An optimization yielding 3x speedup in Chrome may regress 1.5x in Safari. Profile in ALL target environments before shipping; maintain fallback implementations for environment-specific optimizations.
- NEVER chain array methods (`.filter().map().reduce()`) - Each method creates intermediate arrays and iterates separately. For arrays with 10k items, `.filter().map()` allocates 10k + 5k items (if 50% pass the filter) and iterates twice. Use a single `reduce` pass to iterate once with zero intermediate allocations, yielding 2-5x speedup in hot paths.
- NEVER use `Array.includes()` for repeated lookups - `Array.includes()` is O(n) linear search. Checking 1000 items against an array of 100 is O(n×m) = 100k operations. Use `Set.has()` instead: O(1) lookup via hash table, reducing 100k operations to 1000 for ~100x speedup. Build the Set once upfront; the amortized cost is negligible.
- NEVER await before checking if you need the result - `await` suspends execution immediately, even if the value isn't needed. Move `await` into conditional branches that actually use the result. Example: `const data = await fetch(url); if (condition) { use(data); }` wastes I/O time when the condition is false. Better: `if (condition) { const data = await fetch(url); use(data); }` skips the fetch entirely when unneeded.
- NEVER recompute constants inside loops - Recomputing invariants wastes CPU in every iteration. For 10k iterations, an `array.length` lookup (even if cached by the engine) or `Math.max(a, b)` runs 10k times unnecessarily. Hoist invariants outside loops: `const len = array.length; for (let i = 0; i < len; i++)`, or curry functions to precompute constant parameters once.
- NEVER create unbounded loops or queues - Prevents runaway resource consumption from bugs or malicious input. Set explicit limits (`for (let i = 0; i < Math.min(items.length, 10000); i++)`) or timeouts. Unbounded loops can freeze UI threads; unbounded queues cause OOM crashes. Fail fast with clear limits rather than degrading gracefully into unusability.
- NEVER place `try/catch` in hot paths - V8 cannot inline functions containing try-catch blocks and marks the entire function as non-optimizable. A single try-catch in a hot loop causes 3-5x slowdown by preventing inlining, escape analysis, and other optimizations. Validate inputs before hot paths using type guards; move try-catch outside loops to wrap the entire operation; use Result types for expected errors.
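The chained-methods and `Array.includes()` rules above can be sketched together. This is a minimal illustration, not a reference pattern from this skill's files; `Order`, `labelsChained`, and `labelsSinglePass` are hypothetical names:

```typescript
interface Order { userId: string; total: number; }

function labelsChained(orders: Order[], activeIds: string[]): string[] {
  // ❌ Two passes, one intermediate array, and O(n×m) includes() lookups.
  return orders
    .filter(o => activeIds.includes(o.userId))
    .map(o => `${o.userId}:${o.total}`);
}

function labelsSinglePass(orders: Order[], activeIds: string[]): string[] {
  // ✅ Build the Set once; membership checks are O(1) thereafter.
  const active = new Set(activeIds);
  const out: string[] = [];
  for (const o of orders) {
    // Single pass, no intermediate array allocation.
    if (active.has(o.userId)) out.push(`${o.userId}:${o.total}`);
  }
  return out;
}
```

Both versions must return identical results; verify equivalence on real inputs before keeping the optimized form.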
Before Optimizing Performance, Ask
Apply these tests to focus optimization efforts effectively:
Impact Assessment
- Is this code actually slow? When profiling data is available, use it to inform prioritization. When unavailable, audit all code for anti-patterns.
- What percentage of runtime does this represent? When profiling data is available, flame graphs help identify the highest-impact issues. When unavailable, report all anti-patterns found.
- Raw performance matters - Audit ALL code for performance anti-patterns regardless of current usage context. Utility functions, formatters, parsers, and data transformations are frequently called in loops, rendering pipelines, or real-time systems even when they appear simple.
Correctness Verification
- Do I have tests covering this code? Performance bugs are subtle. Comprehensive tests catch regressions from optimizations. Add tests before optimizing.
- What are the edge cases? Off-by-one errors, empty arrays, null/undefined values become more likely with manual loop optimizations. Test exhaustively.
Complexity vs Benefit
- Is the algorithmic complexity optimal? O(n) → O(1) is 1000x speedup. Micro-optimizations are 1.1-2x at best. Fix algorithm first.
- Will this optimization persist? If the code changes frequently, optimization may be discarded soon. Optimize stable code first.
- What's the readability cost? Manual loops are faster but harder to maintain than `.map()`. Balance performance with team velocity.
How to Use
This skill uses progressive disclosure to minimize context usage:
1. Start with the Workflow (SKILL.md)
Follow the 4-phase audit workflow below for systematic performance analysis.
2. Reference Performance Rules Overview (AGENTS.md)
Load AGENTS.md to scan compressed rule summaries organized by category.
3. Load Specific Performance Patterns as Needed
When you identify specific performance issues, load corresponding reference files for detailed ❌/✅ examples.
4. Use the Report Template (For Explicit Audit Requests)
When users explicitly request a performance audit, load the template for consistent reporting:
- assets/output-report-template.md - Structured template with guidance
Performance Optimization Workflow
Two modes of operation:
- Audit Mode - Skill invoked directly (`/accelint-ts-performance <path>`) or user explicitly requests a performance audit
  - Generate a structured audit report using the template (Phases 1-2 only)
  - Report findings for user review before implementation
  - User decides which optimizations to apply
- Implementation Mode - Skill triggers automatically during feature work
  - Identify and apply optimizations directly (all 4 phases)
  - No formal report needed
  - Focus on fixing issues inline
Copy this checklist to track progress:
- [ ] Phase 1: Profile - Identify actual bottlenecks using profiling tools
- [ ] Phase 2: Analyze - Categorize issues by impact and optimization category
- [ ] Phase 3: Optimize - Apply performance patterns from references/
- [ ] Phase 4: Verify - Measure improvements and validate correctness
Phase 1: Profile to Identify Bottlenecks
CRITICAL: Audit ALL code for performance anti-patterns. Do not skip code based on assumptions about usage frequency. Utility functions, formatters, parsers, validators, and data transformations are frequently called in loops, rendering pipelines, or real-time systems even if their implementation appears simple.
When profiling tools are available, use them to establish baseline measurements:
- Browser: Chrome DevTools Performance tab
- Node.js: `node --prof script.js && node --prof-process isolate-*.log`
Whether profiling data is available or not: Perform systematic static code analysis to identify ALL performance anti-patterns:
- O(n²) complexity (nested loops, repeated searches)
- Excessive allocations (template literals, object spreads, array methods)
- Template literal allocation when String() would suffice
- Array method chaining (.filter().map())
- Blocking async operations
- Try/catch in loops
Output: Complete list of ALL identified anti-patterns with their locations and expected performance impact. Do not filter based on "severity" or "priority" - report everything found.
When generating audit reports (when the skill is invoked directly via `/accelint-ts-performance <path>` or the user explicitly requests a performance audit), use the structured template:
- Load assets/output-report-template.md for the report structure
- Follow the template's guidance for consistent formatting and issue grouping
Phase 2: Analyze and Categorize Issues
For EVERY issue identified in Phase 1, categorize by optimization type:
| Issue Type | Category | Expected Gain |
|---|---|---|
| Nested loops, O(n²) complexity | Algorithmic optimization | 10-1000x |
| Repeated expensive computations | Caching & memoization | 2-100x |
| Allocation-heavy code | Allocation reduction | 1.5-5x |
| Sequential access violations | Memory locality | 1.5-3x |
| Excessive I/O operations | I/O optimization | 5-50x |
| Blocking async operations | I/O optimization | 2-10x |
| Property access in loops | Caching & memoization | 1.2-2x |
Quick reference for mapping issues:
Load references/quick-reference.md for detailed issue-to-category mapping and anti-pattern detection.
Output: Categorized list of ALL issues with their optimization categories. Do not filter or prioritize - list everything found in Phase 1.
Phase 3: Optimize Using Performance Patterns
Step 1: Identify your bottleneck category from Phase 2 analysis.
Step 2: Load MANDATORY references for your category. Read each file completely with no range limits.
| Category | MANDATORY Files | Optional | Do NOT Load |
|---|---|---|---|
| Algorithmic (O(n²), nested loops, repeated lookups) | reduce-looping.md<br>reduce-branching.md | — | memoization, caching, I/O, allocation |
| Caching (property access in loops, repeated calculations) | memoization.md<br>cache-property-access.md | cache-storage-api.md (for Storage APIs) | I/O, allocation |
| I/O (blocking async, excessive I/O operations) | batching.md<br>defer-await.md | — | algorithmic, memory |
| Memory (allocation-heavy, GC pressure) | object-operations.md<br>avoid-allocations.md | — | I/O, caching |
| Locality (sequential access violations, cache misses) | predictable-execution.md | — | all others |
| Safety (unbounded loops, runaway queues) | bounded-iteration.md | — | all others |
| Micro-opt (hot path fine-tuning, 1.1-2x improvements) | currying.md<br>performance-misc.md | — | all others (apply only after algorithmic fixes) |
Notes:
- If bottleneck spans multiple categories, load references for all relevant categories
- Only apply micro-optimizations if: bottleneck is in hot path, algorithmic optimization already applied, need additional 1.1-2x performance
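As one illustration of the I/O category above, sequential awaits can be batched with `Promise.all`. This is a sketch, not the exact pattern from batching.md or defer-await.md; `fetchUser` is a hypothetical fetcher:

```typescript
// Hypothetical fetcher standing in for any async I/O call.
async function fetchUser(id: string): Promise<{ id: string }> {
  return { id };
}

async function loadSequential(ids: string[]) {
  const users: Array<{ id: string }> = [];
  for (const id of ids) {
    users.push(await fetchUser(id)); // ❌ each await blocks the next request
  }
  return users;
}

async function loadBatched(ids: string[]) {
  // ✅ all requests in flight concurrently; total latency ≈ slowest request
  return Promise.all(ids.map(id => fetchUser(id)));
}
```

Batching is only safe when the requests are independent; preserve ordering requirements and error handling when converting.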
Step 3: Scan for quick reference during optimization
Load AGENTS.md to see compressed rule summaries organized by category. Use as a quick lookup while implementing patterns from the detailed reference files above.
Apply patterns systematically:
- Load the reference file for the identified issue category
- Scan the ❌/✅ examples to find matching patterns
- Apply the optimization with minimal changes to preserve correctness
- Add comments explaining the optimization and referencing the pattern
Example optimization:
```typescript
// ❌ Before: O(n²) - nested iteration
for (const user of users) {
  const items = allItems.filter(item => item.userId === user.id);
  process(items);
}

// ✅ After: O(n) - single pass with Map lookup
// Performance: reduce-looping.md - build lookup once pattern
const itemsByUser = new Map<string, Item[]>();
for (const item of allItems) {
  if (!itemsByUser.has(item.userId)) {
    itemsByUser.set(item.userId, []);
  }
  itemsByUser.get(item.userId)!.push(item);
}
for (const user of users) {
  const items = itemsByUser.get(user.id) ?? [];
  process(items);
}
```
Phase 4: Verify Improvements
Measure performance gain:
- Re-run profiler with same inputs
- Compare before/after runtime percentages
- Document speedup factor (e.g., "2.3x faster")
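The measurement steps above can be sketched with a small harness using `performance.now()` (global in Node and browsers). The candidate workloads and iteration counts here are placeholders, not a prescribed benchmark methodology:

```typescript
// Return the median runtime of fn over several runs (median damps warm-up noise).
function measure(fn: () => unknown, runs = 5): number {
  const times: number[] = [];
  for (let i = 0; i < runs; i++) {
    const start = performance.now();
    fn();
    times.push(performance.now() - start);
  }
  times.sort((a, b) => a - b);
  return times[Math.floor(runs / 2)];
}

// Usage sketch with placeholder before/after workloads:
const data = Array.from({ length: 10_000 }, (_, i) => i);
const before = measure(() => data.filter(x => x % 2 === 0).map(x => x * 2));
const after = measure(() => {
  const out: number[] = [];
  for (const x of data) if (x % 2 === 0) out.push(x * 2);
  return out;
});
console.log(`speedup: ${(before / after).toFixed(2)}x`);
```

Always benchmark with the same inputs before and after, and prefer a dedicated tool (profiler, benchmark library) for publishable numbers.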
Verify correctness:
- Run existing test suite - all tests must pass
- Add new tests for edge cases affected by optimization
- Manual testing for user-facing functionality
Document optimization:
```typescript
// Performance optimization applied: 2026-01-28
// Issue: Nested iteration causing O(n²) complexity with 10k items
// Pattern: reduce-looping.md - Map-based lookup
// Speedup: 145x faster (5200ms → 36ms)
// Verified: All tests pass, manual QA complete
```

Deciding whether to keep the optimization:
- >10x speedup: Always keep if tests pass
- 2-10x speedup: Keep if tests pass and code remains maintainable
- 1.2-2x speedup: Keep for hot paths (>1000 executions/sec) or real-time systems
- 1.05-1.2x speedup: Keep only if trivial change or critical rendering/animation loop
- <1.05x speedup: Revert unless it also improves readability
Real-time systems (60fps rendering, live data visualization):
Even 1.05x improvements matter in critical hot paths. Use frame timing profiler to verify impact on frame budget (16.67ms for 60fps).
If tests fail: Fix the optimization or revert. Performance bugs are still bugs.
Freedom Calibration
Calibrate guidance specificity to optimization impact:
| Optimization Type | Freedom Level | Guidance Format | Example |
|---|---|---|---|
| Algorithmic (10x+ gain) | Medium freedom | Multiple valid approaches, pick based on constraints | "Use Map for O(1) lookup or Set for deduplication" |
| Caching (2-10x gain) | Medium freedom | Pattern with examples, cache invalidation strategy | "Memoize with WeakMap if lifecycle matches source objects" |
| Micro-optimization (1.1-2x) | Low freedom | Exact pattern from reference, measure first | "Cache array.length in loop: `const len = array.length`" |
The test: "What's the speedup and maintenance cost?"
- 10x+ speedup → Worth complexity, medium freedom with patterns
- 2-10x speedup → Justify with measurements, medium freedom
- 1.2-2x speedup → Valuable for hot paths and real-time systems, low freedom with exact patterns
- 1.05-1.2x speedup → Only if trivial change or critical hot path (60fps rendering, etc.)
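The WeakMap memoization mentioned in the caching row can be sketched as follows; `Doc` and `expensiveSummary` are hypothetical names, and the "expensive" computation is a placeholder. Because the cache is keyed by object identity, entries are garbage-collected along with their source objects, so no size limit is needed:

```typescript
interface Doc { words: string[]; }

const summaryCache = new WeakMap<Doc, string>();

function expensiveSummary(doc: Doc): string {
  const cached = summaryCache.get(doc);
  if (cached !== undefined) return cached;
  // Placeholder for a genuinely expensive computation.
  const summary = doc.words.slice(0, 3).join(" ");
  summaryCache.set(doc, summary); // lifecycle-bound: freed with doc
  return summary;
}
```

This only works when the cache key is an object whose lifetime matches the cached value; for primitive keys, use a bounded Map instead.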
Important Notes
- Audit everything philosophy - Audit ALL code for performance anti-patterns. Utility functions, formatters, parsers, and validators are frequently called in loops or real-time systems even when they appear simple. Do not make assumptions about usage frequency.
- Report all findings - Whether profiling data is available or not, perform systematic static analysis to identify and report ALL anti-patterns with their expected gains. Do not filter based on "severity" or "priority."
- Reference files are authoritative - The patterns in references/ have been validated. Follow them exactly unless measurements prove otherwise.
- Hot path definition - Code executed >1000 times per user interaction or >100 times per second in server contexts. For real-time systems (60fps rendering, live visualization), hot paths are functions in the critical rendering loop consuming >1ms per frame.
- Real-time systems have stricter requirements - 60fps = 16.67ms frame budget. 120fps = 8.33ms. Even 1.05x improvements in hot paths are valuable. Profile with frame timing, not just total execution time.
- Regression testing - Performance optimizations frequently introduce subtle bugs in edge cases. Add tests before optimizing.
- Memory profiling matters - Some optimizations (memoization, caching) trade memory for speed. Monitor memory usage in production, especially for long-running real-time applications.
Quick Decision Tree
Use this table to rapidly identify which optimization category applies.
Audit everything: Identify ALL performance anti-patterns in the code regardless of current usage context. Report all findings with expected gains.
| If You See... | Root Cause | Optimization Category | Expected Gain |
|---|---|---|---|
| Nested loops over the same data | O(n²) complexity | Algorithmic (reduce-looping) | 10-1000x |
| Chained `.filter().map().reduce()` | Multiple passes over data | Algorithmic (reduce-looping) | 2-10x |
| Repeated `Array.includes()` calls | O(n) linear search | Algorithmic (reduce-looping, use Set/Map) | 10-100x |
| Many branches on the same variable | Branch-heavy code | Algorithmic (reduce-branching) | 1.5-3x |
| Same function called with same inputs repeatedly | Redundant computation | Caching (memoization) | 2-100x |
| Repeated property access inside loops | Property access overhead | Caching (cache-property-access) | 1.2-2x |
| Storage API calls inside loops | Expensive I/O in loop | Caching (cache-storage-api) | 5-20x |
| Multiple sequential `await`s | Sequential I/O blocking | I/O (batching, defer-await) | 2-10x |
| `await` before a conditional that uses the result | Premature async suspension | I/O (defer-await) | 1.5-3x |
| Many object spreads | Allocation overhead | Memory (avoid-allocations) | 1.5-5x |
| Creating objects/arrays inside hot loops | GC pressure from allocations | Memory (avoid-allocations) | 2-5x |
| Copying objects when mutation is safe | Unnecessary immutability cost | Memory (object-operations) | 1.5-3x |
| Accessing array elements non-sequentially | Cache locality issues | Memory Locality (predictable-execution) | 1.5-3x |
| Unbounded loops or queues | Runaway resource usage | Safety (bounded-iteration) | Prevents crashes |
| Function called with mostly same first N params | Repeated parameter passing | Micro-opt (currying) | 1.1-1.5x |
| `try/catch` inside hot loops | V8 deoptimization | Micro-opt (performance-misc) | 3-5x |
| String concatenation in loop with `+=` | Quadratic string copying | Micro-opt (performance-misc) | 2-10x |
How to use this table:
- Identify the pattern from profiler bottleneck
- Find matching row in "If You See..." column
- Jump to corresponding Optimization Category in Phase 3
- Load MANDATORY reference files for that category
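For instance, the string-concatenation row above maps to a simple fix. This sketch uses hypothetical function names; note that modern engines partially mitigate `+=` with rope-like internal representations, so measure before and after:

```typescript
function joinQuadratic(parts: string[]): string {
  let out = "";
  for (const p of parts) {
    out += p + ","; // ❌ may copy the growing string on each iteration
  }
  return out.slice(0, -1); // drop trailing comma
}

function joinLinear(parts: string[]): string {
  // ✅ collect pieces, copy once at the end
  return parts.join(",");
}
```

Both return identical results for non-empty and empty inputs, which is the equivalence to verify before keeping the change.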