High Performance Browser Networking Framework
A systematic approach to web performance optimization grounded in how browsers, protocols, and networks actually work. Apply these principles when building frontend applications, reviewing performance budgets, configuring servers, or diagnosing slow page loads.
Core Principle
Latency, not bandwidth, is the bottleneck. Most web performance problems stem from too many round trips, not too little throughput. A 5x bandwidth increase yields diminishing returns; a 5x latency reduction transforms the user experience.
The foundation: Every network request passes through DNS resolution, TCP handshake, TLS negotiation, and HTTP exchange before a single byte of content arrives. Each step adds round-trip latency. High-performance applications minimize round trips, parallelize requests, and eliminate unnecessary network hops. Understanding the protocol stack is not optional -- it is the prerequisite for meaningful optimization.
Scoring
Goal: 10/10. When reviewing or building web applications, rate performance 0-10 based on adherence to the principles below. A 10/10 means full alignment with all guidelines; lower scores indicate gaps to address. Always provide the current score and specific improvements needed to reach 10/10.
The High Performance Browser Networking Framework
Six domains for building fast, resilient web applications:
1. Network Fundamentals
Core concept: Every HTTP request pays a latency tax: DNS lookup, TCP three-way handshake, and TLS negotiation -- all before any application data flows. Reducing or eliminating these round trips is the single highest-leverage optimization.
Why it works: Light travels at a finite speed. A packet from New York to London takes ~28ms one way regardless of bandwidth. TCP slow start means new connections begin transmitting slowly. TLS adds 1-2 more round trips. These physics-level constraints cannot be solved with bigger pipes -- only with fewer trips.
Key insights:
- TCP three-way handshake adds one full RTT before data transfer begins
- TCP slow start limits initial throughput to ~14KB (10 segments) in the first round trip -- keep critical resources under this threshold
- TLS 1.2 adds 2 RTTs; TLS 1.3 reduces this to 1 RTT (0-RTT with session resumption)
- Head-of-line blocking in TCP means one lost packet stalls all streams on that connection
- Bandwidth-delay product determines in-flight data capacity; high-latency links underutilize bandwidth
- DNS resolution can add 20-120ms; pre-resolve third-party domains with `dns-prefetch`
Code applications:
| Context | Pattern | Example |
|---|---|---|
| Connection warmup | Pre-establish connections to critical origins | `<link rel="preconnect" href="https://cdn.example.com">` |
| DNS prefetch | Resolve third-party domains early | `<link rel="dns-prefetch" href="https://analytics.example.com">` |
| TLS optimization | Enable TLS 1.3 and session resumption | Server config (nginx): `ssl_protocols TLSv1.2 TLSv1.3;` |
| Initial payload | Keep critical HTML under 14KB compressed | Inline critical CSS, defer non-essential scripts |
| Connection reuse | Keep-alive connections to avoid repeated handshakes | HTTP/1.1 keep-alive is on by default; avoid sending `Connection: close` |
See: references/network-fundamentals.md for TCP congestion control, bandwidth-delay product, and TLS handshake details.
2. HTTP Protocol Evolution
Core concept: HTTP has evolved from a simple request-response protocol to a multiplexed, binary, server-push-capable system. Choosing the right protocol version and configuring it properly eliminates entire categories of performance problems.
Why it works: HTTP/1.1 forces browsers into workarounds like domain sharding and sprite sheets because it cannot multiplex requests. HTTP/2 solves multiplexing but inherits TCP head-of-line blocking. HTTP/3 (QUIC) moves to UDP, eliminating head-of-line blocking and enabling connection migration. Each generation removes a bottleneck.
Key insights:
- HTTP/1.1 allows only one outstanding request per TCP connection; browsers open 6 connections per host as a workaround
- HTTP/2 multiplexes unlimited streams over a single TCP connection, making domain sharding counterproductive
- HPACK header compression in HTTP/2 reduces repetitive header overhead by 85-95%
- HTTP/3 runs over QUIC (UDP), eliminating TCP head-of-line blocking and enabling 0-RTT connection resumption
- Server Push (HTTP/2) sends resources before the browser requests them -- use sparingly and prefer `103 Early Hints` instead
- Connection coalescing in HTTP/2 lets one connection serve multiple hostnames sharing a certificate
Code applications:
| Context | Pattern | Example |
|---|---|---|
| HTTP/2 migration | Remove HTTP/1.1 workarounds | Undo domain sharding, remove sprite sheets, stop concatenating files |
| Stream prioritization | Signal resource importance to the server | CSS and fonts at highest priority; images at lower priority |
| 103 Early Hints | Send preload hints before the full response | Server sends `103` with `Link: </app.css>; rel=preload` before the final response |
| QUIC/HTTP/3 | Enable HTTP/3 on CDN or origin | Add an `Alt-Svc: h3=":443"` response header; enable HTTP/3 on the CDN |
| Header optimization | Minimize custom headers to reduce overhead | Audit cookies and custom headers; remove unnecessary ones |
See: references/http-protocols.md for protocol comparison, migration strategies, and server push vs. Early Hints.
3. Resource Loading and Critical Rendering Path
Core concept: The browser must build the DOM, CSSOM, and render tree before painting pixels. Any resource that blocks this pipeline delays first paint. Optimizing the critical rendering path means identifying and eliminating these bottlenecks.
Why it works: CSS is render-blocking: the browser will not paint until all CSS is parsed. JavaScript is parser-blocking by default: a `<script>` tag halts DOM construction until the script downloads and executes. Fonts can block text rendering for up to 3 seconds. Each blocking resource adds latency directly to time-to-first-paint.
Key insights:
- Critical rendering path: HTML bytes -> DOM -> CSSOM -> Render Tree -> Layout -> Paint -> Composite
- CSS blocks rendering; JavaScript blocks parsing -- these have different optimization strategies
- `async` downloads scripts in parallel and executes immediately; `defer` downloads in parallel but executes after DOM parsing
- `<link rel="preload">` fetches critical resources at high priority without blocking rendering
- `<link rel="prefetch">` fetches resources for likely next navigations at low priority
- Inline critical CSS (above-the-fold styles) and defer the rest to eliminate the render-blocking CSS request
- Fonts: use `font-display: swap` to avoid invisible text during font loading
Code applications:
| Context | Pattern | Example |
|---|---|---|
| Critical CSS | Inline above-the-fold styles in `<head>` | `<style>` block with critical rules; load the full stylesheet asynchronously |
| Script loading | Use `defer` for most scripts; `async` for independent ones | `<script defer src="app.js"></script>` |
| Resource hints | Preload critical fonts, hero images, above-fold assets | `<link rel="preload" href="/hero.avif" as="image">` |
| Image optimization | Lazy-load below-fold images; use modern formats | `<img loading="lazy">`; serve AVIF/WebP via `<picture>` |
| Font loading | Prevent invisible text with font-display | `font-display: swap` in each `@font-face` rule |
See: references/resource-loading.md for async/defer behavior, resource hint strategies, and image optimization.
4. Caching Strategies
Core concept: The fastest network request is one that never happens. A layered caching strategy -- browser memory, disk cache, service worker, CDN, and origin -- dramatically reduces load times for repeat visitors and subsequent navigations.
Why it works: Cache-Control headers tell the browser and intermediaries exactly how long a response remains valid. Content-hashed URLs enable aggressive immutable caching. Service workers provide a programmable cache layer that works offline. Each cache hit eliminates a full network round trip.
Key insights:
- `Cache-Control: max-age=31536000, immutable` for content-hashed static assets (JS, CSS, images)
- `Cache-Control: no-cache` still caches but revalidates every time -- use for HTML documents
- `ETag` and `Last-Modified` enable conditional requests (`304 Not Modified`) that save bandwidth
- `stale-while-revalidate` serves cached content immediately while fetching a fresh copy in the background
- Service workers intercept fetch requests and can serve from cache, fall back to network, or implement custom strategies
- CDN caching moves content closer to users, reducing RTT; configure `Vary` headers correctly to avoid cache pollution
Code applications:
| Context | Pattern | Example |
|---|---|---|
| Static assets | Long-lived immutable cache with hash busting | `app.3f9c2a.js` + `Cache-Control: public, max-age=31536000, immutable` |
| HTML documents | Revalidate on every request | `Cache-Control: no-cache` |
| API responses | Short TTL with stale-while-revalidate | `Cache-Control: max-age=60, stale-while-revalidate=300` |
| Offline support | Service worker cache-first strategy | Cache static shell; network-first for dynamic content |
| CDN config | Cache at edge with proper Vary headers | `Vary: Accept-Encoding`; avoid high-cardinality `Vary` values |
See: references/caching-strategies.md for cache hierarchy, service worker patterns, and CDN configuration.
5. Core Web Vitals Optimization
Core concept: Core Web Vitals -- LCP, INP, and CLS -- are Google's user-centric performance metrics that directly impact search ranking and user experience. Each metric targets a different phase: loading (LCP), interactivity (INP), and visual stability (CLS).
Why it works: These metrics measure what users actually experience, not what servers report. A page can have a fast TTFB but terrible LCP if the hero image loads late. A page can load quickly but feel sluggish if main-thread JavaScript blocks input handling (poor INP). Optimizing for these metrics means optimizing for real user perception.
Key insights:
- LCP (Largest Contentful Paint): target < 2.5s -- optimize the largest visible element (hero image, heading block, or video poster)
- INP (Interaction to Next Paint): target < 200ms -- keep main thread free; break long tasks; use `requestIdleCallback` for non-urgent work
- CLS (Cumulative Layout Shift): target < 0.1 -- reserve space for dynamic content; set explicit dimensions on images and embeds
- TTFB (Time to First Byte): target < 800ms -- optimize server response time, use CDN, enable compression
- FCP (First Contentful Paint): target < 1.8s -- eliminate render-blocking resources, inline critical CSS
- Measure with Real User Monitoring (RUM) in production, not just synthetic tests in lab conditions
Code applications:
| Context | Pattern | Example |
|---|---|---|
| LCP optimization | Preload LCP element; set `fetchpriority="high"` | `<link rel="preload" as="image" href="/hero.avif" fetchpriority="high">` |
| INP optimization | Break long tasks; yield to main thread | Split work with `setTimeout(0)` or `scheduler.yield()` between chunks |
| CLS prevention | Reserve space for async content | Explicit `width`/`height` on images; CSS `aspect-ratio` for embeds |
| TTFB reduction | CDN, server-side caching, streaming SSR | Edge rendering with streamed HTML so the shell arrives early |
| Performance budget | Set thresholds and block deploys that exceed them | LCP < 2.5s, INP < 200ms, CLS < 0.1 in CI pipeline |
See: references/core-web-vitals.md for measurement tools, debugging workflows, and optimization checklists.
6. Real-Time Communication
Core concept: When data must flow continuously between client and server, choosing the right transport -- WebSocket, SSE, or long polling -- determines latency, resource usage, and scalability.
Why it works: HTTP's request-response model creates overhead for real-time data. WebSocket establishes a persistent full-duplex connection with minimal framing overhead (~2 bytes per frame). Server-Sent Events (SSE) provide a simpler server-to-client push over standard HTTP. The right choice depends on whether communication is unidirectional or bidirectional, how frequently data flows, and infrastructure constraints.
Key insights:
- WebSocket: full-duplex, minimal framing overhead, ideal for chat, gaming, and collaborative editing
- SSE: server-to-client only, auto-reconnects, works through HTTP proxies, simpler to implement than WebSocket
- Long polling: fallback when WebSocket/SSE are unavailable; high overhead from repeated HTTP requests
- WebSocket connections bypass HTTP/2 multiplexing -- each WebSocket is a separate TCP connection
- Implement heartbeat/ping frames to detect dead connections; mobile networks silently drop idle connections
- Connection management: exponential backoff on reconnection; queue messages during disconnection
Code applications:
| Context | Pattern | Example |
|---|---|---|
| Chat / collaboration | WebSocket with heartbeat and reconnection | `new WebSocket("wss://...")` plus periodic ping frames |
| Live feeds / notifications | SSE for server-to-client streaming | `new EventSource("/events")` with named event listeners |
| Legacy fallback | Long polling when WebSocket is blocked | Repeated `fetch` with a long server-side hold |
| Connection resilience | Exponential backoff on reconnection | Delay: 1s, 2s, 4s, 8s... capped at 30s |
| Scaling | Use a pub/sub broker behind WebSocket servers | Redis Pub/Sub or NATS for horizontal scaling |
See: references/real-time-communication.md for WebSocket lifecycle, SSE patterns, and scaling strategies.
Common Mistakes
| Mistake | Why It Fails | Fix |
|---|---|---|
| Adding bandwidth to fix slow pages | Latency, not bandwidth, is the bottleneck for most web traffic | Reduce round trips: preconnect, cache, CDN |
| Loading all JS upfront | Parser-blocking scripts delay first paint and interactivity | Code-split; use `defer`/`async`; load below-fold features on interaction |
| No resource hints | Browser discovers critical resources too late in the parse | Add `preconnect`, `preload`, and `dns-prefetch` for critical origins and assets |
| `Cache-Control` missing or one blanket policy | Every visit re-downloads all resources from origin | Set per-asset `Cache-Control`: `immutable` for hashed files, `no-cache` for HTML |
| Ignoring CLS | Layout shifts destroy user trust and hurt search ranking | Set explicit dimensions on all images, embeds, and ads |
| Using WebSocket for everything | Unnecessary complexity when SSE or HTTP polling suffices | Match transport to data flow pattern; SSE for server push |
| Domain sharding on HTTP/2 | Defeats multiplexing; creates extra TCP connections | Consolidate to one origin; let HTTP/2 multiplex |
| No compression | HTML, CSS, JS transfer at full size, wasting bandwidth | Enable Brotli (preferred) or Gzip on server and CDN |
Quick Diagnostic
| Question | If No | Action |
|---|---|---|
| Is TTFB under 800ms? | Server or network too slow | Add CDN, enable server caching, check backend |
| Is LCP under 2.5s? | Largest element loads too late | Preload LCP resource; set `fetchpriority="high"` |
| Is INP under 200ms? | Main thread blocked during interactions | Break long tasks; defer non-critical JS |
| Is CLS under 0.1? | Elements shift after initial render | Set explicit dimensions; reserve space for dynamic content |
| Are static assets cached with content hashes? | Repeat visitors re-download everything | Add hash to filenames; set `Cache-Control: max-age=31536000, immutable` |
| Is HTTP/2 or HTTP/3 enabled? | Missing multiplexing and header compression | Enable HTTP/2 on server; add HTTP/3 via CDN |
| Are render-blocking resources minimized? | CSS and sync JS delay first paint | Inline critical CSS; add `defer` to scripts |
| Is compression enabled (Brotli/Gzip)? | Transferring uncompressed text resources | Enable Brotli on server/CDN; fall back to Gzip |
Reference Files
- network-fundamentals.md: TCP handshake, congestion control, TLS optimization, DNS resolution, head-of-line blocking
- http-protocols.md: HTTP/1.1 workarounds, HTTP/2 multiplexing, HTTP/3 and QUIC, migration strategies
- resource-loading.md: Critical rendering path, async/defer, resource hints, image and font optimization
- caching-strategies.md: Cache-Control headers, service workers, CDN configuration, cache invalidation
- core-web-vitals.md: LCP, INP, CLS optimization, measurement tools, performance budgets
- real-time-communication.md: WebSocket, SSE, long polling, connection management, scaling
Further Reading
This skill is based on Ilya Grigorik's comprehensive guide to browser networking and web performance:
- "High Performance Browser Networking" by Ilya Grigorik (the complete reference for networking protocols, browser internals, and performance optimization)
- hpbn.co -- Free online edition maintained by the author
About the Author
Ilya Grigorik is a web performance engineer, author, and developer advocate who spent over a decade at Google working on Chrome, web platform performance, and HTTP standards. He was a co-chair of the W3C Web Performance Working Group and contributed to the development of HTTP/2 and related web standards. His book High Performance Browser Networking (O'Reilly, 2013) is widely regarded as the definitive reference for understanding how browsers interact with the network -- from TCP and TLS fundamentals through HTTP protocol evolution to real-time communication patterns. Grigorik's approach emphasizes that meaningful optimization requires understanding the underlying protocols, not just applying surface-level tricks, and that latency is the fundamental constraint shaping web performance.