marketplace-rate-limiting
API Rate Limiting & Resilience
When this skill applies
Use this skill when building any integration that calls VTEX APIs — catalog sync, order processing, price/inventory updates, or fulfillment operations — and needs to handle rate limits gracefully without losing data or degrading performance.
- Implementing retry logic with exponential backoff and jitter
- Reading and reacting to VTEX rate limit headers (`Retry-After`, `X-RateLimit-Remaining`, `X-RateLimit-Reset`)
- Building circuit breakers for high-throughput integrations
- Controlling request throughput with queuing
Do not use this skill for:
- Catalog-specific synchronization logic (see `marketplace-catalog-sync`)
- Order event consumption and processing (see `marketplace-order-hook`)
- Invoice and tracking submission (see `marketplace-fulfillment`)
Decision rules
- Always implement exponential backoff with jitter on 429 responses. Formula: `delay = min(maxDelay, baseDelay * 2^attempt) * (0.5 + random(0, 0.5))`.
- Always read the `Retry-After` header on 429 responses. Use the greater of the `Retry-After` value and the calculated backoff delay.
- Use a circuit breaker when a service consistently fails (e.g., after 5 consecutive failures), to prevent cascading failures and give the service time to recover.
- Use a request queue to control throughput and avoid bursts that trigger rate limits.
- Monitor `X-RateLimit-Remaining` proactively on successful responses and slow down before hitting 429.
- VTEX rate limits vary by API:
  - Pricing API: PUT/POST: 40 requests/second/account with 1000 burst credits. DELETE: 16 requests/second/account with 300 burst credits.
  - Catalog API: Varies by endpoint; no published fixed limits.
  - Orders API: Subject to general platform limits; VTEX recommends a 1-minute backoff on 429.
- Burst Credits: When you exceed the rate limit, excess requests consume burst credits (1 credit per excess request). When burst credits reach 0, the request is blocked with 429. Credits refill over time at the same rate as the route's limit when the route is not being used (a minimal token-bucket sketch follows this list).
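To make the burst-credit behavior concrete, here is a minimal sketch that models it as a token bucket. The refill semantics are an assumption inferred from the description above — VTEX does not publish the exact algorithm — so treat this as illustrative, not as the platform's implementation.

```typescript
// Illustrative token-bucket model of burst credits (an assumption, not
// VTEX's published algorithm). Credits drain one per over-limit request
// and refill at the route's rate while the route is idle.
class BurstCreditModel {
  private credits: number;
  private lastRefillAt: number = Date.now();

  constructor(
    private readonly refillRatePerSec: number, // e.g., 40 for Pricing PUT/POST
    private readonly maxCredits: number // e.g., 1000 burst credits
  ) {
    this.credits = maxCredits;
  }

  /** Attempts to spend one credit for an over-limit request. */
  tryConsume(): boolean {
    this.refill();
    if (this.credits >= 1) {
      this.credits -= 1;
      return true; // Request passes; one credit spent
    }
    return false; // Credits exhausted — the platform would respond 429
  }

  private refill(): void {
    const now = Date.now();
    const elapsedSec = (now - this.lastRefillAt) / 1000;
    this.lastRefillAt = now;
    this.credits = Math.min(
      this.maxCredits,
      this.credits + elapsedSec * this.refillRatePerSec
    );
  }
}
```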
Rate Limit Response Headers:
| Header | Description |
|---|---|
| `Retry-After` | Seconds to wait before retrying (present on 429 responses) |
| `X-RateLimit-Remaining` | Number of requests remaining in the current window |
| `X-RateLimit-Reset` | Timestamp (seconds) when the rate limit window resets |
Architecture/Data Flow:
```text
Your Integration                       VTEX API
      │                                    │
      │── Request ────────────────────────▶│
      │◀── 200 OK ─────────────────────────│ (success)
      │                                    │
      │── Request ────────────────────────▶│
      │◀── 429 + Retry-After: 30 ──────────│ (rate limited)
      │                                    │
      │ [Wait: max(Retry-After, backoff)]  │
      │ [backoff = base * 2^attempt * jitter]
      │                                    │
      │── Retry ──────────────────────────▶│
      │◀── 200 OK ─────────────────────────│ (success)
```
Hard constraints
Constraint: Implement Exponential Backoff on 429 Responses
When receiving a 429 response, the integration MUST wait before retrying, using exponential backoff with jitter. The wait time MUST respect the `Retry-After` header when present.
Why this matters
Immediate retries after a 429 will be rejected again and consume burst credits faster, leading to prolonged blocking. Without jitter, all clients retry simultaneously after the window resets, causing another rate limit spike (thundering herd).
Detection
If you see immediate retry on 429 (no delay, no backoff) → STOP and implement exponential backoff. If you see retry logic without reading the `Retry-After` header → warn that the header should be respected. If you see `while(true)` retry loops or `setInterval` with intervals less than 5 seconds → warn about tight loops.
Correct
```typescript
import axios, { AxiosInstance, AxiosError, AxiosRequestConfig, AxiosResponse } from "axios";
interface RetryConfig {
maxRetries: number;
baseDelayMs: number;
maxDelayMs: number;
}
const DEFAULT_RETRY_CONFIG: RetryConfig = {
maxRetries: 5,
baseDelayMs: 1000,
maxDelayMs: 60000,
};
/**
* Calculates exponential backoff delay with full jitter.
*
* Formula: min(maxDelay, baseDelay * 2^attempt) * random(0.5, 1.0)
*
* The jitter prevents thundering herd when multiple clients
* are rate-limited simultaneously.
*/
function calculateBackoffWithJitter(
attempt: number,
baseDelayMs: number,
maxDelayMs: number
): number {
const exponentialDelay = baseDelayMs * Math.pow(2, attempt);
const boundedDelay = Math.min(maxDelayMs, exponentialDelay);
// Full jitter: random value between 50% and 100% of the bounded delay
const jitter = 0.5 + Math.random() * 0.5;
return Math.floor(boundedDelay * jitter);
}
/**
* Executes an API request with automatic retry on 429 responses.
* Respects the Retry-After header and applies exponential backoff with jitter.
*/
async function requestWithRetry<T>(
client: AxiosInstance,
config: AxiosRequestConfig,
retryConfig: RetryConfig = DEFAULT_RETRY_CONFIG
): Promise<AxiosResponse<T>> {
let lastError: AxiosError | undefined;
for (let attempt = 0; attempt <= retryConfig.maxRetries; attempt++) {
try {
return await client.request<T>(config);
} catch (error: unknown) {
if (!axios.isAxiosError(error)) {
throw error;
}
lastError = error;
const status = error.response?.status;
// Only retry on 429 (rate limited) and 503 (circuit breaker)
if (status !== 429 && status !== 503) {
throw error;
}
if (attempt === retryConfig.maxRetries) {
break; // Exhausted retries
}
// Respect Retry-After header if present (value is in seconds)
const retryAfterHeader = error.response?.headers?.["retry-after"];
const retryAfterMs = retryAfterHeader
? parseInt(retryAfterHeader, 10) * 1000
: 0;
// Use the greater of Retry-After or calculated backoff
const backoffMs = calculateBackoffWithJitter(
attempt,
retryConfig.baseDelayMs,
retryConfig.maxDelayMs
);
const delayMs = Math.max(retryAfterMs, backoffMs);
console.warn(
`Rate limited (${status}). Retry ${attempt + 1}/${retryConfig.maxRetries} ` +
`in ${delayMs}ms (Retry-After: ${retryAfterHeader ?? "none"}, ` +
`backoff: ${backoffMs}ms)`
);
await new Promise((resolve) => setTimeout(resolve, delayMs));
}
}
throw lastError ?? new Error("Request failed after all retries");
}
```
Wrong
```typescript
// WRONG: Immediate retry without backoff or Retry-After respect
async function retryImmediately<T>(
client: AxiosInstance,
config: AxiosRequestConfig,
maxRetries: number = 3
): Promise<T> {
for (let i = 0; i < maxRetries; i++) {
try {
const response = await client.request<T>(config);
return response.data;
} catch (error: unknown) {
// Retries immediately — will hit 429 again and drain burst credits
// Does not read Retry-After header — ignores server guidance
console.log(`Retry ${i + 1}...`);
// No delay at all — thundering herd when multiple instances retry
}
}
throw new Error("Failed after retries");
}
```
Constraint: Respect the Retry-After Header
When a 429 response includes a `Retry-After` header, the integration MUST wait at least the specified number of seconds before retrying. The backoff delay should be the maximum of the calculated backoff and the `Retry-After` value.
Why this matters
The `Retry-After` header is the server's explicit instruction on when it will accept requests again. Ignoring it results in requests being rejected until the specified time has passed, wasting bandwidth and potentially extending the block period.
Detection
If you see retry logic that does not read or use the `Retry-After` header value → warn that the header should be checked. If the retry delay is always a fixed value regardless of the header → warn.
Correct
```typescript
function getRetryDelayMs(error: AxiosError, attempt: number): number {
const retryAfterHeader = error.response?.headers?.["retry-after"];
// Parse Retry-After (could be seconds or HTTP-date)
let retryAfterMs = 0;
if (retryAfterHeader) {
const seconds = parseInt(retryAfterHeader, 10);
if (!isNaN(seconds)) {
retryAfterMs = seconds * 1000;
} else {
// HTTP-date format
const retryDate = new Date(retryAfterHeader).getTime();
retryAfterMs = Math.max(0, retryDate - Date.now());
}
}
// Calculate backoff with jitter
const backoffMs = calculateBackoffWithJitter(attempt, 1000, 60000);
// Use the larger value — respect server guidance
return Math.max(retryAfterMs, backoffMs);
}
```
Wrong
```typescript
// WRONG: Fixed 1-second retry ignoring Retry-After header
async function fixedRetry<T>(
client: AxiosInstance,
config: AxiosRequestConfig
): Promise<T> {
try {
const response = await client.request<T>(config);
return response.data;
} catch {
// Always waits 1 second regardless of Retry-After header
// If Retry-After says 60 seconds, this will fail again and again
await new Promise((resolve) => setTimeout(resolve, 1000));
const response = await client.request<T>(config);
return response.data;
}
}
```
Constraint: No Tight Retry Loops
Integrations MUST NOT use `while(true)` loops for retrying, or `setInterval`/`setTimeout` with intervals less than 5 seconds for polling VTEX APIs.
Why this matters
Tight loops generate excessive requests that quickly exhaust rate limits and degrade VTEX platform performance for all users; VTEX explicitly warns that excessive 429 errors can make the Admin unavailable for the account.
Detection
If you see `while(true)` or `for(;;)` retry patterns without adequate delays → warn about tight loops. If you see `setInterval` with intervals less than 5000 ms for API calls → warn about polling frequency.
Correct
```typescript
// Correct: Controlled polling with adequate intervals
async function pollWithBackpressure(
client: AxiosInstance,
intervalMs: number = 30000 // 30 seconds minimum
): Promise<void> {
const poll = async (): Promise<void> => {
try {
const response = await client.get("/api/orders/feed");
const events = response.data;
if (events.length > 0) {
await processEvents(events);
await commitEvents(
client,
events.map((e: { handle: string }) => e.handle)
);
}
} catch (error: unknown) {
if (axios.isAxiosError(error) && error.response?.status === 429) {
const retryAfter = parseInt(
error.response.headers["retry-after"] || "60",
10
);
console.warn(`Rate limited, waiting ${retryAfter}s`);
await new Promise((resolve) => setTimeout(resolve, retryAfter * 1000));
        setTimeout(poll, 0); // Resume polling right away after honoring Retry-After
        return;
}
console.error("Polling error:", error);
}
// Schedule next poll
setTimeout(poll, intervalMs);
};
// Start polling
await poll();
}
async function processEvents(events: unknown[]): Promise<void> {
console.log(`Processing ${events.length} events`);
}
async function commitEvents(client: AxiosInstance, handles: string[]): Promise<void> {
await client.post("/api/orders/feed", { handles });
}
```
Wrong
```typescript
// WRONG: Tight loop with no backpressure
async function tightLoop(client: AxiosInstance): Promise<void> {
while (true) {
try {
const response = await client.get("/api/orders/feed");
await processEvents(response.data);
} catch {
// Immediate retry — no delay, burns through rate limits
continue;
}
}
}
// WRONG: setInterval with 1-second polling
setInterval(async () => {
// 1 request/second = 3600/hour — will trigger rate limits quickly
const client = createClient();
await client.get("/api/catalog_system/pvt/sku/stockkeepingunitids");
}, 1000);
function createClient(): AxiosInstance {
return axios.create({ baseURL: "https://account.vtexcommercestable.com.br" });
}
```
Preferred pattern
Create a Rate-Limit-Aware HTTP Client
Wrap your HTTP client with automatic retry logic.
```typescript
import axios, { AxiosInstance, AxiosRequestConfig, AxiosResponse } from "axios";
interface RateLimitedClientConfig {
accountName: string;
appKey: string;
appToken: string;
maxRetries?: number;
baseDelayMs?: number;
maxDelayMs?: number;
}
function createRateLimitedClient(config: RateLimitedClientConfig): {
client: AxiosInstance;
request: <T>(requestConfig: AxiosRequestConfig) => Promise<AxiosResponse<T>>;
} {
const client = axios.create({
baseURL: `https://${config.accountName}.vtexcommercestable.com.br`,
headers: {
"Content-Type": "application/json",
"X-VTEX-API-AppKey": config.appKey,
"X-VTEX-API-AppToken": config.appToken,
},
timeout: 30000,
});
const retryConfig: RetryConfig = {
maxRetries: config.maxRetries ?? 5,
baseDelayMs: config.baseDelayMs ?? 1000,
maxDelayMs: config.maxDelayMs ?? 60000,
};
return {
client,
request: <T>(requestConfig: AxiosRequestConfig) =>
requestWithRetry<T>(client, requestConfig, retryConfig),
};
}
```
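A short usage sketch for the factory above. The account name and SKU endpoint are placeholders, and the response shape is assumed for illustration.

```typescript
// Hypothetical usage — account name, credentials, and endpoint are placeholders.
async function example(): Promise<void> {
  const vtex = createRateLimitedClient({
    accountName: "mystore",
    appKey: process.env.VTEX_APP_KEY!,
    appToken: process.env.VTEX_APP_TOKEN!,
  });
  // Every call routed through `vtex.request` gets retry-with-backoff for free.
  const sku = await vtex.request<{ Id: number; Name: string }>({
    method: "GET",
    url: "/api/catalog/pvt/stockkeepingunit/1",
  });
  console.log(sku.data.Name);
}
```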
Implement a Circuit Breaker
Prevent cascading failures when a service is consistently failing.
```typescript
enum CircuitState {
CLOSED = "CLOSED", // Normal operation — requests flow through
OPEN = "OPEN", // Service failing — requests blocked
  HALF_OPEN = "HALF_OPEN", // Testing recovery — trial requests allowed
}
class CircuitBreaker {
private state: CircuitState = CircuitState.CLOSED;
private failureCount: number = 0;
private lastFailureTime: number = 0;
private successCount: number = 0;
constructor(
private readonly failureThreshold: number = 5,
private readonly recoveryTimeMs: number = 30000,
private readonly halfOpenSuccessThreshold: number = 3
) {}
async execute<T>(operation: () => Promise<T>): Promise<T> {
if (this.state === CircuitState.OPEN) {
if (Date.now() - this.lastFailureTime < this.recoveryTimeMs) {
throw new Error(
`Circuit breaker is OPEN. Retry after ${this.recoveryTimeMs}ms.`
);
}
// Transition to half-open for a test request
this.state = CircuitState.HALF_OPEN;
this.successCount = 0;
}
try {
const result = await operation();
this.onSuccess();
return result;
} catch (error) {
this.onFailure();
throw error;
}
}
private onSuccess(): void {
if (this.state === CircuitState.HALF_OPEN) {
this.successCount++;
if (this.successCount >= this.halfOpenSuccessThreshold) {
this.state = CircuitState.CLOSED;
this.failureCount = 0;
console.log("Circuit breaker: CLOSED (recovered)");
}
} else {
this.failureCount = 0;
}
}
private onFailure(): void {
this.failureCount++;
this.lastFailureTime = Date.now();
if (this.failureCount >= this.failureThreshold) {
this.state = CircuitState.OPEN;
console.warn(
`Circuit breaker: OPEN after ${this.failureCount} failures`
);
}
}
getState(): CircuitState {
return this.state;
}
}
```
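A minimal usage sketch for the class above, assuming the axios types imported in earlier blocks; the order-feed endpoint is illustrative.

```typescript
// Hypothetical usage of CircuitBreaker — the endpoint is illustrative.
const ordersBreaker = new CircuitBreaker(5, 30000, 3);

async function fetchOrderFeed(client: AxiosInstance): Promise<unknown> {
  try {
    // After 5 consecutive failures the breaker opens and this call
    // fails fast for 30 seconds instead of hammering the failing service.
    const response = await ordersBreaker.execute(() =>
      client.get("/api/orders/feed")
    );
    return response.data;
  } catch (error) {
    console.error(`Feed fetch failed (breaker state: ${ordersBreaker.getState()})`);
    throw error;
  }
}
```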
Implement a Request Queue
Queue requests to control throughput and avoid bursts.
```typescript
interface QueuedRequest<T> {
config: AxiosRequestConfig;
resolve: (value: AxiosResponse<T>) => void;
reject: (error: Error) => void;
}
class RequestQueue {
private queue: Array<QueuedRequest<unknown>> = [];
private processing: boolean = false;
private readonly requestsPerSecond: number;
private readonly circuitBreaker: CircuitBreaker;
constructor(
private readonly client: {
request: <T>(config: AxiosRequestConfig) => Promise<AxiosResponse<T>>;
},
requestsPerSecond: number = 10,
circuitBreaker?: CircuitBreaker
) {
this.requestsPerSecond = requestsPerSecond;
this.circuitBreaker = circuitBreaker ?? new CircuitBreaker();
}
async enqueue<T>(config: AxiosRequestConfig): Promise<AxiosResponse<T>> {
return new Promise<AxiosResponse<T>>((resolve, reject) => {
this.queue.push({
config,
resolve: resolve as (value: AxiosResponse<unknown>) => void,
reject,
});
this.processQueue();
});
}
private async processQueue(): Promise<void> {
if (this.processing || this.queue.length === 0) {
return;
}
this.processing = true;
const delayBetweenRequests = 1000 / this.requestsPerSecond;
while (this.queue.length > 0) {
const request = this.queue.shift()!;
try {
const result = await this.circuitBreaker.execute(() =>
this.client.request(request.config)
);
request.resolve(result);
} catch (error) {
request.reject(error instanceof Error ? error : new Error(String(error)));
}
// Throttle between requests
if (this.queue.length > 0) {
await new Promise((resolve) =>
setTimeout(resolve, delayBetweenRequests)
);
}
}
this.processing = false;
}
getQueueLength(): number {
return this.queue.length;
}
}
```
Monitor Rate Limit Headers Proactively
Read rate limit headers to slow down before hitting 429.
```typescript
import { AxiosInstance, AxiosRequestConfig, AxiosResponse } from "axios";
interface RateLimitInfo {
remaining: number | null;
resetAt: number | null;
retryAfter: number | null;
}
function parseRateLimitHeaders(response: AxiosResponse): RateLimitInfo {
return {
remaining: response.headers["x-ratelimit-remaining"]
? parseInt(response.headers["x-ratelimit-remaining"], 10)
: null,
resetAt: response.headers["x-ratelimit-reset"]
? parseInt(response.headers["x-ratelimit-reset"], 10) * 1000
: null,
retryAfter: response.headers["retry-after"]
? parseInt(response.headers["retry-after"], 10) * 1000
: null,
};
}
async function adaptiveRequest<T>(
client: AxiosInstance,
config: AxiosRequestConfig,
queue: RequestQueue
): Promise<AxiosResponse<T>> {
const response = await queue.enqueue<T>(config);
const rateInfo = parseRateLimitHeaders(response);
// Proactively slow down when remaining requests are low
if (rateInfo.remaining !== null && rateInfo.remaining < 10) {
console.warn(
`Rate limit approaching: ${rateInfo.remaining} requests remaining. ` +
`Slowing down.`
);
// Add extra delay to reduce pressure
await new Promise((resolve) => setTimeout(resolve, 2000));
}
return response;
}
```
Complete Example
```typescript
import axios from "axios";
async function buildResilientIntegration(): Promise<void> {
const { client, request } = createRateLimitedClient({
accountName: process.env.VTEX_ACCOUNT_NAME!,
appKey: process.env.VTEX_APP_KEY!,
appToken: process.env.VTEX_APP_TOKEN!,
maxRetries: 5,
baseDelayMs: 1000,
maxDelayMs: 60000,
});
const circuitBreaker = new CircuitBreaker(
5, // Open after 5 failures
30000, // Wait 30s before testing recovery
3 // Close after 3 successful half-open requests
);
const queue = new RequestQueue({ request }, 10, circuitBreaker);
// Example: Batch update prices with rate limiting
const skuIds = ["sku-1", "sku-2", "sku-3", "sku-4", "sku-5"];
for (const skuId of skuIds) {
try {
const response = await queue.enqueue({
method: "POST",
url: `/notificator/seller01/changenotification/${skuId}/price`,
});
const rateInfo = parseRateLimitHeaders(response);
if (rateInfo.remaining !== null && rateInfo.remaining < 5) {
console.warn("Approaching rate limit, adding delay");
await new Promise((resolve) => setTimeout(resolve, 5000));
}
} catch (error) {
if (error instanceof Error && error.message.includes("Circuit breaker is OPEN")) {
console.error("Circuit breaker open — pausing all requests");
await new Promise((resolve) => setTimeout(resolve, 30000));
} else {
console.error(`Failed to update price for ${skuId}:`, error);
}
}
}
}
```
Common failure modes
- Fixed retry delay without jitter. Using a fixed delay (e.g., always 5 seconds) instead of exponential backoff with jitter causes the "thundering herd" problem: all rate-limited clients retry simultaneously, creating another burst that triggers rate limiting again. Use exponential backoff with random jitter so retries are spread across time.
```typescript
// Correct: Exponential backoff with jitter
function getRetryDelay(attempt: number): number {
const baseDelay = 1000;
const maxDelay = 60000;
const exponential = baseDelay * Math.pow(2, attempt);
const bounded = Math.min(maxDelay, exponential);
const jitter = 0.5 + Math.random() * 0.5;
return Math.floor(bounded * jitter);
}
// attempt 0: ~500-1000ms
// attempt 1: ~1000-2000ms
// attempt 2: ~2000-4000ms
// attempt 3: ~4000-8000ms
// attempt 4: ~8000-16000ms
```
- No proactive rate management. Only handling 429 errors reactively (after being rate limited) instead of monitoring rate limit headers to slow down proactively. By the time you receive a 429, you've already lost burst credits. Monitor `X-RateLimit-Remaining` on successful responses and reduce the request rate when the remaining quota is low.
```typescript
// Correct: Proactive rate management
async function proactiveRateManagement(
client: AxiosInstance,
requests: AxiosRequestConfig[]
): Promise<void> {
let delayBetweenRequests = 100; // Start at 100ms between requests
for (const config of requests) {
const response = await requestWithRetry(client, config);
const rateInfo = parseRateLimitHeaders(response);
// Proactively adjust speed based on remaining quota
if (rateInfo.remaining !== null) {
if (rateInfo.remaining < 5) {
delayBetweenRequests = 5000; // Slow down significantly
} else if (rateInfo.remaining < 20) {
delayBetweenRequests = 1000; // Moderate slowdown
} else {
delayBetweenRequests = 100; // Normal speed
}
}
await new Promise((resolve) => setTimeout(resolve, delayBetweenRequests));
}
}
```
Review checklist
- Is exponential backoff with jitter implemented for 429 responses?
- Is the `Retry-After` header read and respected on 429 responses?
- Are there no tight retry loops (`while(true)`, `setInterval` < 5 seconds)?
- Is a circuit breaker in place for consistently failing services?
- Are `X-RateLimit-Remaining` headers monitored proactively to slow down before hitting limits?
- Are the correct numeric thresholds used (maxRetries: 5, baseDelayMs: 1000, maxDelayMs: 60000)?
- Are Pricing API limits respected (40 req/s PUT/POST, 16 req/s DELETE, burst credits)?
Reference
- Best Practices for Avoiding Rate Limit Errors — Official VTEX guide on rate limit management and best practices
- Handling Errors and Exceptions — VTEX guide on error handling including 429 and 5xx responses
- API Response Status Codes — Complete list of VTEX API response codes and their meanings
- Pricing API Overview - Rate Limits — Specific rate limit documentation for the Pricing API including burst credits
- Feed v3 - Best Practices — Rate limiting recommendations for order feed integrations
- How to Load Test a Store — VTEX documentation on rate limiting behavior, 429 responses, and circuit breakers