caching-strategist
Caching Strategist
Design effective caching strategies for performance and consistency.
Cache Layers
CDN: Static assets, public pages (TTL: days/weeks)
Application Cache (Redis): API responses, sessions (TTL: minutes/hours)
Database Cache: Query results (TTL: seconds/minutes)
Client Cache: Browser/app local cache
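The layer split above can be sketched as a small lookup. The TTL values are illustrative only, and the `layerFor` helper is a hypothetical addition, not part of the original text:

```typescript
// Rough TTLs per layer, in seconds (illustrative values only)
const LAYER_TTL = {
  cdn: 7 * 24 * 3600, // days/weeks: static assets, public pages
  application: 3600,  // minutes/hours: API responses, sessions
  database: 60,       // seconds/minutes: query results
} as const;

// Hypothetical helper: pick the layer whose TTL range fits how often
// the data is expected to change
export const layerFor = (
  changeIntervalSeconds: number
): keyof typeof LAYER_TTL => {
  if (changeIntervalSeconds < 300) return "database";
  if (changeIntervalSeconds < 86400) return "application";
  return "cdn";
};
```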
Cache Key Strategy
```typescript
// Hierarchical key structure
const CACHE_KEYS = {
  user: (id: string) => `user:${id}`,
  userPosts: (userId: string, page: number) => `user:${userId}:posts:${page}`,
  post: (id: string) => `post:${id}`,
  postComments: (postId: string) => `post:${postId}:comments`,
};

// Include a version in keys for easy bulk invalidation
const CACHE_VERSION = "v1";
const key = `${CACHE_VERSION}:${CACHE_KEYS.user(userId)}`;
```

TTL Strategy
```typescript
const TTL = {
  // Frequently changing
  REALTIME: 10, // 10 seconds
  SHORT: 60, // 1 minute

  // Moderate updates
  MEDIUM: 300, // 5 minutes
  STANDARD: 3600, // 1 hour

  // Rarely changing
  LONG: 86400, // 1 day
  VERY_LONG: 604800, // 1 week
};

// Usage
await redis.setex(key, TTL.MEDIUM, JSON.stringify(data));
```

Cache-Aside Pattern
```typescript
export const getCachedUser = async (userId: string): Promise<User> => {
  const key = CACHE_KEYS.user(userId);

  // Try the cache first
  const cached = await redis.get(key);
  if (cached) {
    return JSON.parse(cached);
  }

  // Cache miss - fetch from the DB
  const user = await db.users.findById(userId);

  // Store in the cache
  await redis.setex(key, TTL.STANDARD, JSON.stringify(user));
  return user;
};
```

Cache Invalidation
```typescript
// Invalidate on update
export const updateUser = async (userId: string, data: UpdateUserDto) => {
  const user = await db.users.update(userId, data);

  // Invalidate the user cache
  await redis.del(CACHE_KEYS.user(userId));

  // Invalidate related caches. DEL does not accept glob patterns,
  // so look up the matching page keys first (prefer SCAN over KEYS
  // in production, since KEYS blocks the server).
  const postKeys = await redis.keys(`user:${userId}:posts:*`);
  if (postKeys.length) {
    await redis.del(...postKeys);
  }
  return user;
};

// Tag-based invalidation
const addCacheTags = async (key: string, tags: string[]) => {
  await Promise.all(tags.map((tag) => redis.sadd(`cache_tag:${tag}`, key)));
};

const invalidateByTag = async (tag: string) => {
  const keys = await redis.smembers(`cache_tag:${tag}`);
  if (keys.length) {
    await redis.del(...keys);
  }
  await redis.del(`cache_tag:${tag}`);
};

// Usage: tag post caches by author, then drop them all at once
// await addCacheTags(CACHE_KEYS.post(post.id), [`author:${post.authorId}`]);
// await invalidateByTag(`author:${post.authorId}`);
```

Cache Warming
```typescript
// Pre-populate the cache for common queries
export const warmCache = async () => {
  const popularPosts = await db.posts.findPopular(100);
  for (const post of popularPosts) {
    const key = CACHE_KEYS.post(post.id);
    await redis.setex(key, TTL.LONG, JSON.stringify(post));
  }
};

// Schedule warming every 6 hours
cron.schedule("0 */6 * * *", warmCache);
```

Cache Stampede Prevention
```typescript
// Use a lock to prevent multiple simultaneous fetches of the same key
export const getCachedWithLock = async <T>(
  key: string,
  fetchFn: () => Promise<T>
): Promise<T> => {
  const cached = await redis.get(key);
  if (cached) return JSON.parse(cached);

  const lockKey = `lock:${key}`;
  // NX: set only if the lock does not already exist; EX 10: auto-expire
  // after 10 seconds so a crashed holder cannot wedge the key forever
  const acquired = await redis.set(lockKey, "1", "EX", 10, "NX");
  if (acquired) {
    try {
      // Fetch and cache
      const data = await fetchFn();
      await redis.setex(key, TTL.STANDARD, JSON.stringify(data));
      return data;
    } finally {
      await redis.del(lockKey);
    }
  }

  // Another request holds the lock - wait briefly, then retry
  await new Promise((resolve) => setTimeout(resolve, 100));
  return getCachedWithLock(key, fetchFn);
};
```

Cache Correctness Checklist
```markdown
- [ ] Cache keys are unique and predictable
- [ ] TTL is appropriate for data freshness
- [ ] Invalidation happens on all updates
- [ ] Related caches are invalidated together
- [ ] Cache stampede prevention is in place
- [ ] Fallback to the DB if the cache fails
- [ ] Cache hit rate is monitored
- [ ] Cache size doesn't grow unbounded
- [ ] Sensitive data is not cached, or is encrypted
- [ ] Cache warming for critical paths
```
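The hit-rate item in the checklist can be covered with a pair of counters. A minimal in-process sketch (the `CacheMetrics` class is hypothetical; production setups usually export such counters to Prometheus or similar):

```typescript
// Minimal in-process hit-rate tracker (hypothetical helper)
export class CacheMetrics {
  private hits = 0;
  private misses = 0;

  recordHit(): void {
    this.hits++;
  }

  recordMiss(): void {
    this.misses++;
  }

  // Fraction of lookups served from cache; 0 when nothing recorded yet
  hitRate(): number {
    const total = this.hits + this.misses;
    return total === 0 ? 0 : this.hits / total;
  }
}
```

Call `recordHit()`/`recordMiss()` inside the cache-aside getter and alert when the rate drops below an agreed threshold.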
Best Practices
最佳实践
- Cache immutable data aggressively
- Use short TTLs for frequently changing data
- Invalidate on write, not on read
- Monitor hit rates and adjust TTLs accordingly
- Use tags for bulk invalidation
- Prevent cache stampedes with locks
- Degrade gracefully if the cache is down
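The last point, graceful degradation, can be sketched as a wrapper that never lets a cache outage fail the request. The `getWithFallback` helper and its injected `redisGet`/`redisSet` parameters are hypothetical, written storage-agnostic so the sketch is self-contained:

```typescript
// Hypothetical wrapper: serve from cache when possible, but never let a
// cache outage take down the request path
export const getWithFallback = async <T>(
  key: string,
  fetchFn: () => Promise<T>,
  ttlSeconds: number,
  redisGet: (k: string) => Promise<string | null>,
  redisSet: (k: string, ttl: number, v: string) => Promise<void>
): Promise<T> => {
  let cached: string | null = null;
  try {
    cached = await redisGet(key);
  } catch {
    // Cache read failed - degrade gracefully and fall through to the source
  }
  if (cached !== null) return JSON.parse(cached);

  const data = await fetchFn();
  try {
    await redisSet(key, ttlSeconds, JSON.stringify(data));
  } catch {
    // Cache write failed - the response is still correct, just uncached
  }
  return data;
};
```

Injecting the cache accessors keeps the fallback logic testable and independent of any particular client library.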
Output Checklist
- Cache key naming strategy
- TTL values per data type
- Invalidation triggers documented
- Cache-aside implementation
- Stampede prevention
- Cache warming strategy
- Monitoring/metrics setup
- Correctness checklist completed