redis-expert

Redis Expert

Expert guidance for Redis - the in-memory data structure store used as cache, message broker, and database with microsecond latency.

Core Concepts

Data Structures

  • Strings (binary-safe, up to 512MB)
  • Lists (linked lists)
  • Sets (unordered unique strings)
  • Sorted Sets (sets ordered by score)
  • Hashes (field-value pairs)
  • Streams (append-only log)
  • Bitmaps and HyperLogLog
  • Geospatial indexes

Key Features

  • In-memory storage with persistence
  • Pub/Sub messaging
  • Transactions
  • Lua scripting
  • Pipelining
  • Master-Replica replication
  • Redis Sentinel (high availability)
  • Redis Cluster (horizontal scaling)

Use Cases

  • Caching layer
  • Session storage
  • Real-time analytics
  • Message queues
  • Rate limiting
  • Leaderboards
  • Geospatial queries

Installation and Configuration

Docker Setup

Development

docker run --name redis -p 6379:6379 -d redis:7-alpine

Production with persistence

docker run --name redis \
  -p 6379:6379 \
  -v redis-data:/data \
  -d redis:7-alpine \
  redis-server --appendonly yes --requirepass strongpassword

Redis with config file

docker run --name redis \
  -p 6379:6379 \
  -v ./redis.conf:/usr/local/etc/redis/redis.conf \
  -d redis:7-alpine \
  redis-server /usr/local/etc/redis/redis.conf

Configuration (redis.conf)

Network

bind 0.0.0.0
port 6379
protected-mode yes

Security

requirepass strongpassword

Memory

maxmemory 2gb
maxmemory-policy allkeys-lru

Persistence

save 900 1       # Save after 900s if 1 key changed
save 300 10      # Save after 300s if 10 keys changed
save 60 10000    # Save after 60s if 10000 keys changed

appendonly yes
appendfilename "appendonly.aof"
appendfsync everysec

Replication

replica-read-only yes
repl-diskless-sync yes

Performance

tcp-backlog 511
timeout 0
tcp-keepalive 300

Node.js Client (ioredis)

Basic Operations

typescript
import Redis from 'ioredis';

const redis = new Redis({
  host: 'localhost',
  port: 6379,
  password: 'strongpassword',
  db: 0,
  retryStrategy: (times) => {
    const delay = Math.min(times * 50, 2000);
    return delay;
  },
});

// Strings
await redis.set('user:1000:name', 'Alice');
await redis.set('counter', 42);
await redis.get('user:1000:name'); // 'Alice'

// Expiration (TTL)
await redis.setex('session:abc123', 3600, JSON.stringify({ userId: 1000 }));
await redis.expire('user:1000:name', 300); // 5 minutes
await redis.ttl('user:1000:name'); // Returns remaining seconds

// Atomic operations
await redis.incr('page:views'); // 1
await redis.incr('page:views'); // 2
await redis.incrby('score', 10); // Increment by 10
await redis.decr('inventory:item123');

// Hashes (objects)
await redis.hset('user:1000', {
  name: 'Alice',
  email: 'alice@example.com',
  age: 30,
});

await redis.hget('user:1000', 'name'); // 'Alice'
await redis.hgetall('user:1000'); // { name: 'Alice', email: '...', age: '30' }
await redis.hincrby('user:1000', 'loginCount', 1);

// Lists (queues, stacks)
await redis.lpush('queue:jobs', 'job1', 'job2', 'job3'); // Push to left
await redis.rpush('queue:jobs', 'job4'); // Push to right
await redis.lpop('queue:jobs'); // Pop from left (FIFO)
await redis.rpop('queue:jobs'); // Pop from right (LIFO)
await redis.lrange('queue:jobs', 0, -1); // Get all items

// Sets (unique values)
await redis.sadd('tags:post:1', 'javascript', 'nodejs', 'redis');
await redis.smembers('tags:post:1'); // ['javascript', 'nodejs', 'redis']
await redis.sismember('tags:post:1', 'nodejs'); // 1 (true)
await redis.scard('tags:post:1'); // 3 (count)

// Set operations
await redis.sadd('tags:post:2', 'nodejs', 'typescript', 'docker');
await redis.sinter('tags:post:1', 'tags:post:2'); // ['nodejs'] (intersection)
await redis.sunion('tags:post:1', 'tags:post:2'); // All unique tags
await redis.sdiff('tags:post:1', 'tags:post:2'); // ['javascript', 'redis']

// Sorted Sets (leaderboards)
await redis.zadd('leaderboard', 1000, 'player1', 1500, 'player2', 800, 'player3');
await redis.zrange('leaderboard', 0, -1, 'WITHSCORES'); // Ascending
await redis.zrevrange('leaderboard', 0, 9); // Top 10 (descending)
await redis.zincrby('leaderboard', 50, 'player1'); // Add to score
await redis.zrank('leaderboard', 'player1'); // Get rank (0-indexed)
await redis.zscore('leaderboard', 'player1'); // Get score

Advanced Patterns

Caching with JSON

typescript
// Cache helper
class CacheService {
  constructor(private redis: Redis) {}

  async get<T>(key: string): Promise<T | null> {
    const data = await this.redis.get(key);
    return data ? JSON.parse(data) : null;
  }

  async set(key: string, value: any, ttl: number = 3600): Promise<void> {
    await this.redis.setex(key, ttl, JSON.stringify(value));
  }

  async delete(key: string): Promise<void> {
    await this.redis.del(key);
  }

  async getOrSet<T>(
    key: string,
    factory: () => Promise<T>,
    ttl: number = 3600
  ): Promise<T> {
    const cached = await this.get<T>(key);
    if (cached !== null) return cached;

    const fresh = await factory();
    await this.set(key, fresh, ttl);
    return fresh;
  }
}

// Usage
const cache = new CacheService(redis);

const user = await cache.getOrSet(
  'user:1000',
  async () => await db.user.findById(1000),
  3600
);

Rate Limiting

typescript
class RateLimiter {
  constructor(private redis: Redis) {}

  async checkRateLimit(
    key: string,
    limit: number,
    window: number
  ): Promise<{ allowed: boolean; remaining: number }> {
    const current = await this.redis.incr(key);

    if (current === 1) {
      await this.redis.expire(key, window);
    }

    return {
      allowed: current <= limit,
      remaining: Math.max(0, limit - current),
    };
  }
}

// Usage: 100 requests per hour per IP
const limiter = new RateLimiter(redis);
const result = await limiter.checkRateLimit(`ratelimit:${ip}`, 100, 3600);

if (!result.allowed) {
  return res.status(429).json({ error: 'Too many requests' });
}

Sliding Window Rate Limiting

typescript
async function slidingWindowRateLimit(
  redis: Redis,
  key: string,
  limit: number,
  window: number
): Promise<boolean> {
  const now = Date.now();
  const windowStart = now - window * 1000;

  // Remove old entries
  await redis.zremrangebyscore(key, 0, windowStart);

  // Count requests in window
  const count = await redis.zcard(key);

  if (count < limit) {
    // Add current request
    await redis.zadd(key, now, `${now}-${Math.random()}`);
    await redis.expire(key, window);
    return true;
  }

  return false;
}
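
The window arithmetic above is the part most easily gotten wrong, and it can be unit-tested without a Redis server. A pure sketch of the same decision (`timestamps` stands in for the sorted-set scores; the function name is illustrative):

```typescript
// Pure version of the sliding-window check: given the timestamps (ms) of
// prior requests, decide whether a new request at `now` is allowed.
function slidingWindowAllows(
  timestamps: number[],
  now: number,
  limit: number,
  windowMs: number
): boolean {
  const windowStart = now - windowMs;
  // Mirrors ZREMRANGEBYSCORE (drop scores <= windowStart) followed by ZCARD.
  const inWindow = timestamps.filter((t) => t > windowStart).length;
  return inWindow < limit;
}
```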

Distributed Locking

typescript
import { randomUUID } from 'node:crypto';

class RedisLock {
  constructor(private redis: Redis) {}

  async acquire(
    resource: string,
    ttl: number = 10000,
    retryDelay: number = 50,
    retryCount: number = 100
  ): Promise<string | null> {
    const lockKey = `lock:${resource}`;
    const lockValue = randomUUID();

    for (let i = 0; i < retryCount; i++) {
      const acquired = await this.redis.set(
        lockKey,
        lockValue,
        'PX',
        ttl,
        'NX'
      );

      if (acquired === 'OK') {
        return lockValue;
      }

      await new Promise((resolve) => setTimeout(resolve, retryDelay));
    }

    return null;
  }

  async release(resource: string, lockValue: string): Promise<boolean> {
    const lockKey = `lock:${resource}`;

    // Use Lua script to ensure atomicity
    const script = `
      if redis.call("get", KEYS[1]) == ARGV[1] then
        return redis.call("del", KEYS[1])
      else
        return 0
      end
    `;

    const result = await this.redis.eval(script, 1, lockKey, lockValue);
    return result === 1;
  }

  async withLock<T>(
    resource: string,
    fn: () => Promise<T>,
    ttl: number = 10000
  ): Promise<T> {
    const lockValue = await this.acquire(resource, ttl);
    if (!lockValue) {
      throw new Error('Failed to acquire lock');
    }

    try {
      return await fn();
    } finally {
      await this.release(resource, lockValue);
    }
  }
}

// Usage
const lock = new RedisLock(redis);

await lock.withLock('resource:123', async () => {
  // Critical section - only one process can execute this
  const data = await fetchData();
  await processData(data);
});

Pub/Sub

typescript
// Publisher
const publisher = new Redis();

await publisher.publish('notifications', JSON.stringify({
  type: 'new_message',
  userId: 1000,
  message: 'Hello!',
}));

// Subscriber
const subscriber = new Redis();

subscriber.subscribe('notifications', (err, count) => {
  console.log(`Subscribed to ${count} channels`);
});

subscriber.on('message', (channel, message) => {
  const data = JSON.parse(message);
  console.log(`Received from ${channel}:`, data);
});

// Pattern subscription
subscriber.psubscribe('user:*:notifications', (err, count) => {
  console.log(`Subscribed to ${count} patterns`);
});

subscriber.on('pmessage', (pattern, channel, message) => {
  console.log(`Pattern ${pattern} matched ${channel}:`, message);
});

// Unsubscribe
await subscriber.unsubscribe('notifications');
await subscriber.punsubscribe('user:*:notifications');

Redis Streams

typescript
// Add to stream
await redis.xadd(
  'events',
  '*', // Auto-generate ID
  'type', 'user_registered',
  'userId', '1000',
  'email', 'alice@example.com'
);

// Read from stream
const messages = await redis.xread('COUNT', 10, 'STREAMS', 'events', '0');
/*
[
  ['events', [
    ['1609459200000-0', ['type', 'user_registered', 'userId', '1000']],
    ['1609459201000-0', ['type', 'order_placed', 'orderId', '500']]
  ]]
]
*/

// Consumer Groups
await redis.xgroup('CREATE', 'events', 'worker-group', '0', 'MKSTREAM');

// Read as consumer
const groupMessages = await redis.xreadgroup(
  'GROUP', 'worker-group', 'consumer-1',
  'COUNT', 10,
  'STREAMS', 'events', '>'
);

// Acknowledge message
await redis.xack('events', 'worker-group', '1609459200000-0');

// Pending messages
const pending = await redis.xpending('events', 'worker-group');
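
Stream entry IDs such as `1609459200000-0` order by millisecond timestamp first, then by sequence number. When tracking the highest ID a consumer has seen, comparing them as plain strings breaks once sequence numbers reach two digits; a small comparator (a sketch, assuming the `ms-seq` form shown above) avoids that:

```typescript
// Compare two Redis stream IDs of the form "<ms>-<seq>".
// Negative if a < b, zero if equal, positive if a > b.
function compareStreamIds(a: string, b: string): number {
  const [aMs, aSeq] = a.split('-').map(Number);
  const [bMs, bSeq] = b.split('-').map(Number);
  return aMs !== bMs ? aMs - bMs : aSeq - bSeq;
}
```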

Transactions

typescript
// Multi/Exec (transaction)
const pipeline = redis.multi();
pipeline.set('key1', 'value1');
pipeline.set('key2', 'value2');
pipeline.incr('counter');
const results = await pipeline.exec();

// Watch (optimistic locking)
await redis.watch('balance:1000');
const balance = parseInt((await redis.get('balance:1000')) || '0', 10);
const amount = 100; // amount to transfer (illustrative)

if (balance >= amount) {
  const multi = redis.multi();
  multi.decrby('balance:1000', amount);
  multi.incrby('balance:2000', amount);
  await multi.exec(); // Executes only if balance:1000 wasn't modified
} else {
  await redis.unwatch();
}

Pipelining

typescript
// Pipeline multiple commands
const pipeline = redis.pipeline();
pipeline.set('key1', 'value1');
pipeline.set('key2', 'value2');
pipeline.get('key1');
pipeline.get('key2');
const results = await pipeline.exec();
// [[null, 'OK'], [null, 'OK'], [null, 'value1'], [null, 'value2']]

// Batch operations
async function batchSet(items: Record<string, string>) {
  const pipeline = redis.pipeline();
  for (const [key, value] of Object.entries(items)) {
    pipeline.set(key, value);
  }
  await pipeline.exec();
}

Lua Scripts

typescript
// Atomic increment with max
const script = `
  local current = redis.call('GET', KEYS[1])
  local max = tonumber(ARGV[1])

  if current and tonumber(current) >= max then
    return tonumber(current)
  else
    return redis.call('INCR', KEYS[1])
  end
`;

const result = await redis.eval(script, 1, 'counter', 100);

// Load the script once, then execute it by SHA (avoids resending the body)
const sha = await redis.script('LOAD', script);
const resultFromSha = await redis.evalsha(sha, 1, 'counter', 100);

Redis Cluster

Setup

Create 6 nodes (3 masters, 3 replicas)

for port in {7000..7005}; do
  mkdir -p cluster/${port}
  cat > cluster/${port}/redis.conf <<EOF
port ${port}
cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 5000
appendonly yes
EOF
  redis-server cluster/${port}/redis.conf &
done

Create cluster

redis-cli --cluster create \
  127.0.0.1:7000 127.0.0.1:7001 127.0.0.1:7002 \
  127.0.0.1:7003 127.0.0.1:7004 127.0.0.1:7005 \
  --cluster-replicas 1

Cluster Client

typescript
import Redis from 'ioredis';

const cluster = new Redis.Cluster([
  { host: '127.0.0.1', port: 7000 },
  { host: '127.0.0.1', port: 7001 },
  { host: '127.0.0.1', port: 7002 },
]);

// Operations work transparently
await cluster.set('key', 'value');
await cluster.get('key');

Best Practices

Memory Management

  • Set maxmemory limit
  • Choose an appropriate eviction policy:
    • allkeys-lru: Remove least recently used keys
    • allkeys-lfu: Remove least frequently used keys
    • volatile-lru: Remove least recently used keys that have an expire set
    • volatile-ttl: Remove keys with the shortest TTL
  • Monitor memory usage with INFO memory
  • Use memory-efficient data structures
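
`INFO memory` (like every `INFO` section) replies with `field:value` lines separated by CRLF, plus `#` section headers. A tiny parser (a sketch; field names such as `used_memory` are as Redis reports them) turns the reply into something an automated check can use:

```typescript
// Parse an INFO reply ("field:value" per line) into a record,
// skipping blank lines and "# Section" headers.
function parseInfo(reply: string): Record<string, string> {
  const fields: Record<string, string> = {};
  for (const line of reply.split(/\r?\n/)) {
    if (!line || line.startsWith('#')) continue;
    const sep = line.indexOf(':');
    if (sep === -1) continue;
    fields[line.slice(0, sep)] = line.slice(sep + 1);
  }
  return fields;
}
```

For example, `parseInfo(await redis.info('memory')).used_memory` could feed a memory alert.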

Key Naming

typescript
// Good: hierarchical, descriptive
'user:1000:profile'
'session:abc123'
'cache:api:users:page:1'
'ratelimit:ip:192.168.1.1:2024-01-19'

// Use consistent separators
const key = ['user', userId, 'profile'].join(':');

Expiration

  • Always set TTL for cache keys
  • Use appropriate TTL based on data freshness
  • Audit keys without expiration (SCAN keys and check TTL; -1 means no expiry)
  • Find oversized keys with:
    redis-cli --bigkeys
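
A related refinement (an addition beyond the list above): when many keys are cached at the same moment with the same TTL, they also expire at the same moment and can stampede the backing store. Adding a little random jitter to each TTL spreads the expirations out:

```typescript
// Return a TTL in [baseSeconds, baseSeconds + spreadSeconds) so keys written
// together expire at slightly different times.
function ttlWithJitter(baseSeconds: number, spreadSeconds: number = 60): number {
  return baseSeconds + Math.floor(Math.random() * spreadSeconds);
}

// Usage: redis.setex(key, ttlWithJitter(3600), payload);
```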

Persistence

  • Use AOF for durability (appendonly yes)
  • Use RDB for backups (save snapshots)
  • Test restore procedures

Monitoring

Monitor commands in real-time

redis-cli MONITOR

Stats

redis-cli INFO

Slow queries

redis-cli SLOWLOG GET 10

Memory analysis

redis-cli --bigkeys

Latency

redis-cli --latency

Performance Optimization

Connection Pooling

typescript
const redis = new Redis({
  host: 'localhost',
  port: 6379,
  maxRetriesPerRequest: 3,
  enableReadyCheck: true,
  lazyConnect: true,
});

Avoid KEYS Command

typescript
// ❌ Bad: Blocks entire server
const keys = await redis.keys('user:*');

// ✅ Good: Use SCAN for large datasets
async function* scanKeys(pattern: string) {
  let cursor = '0';
  do {
    const [newCursor, keys] = await redis.scan(
      cursor,
      'MATCH',
      pattern,
      'COUNT',
      100
    );
    cursor = newCursor;
    yield* keys;
  } while (cursor !== '0');
}

for await (const key of scanKeys('user:*')) {
  console.log(key);
}

Optimize Data Structures

typescript
// Use hashes for objects instead of multiple keys
// ❌ Bad: 3 keys
await redis.set('user:1000:name', 'Alice');
await redis.set('user:1000:email', 'alice@example.com');
await redis.set('user:1000:age', '30');

// ✅ Good: 1 key
await redis.hset('user:1000', {
  name: 'Alice',
  email: 'alice@example.com',
  age: '30',
});

Anti-Patterns to Avoid

❌ Using Redis as the primary database: Use it for caching/sessions
❌ Not setting TTL on cache keys: Causes memory bloat
❌ Using KEYS in production: Use SCAN instead
❌ Large values in keys: Keep values small (<1MB)
❌ No monitoring: Track memory, latency, hit rate
❌ Synchronous blocking operations: Use async operations
❌ Not handling connection failures: Implement retry logic
❌ Storing large collections in a single key: Split into multiple keys
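
For the last anti-pattern, one common remedy (a sketch; the hash and key layout are illustrative, not a Redis API) is to route each member to one of N fixed sub-keys, so one huge set or hash becomes several smaller ones:

```typescript
// Deterministically map a member to one of `shards` sub-keys,
// e.g. "followers:1000" -> "followers:1000:3".
function shardKey(base: string, member: string, shards: number): string {
  let hash = 0;
  for (let i = 0; i < member.length; i++) {
    hash = (hash * 31 + member.charCodeAt(i)) >>> 0; // simple 32-bit rolling hash
  }
  return `${base}:${hash % shards}`;
}

// Usage: redis.sadd(shardKey('followers:1000', userId, 8), userId);
// Reads must then fan out across all 8 sub-keys (e.g. SUNION over the shards).
```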

Common Use Cases

Session Store (Express)

typescript
import session from 'express-session';
import RedisStore from 'connect-redis';

app.use(
  session({
    store: new RedisStore({ client: redis }),
    secret: 'secret',
    resave: false,
    saveUninitialized: false,
    cookie: {
      secure: true,
      httpOnly: true,
      maxAge: 1000 * 60 * 60 * 24, // 24 hours
    },
  })
);

Job Queue (BullMQ)

typescript
import { Queue, Worker } from 'bullmq';

const queue = new Queue('emails', { connection: redis });

// Add job
await queue.add('send-email', {
  to: 'user@example.com',
  subject: 'Welcome',
  body: 'Hello!',
});

// Process jobs
const worker = new Worker('emails', async (job) => {
  await sendEmail(job.data);
}, { connection: redis });

Resources
