aws-aurora


AWS Aurora Skill

Load with: base.md + [typescript.md | python.md]
Amazon Aurora is a MySQL/PostgreSQL-compatible relational database with serverless scaling, high availability, and enterprise features.

Core Principle

Use RDS Proxy for serverless, Data API for simplicity, connection pooling always.
Aurora excels at ACID-compliant workloads. For serverless architectures (Lambda), always use RDS Proxy or Data API to handle connection management. Never open raw connections from Lambda functions.

Aurora Options

Option                  Best For
Aurora Serverless v2    Variable workloads, auto-scaling (0.5-128 ACUs)
Aurora Provisioned      Predictable workloads, maximum performance
Aurora Global           Multi-region, disaster recovery
Data API                Serverless without VPC, simple HTTP access
RDS Proxy               Connection pooling for Lambda, high concurrency

Connection Strategies

Strategy 1: RDS Proxy (Recommended for Lambda)

Lambda → RDS Proxy → Aurora
         (pool)
  • Connection pooling and reuse
  • Automatic failover handling
  • IAM authentication support
  • Works with existing SQL clients

Strategy 2: Data API (Simplest for Serverless)

Lambda → Data API (HTTP) → Aurora
  • No VPC required
  • No connection management
  • Higher latency per query
  • Limited to Aurora Serverless

Strategy 3: Direct Connection (Not for Lambda)

App Server → Aurora
(persistent connection)
  • Only for long-running servers (ECS, EC2)
  • Manage connection pool yourself
  • Not suitable for serverless

RDS Proxy Setup

Create Proxy (AWS Console/CDK)

typescript
// CDK example
import * as cdk from 'aws-cdk-lib';
import * as rds from 'aws-cdk-lib/aws-rds';

const proxy = new rds.DatabaseProxy(this, 'Proxy', {
  proxyTarget: rds.ProxyTarget.fromCluster(cluster),
  secrets: [cluster.secret!],
  vpc,
  securityGroups: [proxySecurityGroup],
  requireTLS: true,
  idleClientTimeout: cdk.Duration.minutes(30),
  maxConnectionsPercent: 90,
  maxIdleConnectionsPercent: 10,
  borrowTimeout: cdk.Duration.seconds(30)
});

Connect via Proxy (TypeScript/Node.js)

typescript
// lib/db.ts
import { Pool } from 'pg';
import { Signer } from '@aws-sdk/rds-signer';

const signer = new Signer({
  hostname: process.env.RDS_PROXY_ENDPOINT!,
  port: 5432,
  username: process.env.DB_USER!,
  region: process.env.AWS_REGION!
});

// IAM authentication
async function getPool(): Promise<Pool> {
  const token = await signer.getAuthToken();

  return new Pool({
    host: process.env.RDS_PROXY_ENDPOINT,
    port: 5432,
    database: process.env.DB_NAME,
    user: process.env.DB_USER,
    password: token,
    ssl: { rejectUnauthorized: true },
    max: 1,  // Single connection for Lambda
    idleTimeoutMillis: 120000,
    connectionTimeoutMillis: 10000
  });
}

// Usage in Lambda
let pool: Pool | null = null;

export async function handler(event: any) {
  if (!pool) {
    pool = await getPool();
  }

  const result = await pool.query('SELECT * FROM users WHERE id = $1', [event.userId]);
  return result.rows[0];
}

Proxy Configuration Best Practices

yaml
# Key settings for Lambda workloads
MaxConnectionsPercent: 90        # Use most of DB connections
MaxIdleConnectionsPercent: 10    # Keep some idle for bursts
ConnectionBorrowTimeout: 30s     # Wait for available connection
IdleClientTimeout: 30min         # Close idle proxy connections
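These two percentages are taken against the database's own max_connections limit. The arithmetic can be sketched as follows; `proxyConnectionBudget` and the 1000-connection figure are illustrative, not an AWS API:

```typescript
// Sketch: translate the percent-based proxy settings into connection counts.
// 1000 stands in for the instance's max_connections limit.
interface ProxySettings {
  maxConnectionsPercent: number;      // e.g. 90
  maxIdleConnectionsPercent: number;  // e.g. 10
}

function proxyConnectionBudget(dbMaxConnections: number, s: ProxySettings) {
  return {
    // Upper bound on connections the proxy opens to the database
    maxToDatabase: Math.floor((dbMaxConnections * s.maxConnectionsPercent) / 100),
    // Upper bound on idle connections kept warm for bursts
    maxIdle: Math.floor((dbMaxConnections * s.maxIdleConnectionsPercent) / 100)
  };
}

const budget = proxyConnectionBudget(1000, {
  maxConnectionsPercent: 90,
  maxIdleConnectionsPercent: 10
});
// With the settings above, the proxy may open up to 900 connections
// and keep up to 100 of them idle.
```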

Monitor these CloudWatch metrics:

- DatabaseConnectionsCurrentlyBorrowed
- DatabaseConnectionsCurrentlySessionPinned
- QueryDatabaseResponseLatency
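The pinning metric deserves a note: RDS Proxy "pins" a client to a dedicated database connection when it sees session state it cannot safely multiplex (session-level SET commands, for example), which erodes the benefit of pooling. A minimal sketch of watching the ratio; the 20% threshold and function names are illustrative, not an AWS recommendation:

```typescript
// Sketch: flag when too many borrowed connections are session-pinned.
// Threshold and names are illustrative.
function pinnedRatio(borrowed: number, pinned: number): number {
  return borrowed === 0 ? 0 : pinned / borrowed;
}

function pinningTooHigh(borrowed: number, pinned: number, threshold = 0.2): boolean {
  return pinnedRatio(borrowed, pinned) > threshold;
}
```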


---

Data API (HTTP-based)

Enable Data API

bash
# Must be Aurora Serverless
aws rds modify-db-cluster \
  --db-cluster-identifier my-cluster \
  --enable-http-endpoint

TypeScript with Data API Client v2

bash
npm install data-api-client
typescript
// lib/db.ts
import DataAPIClient from 'data-api-client';

const db = DataAPIClient({
  secretArn: process.env.DB_SECRET_ARN!,
  resourceArn: process.env.DB_CLUSTER_ARN!,
  database: process.env.DB_NAME!,
  region: process.env.AWS_REGION!
});

// Simple query
const users = await db.query('SELECT * FROM users WHERE active = :active', {
  active: true
});

// Insert with returning
const result = await db.query(
  'INSERT INTO users (email, name) VALUES (:email, :name) RETURNING *',
  { email: 'user@test.com', name: 'Test User' }
);

// Transaction
const transaction = await db.transaction();
try {
  await transaction.query('UPDATE accounts SET balance = balance - :amount WHERE id = :from', {
    amount: 100, from: 1
  });
  await transaction.query('UPDATE accounts SET balance = balance + :amount WHERE id = :to', {
    amount: 100, to: 2
  });
  await transaction.commit();
} catch (error) {
  await transaction.rollback();
  throw error;
}
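data-api-client maps plain objects like `{ active: true }` onto the verbose parameter shape the underlying ExecuteStatement API expects (the same shape the boto3 example below spells out by hand). If you call the raw API from TypeScript instead, a small marshalling helper can do that mapping. A sketch covering common scalar types only (no dates, blobs, or arrays); `toSqlParameters` is an illustrative name:

```typescript
// Sketch: build the Data API's { name, value: { longValue | stringValue | ... } }
// parameter list from a plain object. Common scalar types only.
type SqlValue =
  | { isNull: true }
  | { booleanValue: boolean }
  | { longValue: number }
  | { doubleValue: number }
  | { stringValue: string };

function toSqlParameters(params: Record<string, string | number | boolean | null>) {
  return Object.entries(params).map(([name, v]) => {
    let value: SqlValue;
    if (v === null) value = { isNull: true };
    else if (typeof v === 'boolean') value = { booleanValue: v };
    else if (typeof v === 'number') {
      value = Number.isInteger(v) ? { longValue: v } : { doubleValue: v };
    } else value = { stringValue: v };
    return { name, value };
  });
}

// toSqlParameters({ id: 5, active: true })
// → [{ name: 'id', value: { longValue: 5 } },
//    { name: 'active', value: { booleanValue: true } }]
```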

Python with boto3

requirements.txt

boto3>=1.34.0

db.py

python
import os

import boto3

rds_data = boto3.client('rds-data')

CLUSTER_ARN = os.environ['DB_CLUSTER_ARN']
SECRET_ARN = os.environ['DB_SECRET_ARN']
DATABASE = os.environ['DB_NAME']


def execute_sql(sql: str, parameters: list = None):
    """Execute SQL via Data API."""
    params = {
        'resourceArn': CLUSTER_ARN,
        'secretArn': SECRET_ARN,
        'database': DATABASE,
        'sql': sql
    }
    if parameters:
        params['parameters'] = parameters
    return rds_data.execute_statement(**params)


def get_user(user_id: int):
    result = execute_sql(
        'SELECT * FROM users WHERE id = :id',
        [{'name': 'id', 'value': {'longValue': user_id}}]
    )
    return result.get('records', [])


def create_user(email: str, name: str):
    result = execute_sql(
        'INSERT INTO users (email, name) VALUES (:email, :name) RETURNING *',
        [
            {'name': 'email', 'value': {'stringValue': email}},
            {'name': 'name', 'value': {'stringValue': name}}
        ]
    )
    return result.get('generatedFields')

Transaction

python
def transfer_funds(from_id: int, to_id: int, amount: float):
    transaction = rds_data.begin_transaction(
        resourceArn=CLUSTER_ARN,
        secretArn=SECRET_ARN,
        database=DATABASE
    )
    transaction_id = transaction['transactionId']

    def tx_sql(sql: str, parameters: list):
        # Statements must carry transactionId, or they run (and commit)
        # outside the transaction
        return rds_data.execute_statement(
            resourceArn=CLUSTER_ARN,
            secretArn=SECRET_ARN,
            database=DATABASE,
            sql=sql,
            parameters=parameters,
            transactionId=transaction_id
        )

    try:
        tx_sql(
            'UPDATE accounts SET balance = balance - :amount WHERE id = :id',
            [
                {'name': 'amount', 'value': {'doubleValue': amount}},
                {'name': 'id', 'value': {'longValue': from_id}}
            ]
        )
        tx_sql(
            'UPDATE accounts SET balance = balance + :amount WHERE id = :id',
            [
                {'name': 'amount', 'value': {'doubleValue': amount}},
                {'name': 'id', 'value': {'longValue': to_id}}
            ]
        )
        rds_data.commit_transaction(
            resourceArn=CLUSTER_ARN,
            secretArn=SECRET_ARN,
            transactionId=transaction_id
        )
    except Exception:
        rds_data.rollback_transaction(
            resourceArn=CLUSTER_ARN,
            secretArn=SECRET_ARN,
            transactionId=transaction_id
        )
        raise

---

Prisma with Aurora

Setup (VPC Connection via RDS Proxy)

bash
npm install prisma @prisma/client
npx prisma init
prisma
// prisma/schema.prisma
generator client {
  provider = "prisma-client-js"
}

datasource db {
  provider = "postgresql"
  url      = env("DATABASE_URL")
}

model User {
  id        Int      @id @default(autoincrement())
  email     String   @unique
  name      String
  posts     Post[]
  createdAt DateTime @default(now())
  updatedAt DateTime @updatedAt
}

model Post {
  id        Int      @id @default(autoincrement())
  title     String
  content   String?
  published Boolean  @default(false)
  author    User     @relation(fields: [authorId], references: [id])
  authorId  Int
  createdAt DateTime @default(now())
}

Environment

bash
# Use RDS Proxy endpoint
DATABASE_URL="postgresql://user:password@proxy-endpoint.proxy-xxx.region.rds.amazonaws.com:5432/mydb?schema=public&connection_limit=1"

Lambda Handler with Prisma

typescript
// handlers/users.ts
import { PrismaClient } from '@prisma/client';

// Reuse client across invocations
let prisma: PrismaClient | null = null;

function getPrisma(): PrismaClient {
  if (!prisma) {
    prisma = new PrismaClient({
      datasources: {
        db: { url: process.env.DATABASE_URL }
      }
    });
  }
  return prisma;
}

export async function handler(event: any) {
  const db = getPrisma();

  const users = await db.user.findMany({
    include: { posts: true },
    take: 10
  });

  return {
    statusCode: 200,
    body: JSON.stringify(users)
  };
}


Aurora Serverless v2

Capacity Configuration

typescript
// CDK
import * as ec2 from 'aws-cdk-lib/aws-ec2';
import * as rds from 'aws-cdk-lib/aws-rds';

const cluster = new rds.DatabaseCluster(this, 'Cluster', {
  engine: rds.DatabaseClusterEngine.auroraPostgres({
    version: rds.AuroraPostgresEngineVersion.VER_15_4
  }),
  serverlessV2MinCapacity: 0.5,  // Minimum ACUs
  serverlessV2MaxCapacity: 16,   // Maximum ACUs
  writer: rds.ClusterInstance.serverlessV2('writer'),
  readers: [
    rds.ClusterInstance.serverlessV2('reader', { scaleWithWriter: true })
  ],
  vpc,
  vpcSubnets: { subnetType: ec2.SubnetType.PRIVATE_WITH_EGRESS }
});

Capacity Guidelines

Workload             Min ACUs    Max ACUs
Dev/Test             0.5         2
Small Production     2           8
Medium Production    4           32
Large Production     8           128
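As a rule of thumb, each ACU corresponds to roughly 2 GiB of memory (with proportional CPU and network), and a Serverless v2 scaling range must sit within 0.5-128 ACUs. That arithmetic can be sketched as follows; the helper names are illustrative:

```typescript
// Sketch: sanity-check a Serverless v2 capacity choice.
// Roughly 2 GiB of memory per ACU; helper names are illustrative.
function acuToMemoryGiB(acus: number): number {
  return acus * 2;
}

function validateScalingRange(minAcus: number, maxAcus: number): void {
  if (minAcus < 0.5 || maxAcus > 128) {
    throw new Error('ACU capacity must stay within 0.5-128');
  }
  if (minAcus > maxAcus) {
    throw new Error('min capacity must not exceed max capacity');
  }
}

// "Medium Production" at 4-32 ACUs spans roughly 8-64 GiB of memory:
validateScalingRange(4, 32);
// acuToMemoryGiB(4) === 8, acuToMemoryGiB(32) === 64
```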

Handle Scale-to-Zero Wake-up

typescript
// Data API Client v2 handles this automatically
// For direct connections, implement retry logic:

import { Pool } from 'pg';

async function queryWithRetry(
  pool: Pool,
  sql: string,
  params: any[],
  maxRetries = 3
): Promise<any> {
  for (let attempt = 1; attempt <= maxRetries; attempt++) {
    try {
      return await pool.query(sql, params);
    } catch (error: any) {
      // Aurora Serverless waking up
      if (error.code === 'ETIMEDOUT' || error.message?.includes('Communications link failure')) {
        if (attempt === maxRetries) throw error;
        // Exponential backoff
        await new Promise(resolve => setTimeout(resolve, Math.pow(2, attempt) * 1000));
        continue;
      }
      throw error;
    }
  }
}


Migrations

Using Prisma Migrate

bash
# Development (creates migration)
npx prisma migrate dev --name add_users_table

# Production (apply migrations)
npx prisma migrate deploy

# Generate client
npx prisma generate

CI/CD Migration Script

yaml
# .github/workflows/deploy.yml
- name: Run migrations
  # Connect via bastion or use a migration Lambda
  run: |
    npx prisma migrate deploy
  env:
    DATABASE_URL: ${{ secrets.DATABASE_URL }}

Migration Lambda

typescript
// lambdas/migrate.ts
import { execSync } from 'child_process';

export async function handler() {
  try {
    execSync('npx prisma migrate deploy', {
      env: {
        ...process.env,
        DATABASE_URL: process.env.DATABASE_URL
      },
      stdio: 'inherit'
    });
    return { statusCode: 200, body: 'Migrations applied' };
  } catch (error) {
    console.error('Migration failed:', error);
    throw error;
  }
}


Connection Pooling (Non-Lambda)

PgBouncer Sidecar (ECS/EKS)

yaml
# docker-compose.yml
services:
  app:
    build: .
    environment:
      DATABASE_URL: postgresql://user:pass@pgbouncer:6432/mydb

  pgbouncer:
    image: edoburu/pgbouncer
    environment:
      DATABASE_URL: postgresql://user:pass@aurora-endpoint:5432/mydb
      POOL_MODE: transaction
      MAX_CLIENT_CONN: 1000
      DEFAULT_POOL_SIZE: 20

Application-Level Pooling

typescript
// For long-running servers (not Lambda)
import { Pool } from 'pg';

const pool = new Pool({
  host: process.env.DB_HOST,
  port: 5432,
  database: process.env.DB_NAME,
  user: process.env.DB_USER,
  password: process.env.DB_PASSWORD,
  max: 20,                  // Max connections
  idleTimeoutMillis: 30000, // Close idle after 30s
  connectionTimeoutMillis: 10000
});

// Use pool for all queries
export async function query(sql: string, params?: any[]) {
  const client = await pool.connect();
  try {
    return await client.query(sql, params);
  } finally {
    client.release();
  }
}

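For the same long-running servers, transactions should follow the acquire/BEGIN/COMMIT-or-ROLLBACK/release pattern. A sketch of a reusable wrapper, typed structurally so it matches pg's Pool without importing it here; the interface and function names are illustrative:

```typescript
// Sketch: run a callback inside a transaction on a pooled client.
// Structural types stand in for pg's Pool/PoolClient; names are illustrative.
interface QueryableClient {
  query(sql: string, params?: any[]): Promise<any>;
  release(): void;
}
interface ClientPool {
  connect(): Promise<QueryableClient>;
}

async function withTransaction<T>(
  pool: ClientPool,
  fn: (client: QueryableClient) => Promise<T>
): Promise<T> {
  const client = await pool.connect();
  try {
    await client.query('BEGIN');
    const result = await fn(client);
    await client.query('COMMIT');
    return result;
  } catch (err) {
    // Roll back on any error, then rethrow for the caller to handle
    await client.query('ROLLBACK');
    throw err;
  } finally {
    client.release();  // Always return the client to the pool
  }
}
```

The `finally` block guarantees the client is returned to the pool even when the callback throws, which is the most common leak in hand-rolled transaction code.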

Monitoring

Key CloudWatch Metrics

Aurora
  • CPUUtilization
  • DatabaseConnections
  • FreeableMemory
  • ServerlessDatabaseCapacity (ACUs)
  • AuroraReplicaLag

RDS Proxy
  • DatabaseConnectionsCurrentlyBorrowed
  • DatabaseConnectionsCurrentlySessionPinned
  • QueryDatabaseResponseLatency
  • ClientConnectionsReceived

Performance Insights

bash
# Enable via console or CLI
aws rds modify-db-cluster \
  --db-cluster-identifier my-cluster \
  --enable-performance-insights \
  --performance-insights-retention-period 7

---

Security

IAM Database Authentication

typescript
import { Pool } from 'pg';
import { Signer } from '@aws-sdk/rds-signer';

const signer = new Signer({
  hostname: process.env.DB_HOST!,
  port: 5432,
  username: 'iam_user',
  region: 'us-east-1'
});

const token = await signer.getAuthToken();

// Use token as password (valid for 15 minutes)
const pool = new Pool({
  host: process.env.DB_HOST,
  user: 'iam_user',
  password: token,
  ssl: true
});

Secrets Manager Rotation

typescript
import { SecretsManagerClient, GetSecretValueCommand } from '@aws-sdk/client-secrets-manager';

const client = new SecretsManagerClient({ region: 'us-east-1' });

async function getDbCredentials() {
  const response = await client.send(
    new GetSecretValueCommand({ SecretId: process.env.DB_SECRET_ARN })
  );
  return JSON.parse(response.SecretString!);
}


CLI Quick Reference

bash
# Cluster operations
aws rds describe-db-clusters
aws rds create-db-cluster --engine aurora-postgresql --db-cluster-identifier my-cluster
aws rds delete-db-cluster --db-cluster-identifier my-cluster --skip-final-snapshot

# Serverless v2
aws rds modify-db-cluster \
  --db-cluster-identifier my-cluster \
  --serverless-v2-scaling-configuration MinCapacity=0.5,MaxCapacity=16

# Data API
aws rds-data execute-statement \
  --resource-arn $CLUSTER_ARN \
  --secret-arn $SECRET_ARN \
  --database mydb \
  --sql "SELECT * FROM users"

# Proxy
aws rds describe-db-proxies
aws rds create-db-proxy --db-proxy-name my-proxy --engine-family POSTGRESQL ...

# Snapshots
aws rds create-db-cluster-snapshot --db-cluster-identifier my-cluster --db-cluster-snapshot-identifier backup-1
aws rds restore-db-cluster-from-snapshot --db-cluster-identifier restored --snapshot-identifier backup-1

---

Anti-Patterns

  • Direct Lambda→Aurora connections - Always use RDS Proxy or Data API
  • No connection limits - Set max: 1 for Lambda, use pooling for servers
  • Ignoring cold starts - Serverless v2 needs time to scale; keep minimum ACUs for production
  • No read replicas - Offload reads to replicas for heavy workloads
  • Missing IAM auth - Use IAM over static passwords when possible
  • No retry logic - Handle transient errors from scaling/failover
  • Over-provisioned capacity - Use Serverless v2 for variable workloads
  • Skipping Secrets Manager - Never hardcode credentials