aws-cloud-services
AWS Cloud Services
A comprehensive skill for building, deploying, and managing cloud infrastructure on Amazon Web Services (AWS). Master S3 object storage, Lambda serverless functions, DynamoDB NoSQL databases, EC2 compute instances, RDS relational databases, IAM security, CloudFormation infrastructure as code, and enterprise-grade cloud architecture patterns.
When to Use This Skill
Use this skill when:
- Building scalable cloud applications on AWS infrastructure
- Implementing serverless architectures with Lambda and API Gateway
- Managing object storage and file uploads with S3
- Designing NoSQL database solutions with DynamoDB
- Deploying EC2 instances and managing compute resources
- Setting up RDS databases for relational data storage
- Implementing IAM security policies and access control
- Automating infrastructure deployment with CloudFormation
- Architecting multi-region, highly available systems
- Optimizing cloud costs and performance
- Migrating on-premises applications to AWS
- Implementing event-driven architectures
- Building data pipelines and analytics solutions
- Managing secrets and credentials securely
- Setting up CI/CD pipelines with AWS services
Core Concepts
AWS Fundamentals
AWS is Amazon's comprehensive cloud computing platform offering 200+ services across compute, storage, databases, networking, security, and more.
Key Concepts
Regions and Availability Zones
- Regions: Geographic areas with multiple data centers (e.g., us-east-1, eu-west-1)
- Availability Zones (AZs): Isolated data centers within a region
- Edge Locations: CDN endpoints for CloudFront content delivery
- Local Zones: Extensions of regions for ultra-low latency
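The region name feeds directly into service endpoints. A rough illustration of how they compose (`buildEndpoint` is a hypothetical helper, not an SDK API; most, but not all, regional endpoints follow this pattern):

```javascript
// Hypothetical helper: most regional AWS endpoints follow the pattern
// <service>.<region>.amazonaws.com (some services, like IAM, are global
// and deviate from it).
const buildEndpoint = (service, region) =>
  `https://${service}.${region}.amazonaws.com`;

console.log(buildEndpoint('s3', 'us-east-1'));       // https://s3.us-east-1.amazonaws.com
console.log(buildEndpoint('dynamodb', 'eu-west-1')); // https://dynamodb.eu-west-1.amazonaws.com
```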
AWS Account Structure
- Root Account: Primary account with full access (use sparingly)
- IAM Users: Individual user accounts with specific permissions
- IAM Roles: Temporary credentials for services and applications
- Organizations: Multi-account management for enterprises
Service Categories
- Compute: EC2, Lambda, ECS, EKS, Fargate
- Storage: S3, EBS, EFS, Glacier
- Database: RDS, DynamoDB, Aurora, ElastiCache, Redshift
- Networking: VPC, Route 53, CloudFront, API Gateway, ELB
- Security: IAM, Cognito, Secrets Manager, KMS, WAF
- Infrastructure: CloudFormation, CDK, Systems Manager
- Monitoring: CloudWatch, X-Ray, CloudTrail
AWS SDK for JavaScript v3
The AWS SDK v3 is modular, tree-shakable, and optimized for modern JavaScript/TypeScript applications.
Key Improvements
Modular Architecture
```javascript
// v2 (monolithic)
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

// v3 (modular)
import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3';
const client = new S3Client({ region: 'us-east-1' });
```
Command Pattern
- Each operation is a command class
- Clear separation between client and commands
- Better TypeScript support and type inference
Middleware Stack
- Customizable request/response pipeline
- Built-in retry and exponential backoff
- Request signing and authentication
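The built-in retry behavior is easiest to understand as capped exponential backoff with jitter. A simplified sketch of the idea (the SDK's actual strategy also classifies which errors are retryable and caps total attempts):

```javascript
// Simplified sketch of capped exponential backoff with "full jitter":
// the wait ceiling doubles each attempt, and the actual delay is a random
// value below that ceiling, which avoids synchronized retry storms.
const backoffDelayMs = (attempt, baseMs = 100, capMs = 20000) => {
  const ceiling = Math.min(capMs, baseMs * 2 ** attempt);
  return Math.floor(Math.random() * ceiling);
};

for (let attempt = 0; attempt < 4; attempt++) {
  console.log(`attempt ${attempt}: delay < ${Math.min(20000, 100 * 2 ** attempt)}ms`);
}
```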
Identity and Access Management (IAM)
IAM controls authentication and authorization across all AWS services.
Core IAM Components
Users
- Individual identities with long-term credentials
- Access keys for programmatic access
- Passwords for console access
- MFA (Multi-Factor Authentication) support
Groups
- Collections of users
- Attach policies to manage permissions collectively
- Users can belong to multiple groups
Roles
- Temporary credentials assumed by users, services, or applications
- Cross-account access
- Service-to-service communication
- Federation with external identity providers
Policies
- JSON documents defining permissions
- Identity-based policies (attached to users/groups/roles)
- Resource-based policies (attached to resources like S3 buckets)
- Service control policies (SCPs) for Organizations
Policy Structure
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject"
      ],
      "Resource": "arn:aws:s3:::my-bucket/*",
      "Condition": {
        "IpAddress": {
          "aws:SourceIp": "203.0.113.0/24"
        }
      }
    }
  ]
}
```
Policy Elements
- Effect: Allow or Deny
- Action: Services and operations (e.g., s3:GetObject)
- Resource: ARN of resources affected
- Condition: Optional constraints (IP, time, MFA, etc.)
- Principal: Who the policy applies to (for resource-based policies)
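The evaluation logic behind these elements can be sketched in a few lines: an explicit Deny always wins, any matching Allow grants access, and everything else is implicitly denied. This toy evaluator matches actions and resources by exact string (real IAM also resolves wildcards, conditions, and resource-based policies):

```javascript
// Toy IAM evaluator: explicit Deny > explicit Allow > implicit Deny.
const evaluate = (statements, action, resource) => {
  const matches = statements.filter(
    s => [].concat(s.Action).includes(action) &&
         [].concat(s.Resource).includes(resource)
  );
  if (matches.some(s => s.Effect === 'Deny')) return 'Deny';   // explicit deny wins
  if (matches.some(s => s.Effect === 'Allow')) return 'Allow'; // explicit allow
  return 'Deny';                                               // implicit deny
};

const statements = [
  { Effect: 'Allow', Action: ['s3:GetObject'], Resource: ['arn:aws:s3:::my-bucket/file.txt'] },
  { Effect: 'Deny',  Action: ['s3:GetObject'], Resource: ['arn:aws:s3:::my-bucket/secret.txt'] }
];
console.log(evaluate(statements, 's3:GetObject', 'arn:aws:s3:::my-bucket/file.txt'));   // Allow
console.log(evaluate(statements, 's3:GetObject', 'arn:aws:s3:::my-bucket/secret.txt')); // Deny
```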
Least Privilege Principle
Always grant minimum permissions necessary:
- Start with no permissions
- Add permissions incrementally as needed
- Use managed policies for common patterns
- Create custom policies for specific use cases
- Regularly audit and remove unused permissions
Credential Management
Credential Chain
The SDK searches for credentials in this order:
- Environment variables: AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY
- Shared credentials file: ~/.aws/credentials
- Shared config file: ~/.aws/config
- IAM role (EC2/ECS/Lambda): Instance metadata service
- Process credentials: From a custom executable
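The chain itself is just "first provider that yields credentials wins". A toy synchronous sketch (the SDK's real default chain does the same walk asynchronously over the sources above; the providers here are stand-ins):

```javascript
// Toy model of a credential provider chain: each provider either returns
// a credentials object or null; the first non-null result wins.
const resolveCredentials = (providers) => {
  for (const provider of providers) {
    const creds = provider();
    if (creds) return creds;
  }
  throw new Error('Unable to locate credentials');
};

// Stand-in providers mirroring the real lookup order.
const fromEnv = () =>
  process.env.AWS_ACCESS_KEY_ID && process.env.AWS_SECRET_ACCESS_KEY
    ? { source: 'environment' }
    : null;
const fromSharedFile = () => null;                           // would parse ~/.aws/credentials
const fromInstanceMetadata = () => ({ source: 'iam-role' }); // would query instance metadata

const creds = resolveCredentials([fromEnv, fromSharedFile, fromInstanceMetadata]);
console.log(creds.source);
```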
Best Practices
- Never hardcode credentials in source code
- Use IAM roles for EC2, Lambda, ECS
- Use temporary credentials whenever possible
- Rotate access keys regularly (90 days recommended)
- Use AWS Secrets Manager for application secrets
- Enable MFA for privileged accounts
- Use AWS SSO for centralized access management
Regions and Endpoint Configuration
```javascript
import { S3Client } from '@aws-sdk/client-s3';

// Specify region explicitly
const client = new S3Client({
  region: 'us-west-2',
  endpoint: 'https://s3.us-west-2.amazonaws.com' // Optional custom endpoint
});

// Use default region from environment/config
const defaultClient = new S3Client({}); // Uses AWS_REGION or the default region
```
S3 (Simple Storage Service)
S3 is AWS's object storage service for storing and retrieving any amount of data from anywhere.
Core S3 Concepts
Buckets
- Globally unique names: Must be unique across all AWS accounts
- Regional resources: Created in a specific region
- Unlimited objects: No limit on number of objects
- Bucket policies: Resource-based access control
- Versioning: Keep multiple versions of objects
- Encryption: Server-side and client-side encryption
Objects
- Key-value store: Key is the object name, value is the data
- Metadata: System and user-defined metadata
- Size limit: 5TB per object
- Multipart upload: For objects > 100MB (required for > 5GB)
- Storage classes: Standard, IA, Glacier, etc.
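Two of those limits interact: a multipart upload allows at most 10,000 parts, and every part except the last must be at least 5 MB. A small sketch of choosing a part size that satisfies both (`choosePartSize` is an illustrative helper, not an SDK API):

```javascript
// S3 multipart constraints: <= 10,000 parts, >= 5 MB per part (except the last).
const MIN_PART = 5 * 1024 * 1024;
const MAX_PARTS = 10000;

// Pick the smallest allowed part size that keeps the part count within limits.
const choosePartSize = (fileSize) =>
  Math.max(MIN_PART, Math.ceil(fileSize / MAX_PARTS));

const fileSize = 200 * 1024 * 1024 * 1024; // 200 GB
const partSize = choosePartSize(fileSize);
console.log(partSize, Math.ceil(fileSize / partSize)); // part size in bytes, part count
```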
S3 Storage Classes
S3 Standard
- Frequently accessed data
- 99.99% availability
- Millisecond latency
S3 Intelligent-Tiering
- Automatic cost optimization
- Moves data between access tiers
S3 Standard-IA (Infrequent Access)
- Lower cost for infrequently accessed data
- Retrieval fees apply
S3 One Zone-IA
- Single AZ storage for less critical data
- 20% cheaper than Standard-IA
S3 Glacier
- Long-term archival
- Minutes to hours retrieval
- Very low cost
S3 Glacier Deep Archive
- Lowest cost storage
- 12-hour retrieval
- Ideal for compliance archives
S3 Operations
Upload Objects
```javascript
import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3';
import { readFileSync } from 'fs';

const client = new S3Client({ region: 'us-east-1' });

// Simple upload
const uploadFile = async (bucketName, key, filePath) => {
  const fileContent = readFileSync(filePath);
  const command = new PutObjectCommand({
    Bucket: bucketName,
    Key: key,
    Body: fileContent,
    ContentType: 'image/jpeg', // Optional
    Metadata: { // Optional custom metadata
      'uploaded-by': 'user-123',
      'upload-date': new Date().toISOString()
    },
    ServerSideEncryption: 'AES256', // Enable encryption
    ACL: 'private' // Access control
  });
  const response = await client.send(command);
  return response;
};
```
Download Objects
```javascript
import { GetObjectCommand } from '@aws-sdk/client-s3';
import { writeFileSync } from 'fs';

const downloadFile = async (bucketName, key, destinationPath) => {
  const command = new GetObjectCommand({
    Bucket: bucketName,
    Key: key
  });
  const response = await client.send(command);
  // Convert stream to buffer
  const chunks = [];
  for await (const chunk of response.Body) {
    chunks.push(chunk);
  }
  const buffer = Buffer.concat(chunks);
  writeFileSync(destinationPath, buffer);
  return response.Metadata;
};
```
List Objects
```javascript
import { ListObjectsV2Command } from '@aws-sdk/client-s3';

const listObjects = async (bucketName, prefix = '') => {
  const command = new ListObjectsV2Command({
    Bucket: bucketName,
    Prefix: prefix, // Filter by prefix
    MaxKeys: 1000, // Max 1000 per request
    Delimiter: '/' // Treat / as folder separator
  });
  const response = await client.send(command);
  return response.Contents; // Array of objects
};

// Pagination for large buckets
const listAllObjects = async (bucketName) => {
  let allObjects = [];
  let continuationToken;
  do {
    const command = new ListObjectsV2Command({
      Bucket: bucketName,
      ContinuationToken: continuationToken
    });
    const response = await client.send(command);
    allObjects = allObjects.concat(response.Contents || []);
    continuationToken = response.NextContinuationToken;
  } while (continuationToken);
  return allObjects;
};
```
Delete Objects
```javascript
import { DeleteObjectCommand, DeleteObjectsCommand } from '@aws-sdk/client-s3';

// Delete single object
const deleteObject = async (bucketName, key) => {
  const command = new DeleteObjectCommand({
    Bucket: bucketName,
    Key: key
  });
  await client.send(command);
};

// Delete multiple objects (up to 1000 at once)
const deleteMultipleObjects = async (bucketName, keys) => {
  const command = new DeleteObjectsCommand({
    Bucket: bucketName,
    Delete: {
      Objects: keys.map(key => ({ Key: key })),
      Quiet: false // Return list of deleted objects
    }
  });
  const response = await client.send(command);
  return response.Deleted;
};
```
Presigned URLs
Generate temporary URLs for secure file uploads/downloads without AWS credentials.
```javascript
import { getSignedUrl } from '@aws-sdk/s3-request-presigner';
import { PutObjectCommand, GetObjectCommand } from '@aws-sdk/client-s3';

// Presigned URL for upload
const createUploadUrl = async (bucketName, key, expiresIn = 3600) => {
  const command = new PutObjectCommand({
    Bucket: bucketName,
    Key: key,
    ContentType: 'image/jpeg'
  });
  const url = await getSignedUrl(client, command, { expiresIn });
  return url; // Client can PUT to this URL
};

// Presigned URL for download
const createDownloadUrl = async (bucketName, key, expiresIn = 3600) => {
  const command = new GetObjectCommand({
    Bucket: bucketName,
    Key: key
  });
  const url = await getSignedUrl(client, command, { expiresIn });
  return url; // Client can GET from this URL
};
```
Multipart Upload
For large files (> 100MB), use multipart upload for better performance and reliability.
```javascript
import {
  CreateMultipartUploadCommand,
  UploadPartCommand,
  CompleteMultipartUploadCommand,
  AbortMultipartUploadCommand
} from '@aws-sdk/client-s3';

const multipartUpload = async (bucketName, key, fileBuffer, partSize = 5 * 1024 * 1024) => {
  // 1. Initiate multipart upload
  const createCommand = new CreateMultipartUploadCommand({
    Bucket: bucketName,
    Key: key
  });
  const { UploadId } = await client.send(createCommand);
  try {
    // 2. Upload parts
    const parts = [];
    const numParts = Math.ceil(fileBuffer.length / partSize);
    for (let i = 0; i < numParts; i++) {
      const start = i * partSize;
      const end = Math.min(start + partSize, fileBuffer.length);
      const partBody = fileBuffer.slice(start, end);
      const uploadCommand = new UploadPartCommand({
        Bucket: bucketName,
        Key: key,
        UploadId,
        PartNumber: i + 1,
        Body: partBody
      });
      const { ETag } = await client.send(uploadCommand);
      parts.push({ PartNumber: i + 1, ETag });
    }
    // 3. Complete multipart upload
    const completeCommand = new CompleteMultipartUploadCommand({
      Bucket: bucketName,
      Key: key,
      UploadId,
      MultipartUpload: { Parts: parts }
    });
    const result = await client.send(completeCommand);
    return result;
  } catch (error) {
    // Abort on error to avoid storage charges for incomplete uploads
    const abortCommand = new AbortMultipartUploadCommand({
      Bucket: bucketName,
      Key: key,
      UploadId
    });
    await client.send(abortCommand);
    throw error;
  }
};
```
Lambda
AWS Lambda is a serverless compute service that runs code in response to events without provisioning servers.
Lambda Core Concepts
Execution Model
- Event-driven: Triggered by events from AWS services or HTTP requests
- Stateless: Each invocation is independent
- Concurrent execution: Automatically scales based on demand
- Timeout: 15-minute maximum execution time
- Memory: 128MB to 10GB (CPU scales with memory)
Handler Function
```javascript
// Lambda handler signature
export const handler = async (event, context) => {
  // event: Input data (API request, S3 event, etc.)
  // context: Runtime information (request ID, remaining time, etc.)
  console.log('Event:', JSON.stringify(event, null, 2));
  console.log('Request ID:', context.awsRequestId);
  console.log('Remaining time:', context.getRemainingTimeInMillis());

  // Process event
  const result = await processEvent(event);

  // Return response
  return {
    statusCode: 200,
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(result)
  };
};
```
Invocation Types
Synchronous (RequestResponse)
- API Gateway, SDK invoke
- Caller waits for response
- Error returned to caller
Asynchronous (Event)
- S3, SNS, CloudWatch Events
- Lambda queues event and returns immediately
- Built-in retry (2 attempts)
- Dead letter queue for failures
Poll-based (Stream)
- DynamoDB Streams, Kinesis
- Lambda polls stream and invokes function
- Ordered processing within shard
Lambda Configuration
```javascript
// Using AWS SDK to create/update Lambda function
import {
  LambdaClient,
  CreateFunctionCommand,
  UpdateFunctionCodeCommand,
  UpdateFunctionConfigurationCommand
} from '@aws-sdk/client-lambda';

const lambdaClient = new LambdaClient({ region: 'us-east-1' });

const createFunction = async () => {
  const command = new CreateFunctionCommand({
    FunctionName: 'myFunction',
    Runtime: 'nodejs20.x',
    Role: 'arn:aws:iam::123456789012:role/lambda-execution-role',
    Handler: 'index.handler',
    Code: {
      ZipFile: zipBuffer // Buffer containing the zipped package; or S3Bucket/S3Key for S3-stored code
    },
    Environment: {
      Variables: {
        'BUCKET_NAME': 'my-bucket',
        'TABLE_NAME': 'my-table'
      }
    },
    MemorySize: 512, // MB
    Timeout: 30, // seconds
    Tags: {
      'Environment': 'production',
      'Team': 'backend'
    }
  });
  const response = await lambdaClient.send(command);
  return response.FunctionArn;
};
```
Lambda Event Sources
API Gateway Integration
```javascript
// Lambda function for API Gateway
export const handler = async (event) => {
  // Parse request
  const { httpMethod, path, queryStringParameters, body } = event;
  const requestBody = body ? JSON.parse(body) : null;

  // Route based on HTTP method and path
  if (httpMethod === 'GET' && path === '/users') {
    const users = await getUsers();
    return {
      statusCode: 200,
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify(users)
    };
  }
  if (httpMethod === 'POST' && path === '/users') {
    const newUser = await createUser(requestBody);
    return {
      statusCode: 201,
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify(newUser)
    };
  }

  // Not found
  return {
    statusCode: 404,
    body: JSON.stringify({ message: 'Not found' })
  };
};
```
S3 Event Integration
```javascript
// Lambda function triggered by S3 events
export const handler = async (event) => {
  // Process each S3 event record
  for (const record of event.Records) {
    const bucket = record.s3.bucket.name;
    const key = decodeURIComponent(record.s3.object.key.replace(/\+/g, ' '));
    const eventName = record.eventName;
    console.log(`Event: ${eventName}, Bucket: ${bucket}, Key: ${key}`);
    if (eventName.startsWith('ObjectCreated:')) {
      await processNewFile(bucket, key);
    } else if (eventName.startsWith('ObjectRemoved:')) {
      await handleFileDeleted(bucket, key);
    }
  }
  return { statusCode: 200 };
};
```
DynamoDB Streams Integration
```javascript
// Lambda function for DynamoDB Streams
import { unmarshall } from '@aws-sdk/util-dynamodb';

export const handler = async (event) => {
  for (const record of event.Records) {
    const { eventName, dynamodb } = record;
    // INSERT, MODIFY, REMOVE
    console.log(`Event: ${eventName}`);
    if (eventName === 'INSERT') {
      const newItem = unmarshall(dynamodb.NewImage);
      await handleNewItem(newItem);
    }
    if (eventName === 'MODIFY') {
      const oldItem = unmarshall(dynamodb.OldImage);
      const newItem = unmarshall(dynamodb.NewImage);
      await handleItemUpdate(oldItem, newItem);
    }
    if (eventName === 'REMOVE') {
      const oldItem = unmarshall(dynamodb.OldImage);
      await handleItemDeleted(oldItem);
    }
  }
};
```
Lambda Best Practices
Cold Start Optimization
- Keep deployment package small
- Minimize external dependencies
- Use provisioned concurrency for latency-sensitive functions
- Initialize SDK clients outside handler
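The last point is worth showing: module scope runs once per execution environment (cold start), and warm invocations reuse whatever was built there. A minimal sketch with a stand-in `createClient` (in real code this would be e.g. `new DynamoDBClient(...)`):

```javascript
// Stand-in for an expensive SDK client constructor; counts how often it runs.
let initCount = 0;
const createClient = () => {
  initCount += 1;
  return { send: async () => 'ok' };
};

// Module scope: executed once per cold start, then reused while warm.
const client = createClient();

const handler = async (event) => {
  // Every invocation reuses the same client instance.
  return client.send();
};

handler({});
handler({});
console.log(initCount); // the constructor ran once, not once per invocation
```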
Error Handling
```javascript
export const handler = async (event, context) => {
  try {
    // Process event
    const result = await processEvent(event);
    return { statusCode: 200, body: JSON.stringify(result) };
  } catch (error) {
    console.error('Error processing event:', error);
    // Log to CloudWatch
    console.error('Error details:', {
      message: error.message,
      stack: error.stack,
      event
    });
    // Return error response
    return {
      statusCode: 500,
      body: JSON.stringify({
        error: 'Internal server error',
        requestId: context.awsRequestId
      })
    };
  }
};
```
Environment Variables
```javascript
// Access environment variables
const BUCKET_NAME = process.env.BUCKET_NAME;
const TABLE_NAME = process.env.TABLE_NAME;
const API_KEY = process.env.API_KEY; // Use Secrets Manager for sensitive data
```
DynamoDB
DynamoDB is a fully managed NoSQL database service for single-digit millisecond performance at any scale.
DynamoDB Core Concepts
Tables and Items
Table: Collection of items (like a table in SQL)
Item: Individual record (like a row), max 400KB
Attribute: Key-value pair (like a column)
Primary Key: Uniquely identifies each item
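The 400KB item limit is easy to trip over with large nested attributes. A rough pre-check on serialized size (an approximation only: DynamoDB counts attribute names plus values in its own encoding, not JSON bytes; `fitsInDynamo` is an illustrative helper):

```javascript
// DynamoDB rejects items larger than 400 KB; approximate the size via JSON.
const MAX_ITEM_BYTES = 400 * 1024;

const approxItemSize = (item) => Buffer.byteLength(JSON.stringify(item), 'utf8');
const fitsInDynamo = (item) => approxItemSize(item) <= MAX_ITEM_BYTES;

console.log(fitsInDynamo({ userId: 'user-123', name: 'John Doe' })); // true
console.log(fitsInDynamo({ blob: 'x'.repeat(500 * 1024) }));         // false
```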
Primary Key Types
Partition Key (Simple Primary Key)
```
User Table:
- userId (Partition Key) -> "user-123"
- name -> "John Doe"
- email -> "john@example.com"
```
Partition Key + Sort Key (Composite Primary Key)
```
Order Table:
- userId (Partition Key) -> "user-123"
- orderId (Sort Key) -> "order-456"
- total -> 99.99
- status -> "shipped"
```
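The composite key matters because items sharing a partition key are stored sorted by sort key, which is what makes range queries ("all orders for user-123") a single efficient Query. A toy model of that ordering (plain arrays standing in for a partition):

```javascript
// Items in one partition (same userId); DynamoDB keeps them ordered by the
// sort key — lexicographically for string keys, as modeled here.
const ordersPartition = [
  { userId: 'user-123', orderId: 'order-456', total: 99.99 },
  { userId: 'user-123', orderId: 'order-123', total: 10.0 },
  { userId: 'user-123', orderId: 'order-789', total: 42.5 }
];

const sorted = [...ordersPartition].sort((a, b) => a.orderId.localeCompare(b.orderId));
console.log(sorted.map(o => o.orderId)); // sort-key order within the partition
```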
Indexes
Global Secondary Index (GSI)
- Different partition key and/or sort key
- Spans all partitions
- Eventually consistent
- Can be created/deleted anytime
Local Secondary Index (LSI)
- Same partition key, different sort key
- Scoped to partition
- Strongly or eventually consistent
- Must be created with table
DynamoDB Operations
Put Item
```javascript
import { DynamoDBClient } from '@aws-sdk/client-dynamodb';
import { DynamoDBDocumentClient, PutCommand } from '@aws-sdk/lib-dynamodb';

const client = new DynamoDBClient({ region: 'us-east-1' });
const docClient = DynamoDBDocumentClient.from(client);

const putItem = async (tableName, item) => {
  const command = new PutCommand({
    TableName: tableName,
    Item: item,
    ConditionExpression: 'attribute_not_exists(userId)', // Prevent overwrite
    ReturnValues: 'ALL_OLD' // Return previous item if existed
  });
  try {
    const response = await docClient.send(command);
    return response;
  } catch (error) {
    if (error.name === 'ConditionalCheckFailedException') {
      console.log('Item already exists');
    }
    throw error;
  }
};

// Example usage
await putItem('Users', {
  userId: 'user-123',
  name: 'John Doe',
  email: 'john@example.com',
  createdAt: new Date().toISOString(),
  preferences: {
    theme: 'dark',
    notifications: true
  }
});
```
Get Item
javascript
import { GetCommand } from '@aws-sdk/lib-dynamodb';
const getItem = async (tableName, key) => {
const command = new GetCommand({
TableName: tableName,
Key: key,
ConsistentRead: true, // Strong consistency (default: false)
ProjectionExpression: 'userId, #n, email', // Return specific attributes
ExpressionAttributeNames: {
'#n': 'name' // name is reserved word, use placeholder
}
});
const response = await docClient.send(command);
return response.Item;
};
// Example usage
const user = await getItem('Users', { userId: 'user-123' });
Update Item
javascript
import { UpdateCommand } from '@aws-sdk/lib-dynamodb';
const updateItem = async (tableName, key, updates) => {
const command = new UpdateCommand({
TableName: tableName,
Key: key,
UpdateExpression: 'SET #n = :name, email = :email, updatedAt = :now',
ExpressionAttributeNames: {
'#n': 'name'
},
ExpressionAttributeValues: {
':name': updates.name,
':email': updates.email,
':now': new Date().toISOString()
},
ConditionExpression: 'attribute_exists(userId)', // Only update if exists
ReturnValues: 'ALL_NEW' // Return updated item
});
const response = await docClient.send(command);
return response.Attributes;
};
// Atomic counter increment
const incrementCounter = async (tableName, key, counterAttribute) => {
const command = new UpdateCommand({
TableName: tableName,
Key: key,
UpdateExpression: 'ADD #counter :inc',
ExpressionAttributeNames: {
'#counter': counterAttribute
},
ExpressionAttributeValues: {
':inc': 1
},
ReturnValues: 'UPDATED_NEW'
});
const response = await docClient.send(command);
return response.Attributes[counterAttribute];
};
Query
Query items with same partition key (efficient).
javascript
import { QueryCommand } from '@aws-sdk/lib-dynamodb';
const queryItems = async (tableName, partitionKeyValue) => {
const command = new QueryCommand({
TableName: tableName,
KeyConditionExpression: 'userId = :userId AND orderId BETWEEN :start AND :end',
FilterExpression: 'orderStatus = :status', // Filter results (applied after the key condition; still consumes read capacity)
ExpressionAttributeValues: {
':userId': partitionKeyValue,
':start': 'order-100',
':end': 'order-200',
':status': 'completed'
},
Limit: 100, // Max items to return
ScanIndexForward: false // Sort descending (default: ascending)
});
const response = await docClient.send(command);
return response.Items;
};
// Pagination
const queryAllItems = async (tableName, partitionKeyValue) => {
let allItems = [];
let lastEvaluatedKey;
do {
const command = new QueryCommand({
TableName: tableName,
KeyConditionExpression: 'userId = :userId',
ExpressionAttributeValues: {
':userId': partitionKeyValue
},
ExclusiveStartKey: lastEvaluatedKey
});
const response = await docClient.send(command);
allItems = allItems.concat(response.Items);
lastEvaluatedKey = response.LastEvaluatedKey;
} while (lastEvaluatedKey);
return allItems;
};
Scan
Scan entire table (inefficient, avoid in production).
javascript
import { ScanCommand } from '@aws-sdk/lib-dynamodb';
const scanTable = async (tableName) => {
const command = new ScanCommand({
TableName: tableName,
FilterExpression: 'age > :minAge',
ExpressionAttributeValues: {
':minAge': 18
},
Limit: 1000
});
const response = await docClient.send(command);
return response.Items;
};
// Parallel scan for performance
const parallelScan = async (tableName, totalSegments = 4) => {
const scanSegment = async (segment) => {
const command = new ScanCommand({
TableName: tableName,
Segment: segment,
TotalSegments: totalSegments
});
const response = await docClient.send(command);
return response.Items;
};
// Scan all segments in parallel
const promises = [];
for (let i = 0; i < totalSegments; i++) {
promises.push(scanSegment(i));
}
const results = await Promise.all(promises);
return results.flat();
};
Delete Item
javascript
import { DeleteCommand } from '@aws-sdk/lib-dynamodb';
const deleteItem = async (tableName, key) => {
const command = new DeleteCommand({
TableName: tableName,
Key: key,
ConditionExpression: 'attribute_exists(userId)', // Only delete if exists
ReturnValues: 'ALL_OLD' // Return deleted item
});
const response = await docClient.send(command);
return response.Attributes;
};
Batch Operations
javascript
import { BatchGetCommand, BatchWriteCommand } from '@aws-sdk/lib-dynamodb';
// Batch get (up to 100 items)
const batchGetItems = async (tableName, keys) => {
const command = new BatchGetCommand({
RequestItems: {
[tableName]: {
Keys: keys // Array of key objects
}
}
});
const response = await docClient.send(command);
return response.Responses[tableName];
};
// Batch write (up to 25 items)
const batchWriteItems = async (tableName, items) => {
const command = new BatchWriteCommand({
RequestItems: {
[tableName]: items.map(item => ({
PutRequest: { Item: item }
}))
}
});
await docClient.send(command);
};
// Batch delete
const batchDeleteItems = async (tableName, keys) => {
const command = new BatchWriteCommand({
RequestItems: {
[tableName]: keys.map(key => ({
DeleteRequest: { Key: key }
}))
}
});
await docClient.send(command);
};
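Because BatchWriteItem caps each request at 25 items (and BatchGetItem at 100), larger arrays must be chunked before calling the helpers above. A minimal chunking sketch follows; note that production code should also retry any `UnprocessedItems` the response reports:

```javascript
// Split an array into chunks no larger than DynamoDB's batch limits
// (25 for BatchWriteItem, 100 for BatchGetItem).
const chunkItems = (items, size = 25) => {
  const chunks = [];
  for (let i = 0; i < items.length; i += size) {
    chunks.push(items.slice(i, i + size));
  }
  return chunks;
};
```

Each chunk can then be passed to `batchWriteItems` in turn.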
DynamoDB Patterns
Single-Table Design
Use one table with overloaded keys for complex data models.
javascript
// User entity
{
PK: "USER#user-123",
SK: "METADATA",
type: "user",
name: "John Doe",
email: "john@example.com"
}
// User's order
{
PK: "USER#user-123",
SK: "ORDER#order-456",
type: "order",
total: 99.99,
status: "shipped"
}
// Access patterns:
// 1. Get user: PK = "USER#user-123", SK = "METADATA"
// 2. Get all user's orders: PK = "USER#user-123", SK begins_with "ORDER#"
// 3. Get specific order: PK = "USER#user-123", SK = "ORDER#order-456"
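The access patterns above can be captured as small key-builder helpers (hypothetical names), which keeps the PK/SK string formats in one place instead of scattered across queries:

```javascript
// Hypothetical key builders for the single-table layout above.
const userKey = (userId) => ({ PK: `USER#${userId}`, SK: 'METADATA' });
const orderKey = (userId, orderId) => ({ PK: `USER#${userId}`, SK: `ORDER#${orderId}` });

// Query input fragment for "all of a user's orders" (access pattern 2).
const userOrdersCondition = (userId) => ({
  KeyConditionExpression: 'PK = :pk AND begins_with(SK, :prefix)',
  ExpressionAttributeValues: { ':pk': `USER#${userId}`, ':prefix': 'ORDER#' }
});
```

The condition fragment can be spread into a `QueryCommand` input alongside `TableName`.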
EC2 (Elastic Compute Cloud)
EC2 provides resizable compute capacity in the cloud with virtual machines (instances).
EC2 Core Concepts
Instance Types
General Purpose (T3, M6i)
- Balanced CPU, memory, and networking
- Web servers, development environments
Compute Optimized (C6i)
- High-performance processors
- Batch processing, gaming servers
Memory Optimized (R6i, X2idn)
- Large in-memory workloads
- Databases, caching layers
Storage Optimized (I4i, D3)
- High sequential read/write
- Data warehouses, distributed file systems
Accelerated Computing (P4, G5)
- GPU instances
- Machine learning, graphics rendering
AMI (Amazon Machine Image)
Pre-configured templates for instances containing:
- Operating system (Amazon Linux, Ubuntu, Windows, etc.)
- Application software
- Configuration settings
EC2 Operations
javascript
import {
EC2Client,
RunInstancesCommand,
DescribeInstancesCommand,
StartInstancesCommand,
StopInstancesCommand,
TerminateInstancesCommand
} from '@aws-sdk/client-ec2';
const ec2Client = new EC2Client({ region: 'us-east-1' });
// Launch instance
const launchInstance = async () => {
const command = new RunInstancesCommand({
ImageId: 'ami-0c55b159cbfafe1f0', // Amazon Linux 2 AMI
InstanceType: 't3.micro',
MinCount: 1,
MaxCount: 1,
KeyName: 'my-key-pair',
SecurityGroupIds: ['sg-0123456789abcdef0'],
SubnetId: 'subnet-0123456789abcdef0',
IamInstanceProfile: {
Name: 'ec2-instance-profile'
},
UserData: Buffer.from(`#!/bin/bash
yum update -y
yum install -y httpd
systemctl start httpd
systemctl enable httpd
echo "Hello from EC2" > /var/www/html/index.html
`).toString('base64'),
TagSpecifications: [{
ResourceType: 'instance',
Tags: [
{ Key: 'Name', Value: 'WebServer' },
{ Key: 'Environment', Value: 'production' }
]
}]
});
const response = await ec2Client.send(command);
return response.Instances[0].InstanceId;
};
// Describe instances
const describeInstances = async (instanceIds) => {
const command = new DescribeInstancesCommand({
InstanceIds: instanceIds,
Filters: [
{ Name: 'instance-state-name', Values: ['running'] }
]
});
const response = await ec2Client.send(command);
return response.Reservations.flatMap(r => r.Instances);
};
// Stop instance
const stopInstance = async (instanceId) => {
const command = new StopInstancesCommand({
InstanceIds: [instanceId]
});
await ec2Client.send(command);
};
// Terminate instance
const terminateInstance = async (instanceId) => {
const command = new TerminateInstancesCommand({
InstanceIds: [instanceId]
});
await ec2Client.send(command);
};
RDS (Relational Database Service)
RDS provides managed relational databases (PostgreSQL, MySQL, MariaDB, Oracle, SQL Server, Aurora).
RDS Operations
javascript
import {
RDSClient,
CreateDBInstanceCommand,
DescribeDBInstancesCommand,
ModifyDBInstanceCommand,
DeleteDBInstanceCommand
} from '@aws-sdk/client-rds';
const rdsClient = new RDSClient({ region: 'us-east-1' });
// Create database instance
const createDatabase = async () => {
const command = new CreateDBInstanceCommand({
DBInstanceIdentifier: 'mydb',
DBInstanceClass: 'db.t3.micro',
Engine: 'postgres',
EngineVersion: '15.3',
MasterUsername: 'admin',
MasterUserPassword: 'SecurePassword123!', // Example only; in production, prefer Secrets Manager over hardcoded passwords
AllocatedStorage: 20, // GB
StorageType: 'gp3',
BackupRetentionPeriod: 7, // days
MultiAZ: true, // High availability
PubliclyAccessible: false,
VpcSecurityGroupIds: ['sg-0123456789abcdef0'],
DBSubnetGroupName: 'my-db-subnet-group',
StorageEncrypted: true,
Tags: [
{ Key: 'Environment', Value: 'production' },
{ Key: 'Application', Value: 'api' }
]
});
const response = await rdsClient.send(command);
return response.DBInstance;
};
// Describe database
const describeDatabase = async (dbInstanceId) => {
const command = new DescribeDBInstancesCommand({
DBInstanceIdentifier: dbInstanceId
});
const response = await rdsClient.send(command);
return response.DBInstances[0];
};
CloudFormation
Infrastructure as Code (IaC) service for defining and provisioning AWS resources.
CloudFormation Template
yaml
AWSTemplateFormatVersion: '2010-09-09'
Description: 'Full-stack web application infrastructure'
Parameters:
Environment:
Type: String
Default: production
AllowedValues:
- development
- staging
- production
Resources:
# S3 Bucket for static assets
AssetsBucket:
Type: AWS::S3::Bucket
Properties:
BucketName: !Sub '${AWS::StackName}-assets-${Environment}'
VersioningConfiguration:
Status: Enabled
PublicAccessBlockConfiguration:
BlockPublicAcls: true
BlockPublicPolicy: true
IgnorePublicAcls: true
RestrictPublicBuckets: true
BucketEncryption:
ServerSideEncryptionConfiguration:
- ServerSideEncryptionByDefault:
SSEAlgorithm: AES256
# DynamoDB Table
UsersTable:
Type: AWS::DynamoDB::Table
Properties:
TableName: !Sub '${AWS::StackName}-users-${Environment}'
BillingMode: PAY_PER_REQUEST
AttributeDefinitions:
- AttributeName: userId
AttributeType: S
- AttributeName: email
AttributeType: S
KeySchema:
- AttributeName: userId
KeyType: HASH
GlobalSecondaryIndexes:
- IndexName: EmailIndex
KeySchema:
- AttributeName: email
KeyType: HASH
Projection:
ProjectionType: ALL
StreamSpecification:
StreamViewType: NEW_AND_OLD_IMAGES
# Lambda Execution Role
LambdaExecutionRole:
Type: AWS::IAM::Role
Properties:
AssumeRolePolicyDocument:
Version: '2012-10-17'
Statement:
- Effect: Allow
Principal:
Service: lambda.amazonaws.com
Action: sts:AssumeRole
ManagedPolicyArns:
- arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole
Policies:
- PolicyName: DynamoDBAccess
PolicyDocument:
Version: '2012-10-17'
Statement:
- Effect: Allow
Action:
- dynamodb:GetItem
- dynamodb:PutItem
- dynamodb:UpdateItem
- dynamodb:Query
Resource: !GetAtt UsersTable.Arn
# Lambda Function
ApiFunction:
Type: AWS::Lambda::Function
Properties:
FunctionName: !Sub '${AWS::StackName}-api-${Environment}'
Runtime: nodejs20.x
Handler: index.handler
Role: !GetAtt LambdaExecutionRole.Arn
Code:
ZipFile: |
exports.handler = async (event) => {
return {
statusCode: 200,
body: JSON.stringify({ message: 'Hello from Lambda!' })
};
};
Environment:
Variables:
TABLE_NAME: !Ref UsersTable
BUCKET_NAME: !Ref AssetsBucket
ENVIRONMENT: !Ref Environment
Timeout: 30
MemorySize: 512
# API Gateway
RestApi:
Type: AWS::ApiGateway::RestApi
Properties:
Name: !Sub '${AWS::StackName}-api-${Environment}'
Description: REST API for application
ApiResource:
Type: AWS::ApiGateway::Resource
Properties:
RestApiId: !Ref RestApi
ParentId: !GetAtt RestApi.RootResourceId
PathPart: users
ApiMethod:
Type: AWS::ApiGateway::Method
Properties:
RestApiId: !Ref RestApi
ResourceId: !Ref ApiResource
HttpMethod: GET
AuthorizationType: NONE
Integration:
Type: AWS_PROXY
IntegrationHttpMethod: POST
Uri: !Sub 'arn:aws:apigateway:${AWS::Region}:lambda:path/2015-03-31/functions/${ApiFunction.Arn}/invocations'
ApiDeployment:
Type: AWS::ApiGateway::Deployment
DependsOn: ApiMethod
Properties:
RestApiId: !Ref RestApi
StageName: !Ref Environment
LambdaApiPermission:
Type: AWS::Lambda::Permission
Properties:
FunctionName: !Ref ApiFunction
Action: lambda:InvokeFunction
Principal: apigateway.amazonaws.com
SourceArn: !Sub 'arn:aws:execute-api:${AWS::Region}:${AWS::AccountId}:${RestApi}/*'
Outputs:
ApiUrl:
Description: API Gateway URL
Value: !Sub 'https://${RestApi}.execute-api.${AWS::Region}.amazonaws.com/${Environment}'
Export:
Name: !Sub '${AWS::StackName}-api-url'
BucketName:
Description: S3 Bucket Name
Value: !Ref AssetsBucket
Export:
Name: !Sub '${AWS::StackName}-bucket-name'
TableName:
Description: DynamoDB Table Name
Value: !Ref UsersTable
Export:
Name: !Sub '${AWS::StackName}-table-name'
CloudFormation Operations
javascript
import {
CloudFormationClient,
CreateStackCommand,
DescribeStacksCommand,
UpdateStackCommand,
DeleteStackCommand
} from '@aws-sdk/client-cloudformation';
import { readFileSync } from 'fs';
const cfClient = new CloudFormationClient({ region: 'us-east-1' });
// Create stack
const createStack = async (stackName, templatePath, parameters = {}) => {
const templateBody = readFileSync(templatePath, 'utf8');
const command = new CreateStackCommand({
StackName: stackName,
TemplateBody: templateBody,
Parameters: Object.entries(parameters).map(([key, value]) => ({
ParameterKey: key,
ParameterValue: value
})),
Capabilities: ['CAPABILITY_IAM'],
Tags: [
{ Key: 'ManagedBy', Value: 'CloudFormation' },
{ Key: 'Application', Value: 'MyApp' }
]
});
const response = await cfClient.send(command);
return response.StackId;
};
// Get stack status
const getStackStatus = async (stackName) => {
const command = new DescribeStacksCommand({
StackName: stackName
});
const response = await cfClient.send(command);
const stack = response.Stacks[0];
return {
status: stack.StackStatus,
outputs: stack.Outputs || []
};
};
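Stack outputs come back from `DescribeStacks` as an array of `{ OutputKey, OutputValue }` objects. A small helper (hypothetical name) turns them into a plain map so callers can destructure named values directly:

```javascript
// Convert CloudFormation's Outputs array into a key/value object.
const outputsToMap = (outputs = []) =>
  Object.fromEntries(outputs.map(({ OutputKey, OutputValue }) => [OutputKey, OutputValue]));
```

For example, `const { ApiUrl } = outputsToMap(stack.Outputs);` after fetching the stack with `getStackStatus`.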
Best Practices
Security
IAM Best Practices
- Enable MFA for root and privileged accounts
- Use IAM roles instead of access keys
- Apply least privilege principle
- Rotate credentials regularly
- Use IAM Access Analyzer to identify overly permissive policies
- Enable CloudTrail for audit logging
Data Encryption
- Encrypt data at rest (S3, EBS, RDS, DynamoDB)
- Use SSL/TLS for data in transit
- Store secrets in AWS Secrets Manager or Parameter Store
- Use KMS for encryption key management
Network Security
- Use VPCs for network isolation
- Implement security groups and NACLs
- Enable VPC Flow Logs
- Use AWS WAF for web application protection
- Implement DDoS protection with AWS Shield
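As a concrete illustration of the least-privilege principle, a policy can grant only the DynamoDB reads an application actually needs on one table (the table ARN and account ID below are hypothetical):

```javascript
// Hypothetical least-privilege policy: read-only access to a single table,
// rather than broad grants like dynamodb:* on all resources.
const readOnlyTablePolicy = {
  Version: '2012-10-17',
  Statement: [{
    Effect: 'Allow',
    Action: ['dynamodb:GetItem', 'dynamodb:Query'],
    Resource: 'arn:aws:dynamodb:us-east-1:123456789012:table/Users'
  }]
};
```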
Cost Optimization
Compute
- Use Auto Scaling to match capacity to demand
- Choose appropriate instance types and sizes
- Use Spot Instances for fault-tolerant workloads
- Leverage Lambda for event-driven workloads
- Use Savings Plans and Reserved Instances for steady-state workloads
Storage
- Implement S3 lifecycle policies
- Use appropriate storage classes
- Enable S3 Intelligent-Tiering
- Delete unused EBS volumes and snapshots
- Use compression and deduplication
Database
- Right-size database instances
- Use read replicas to offload read traffic
- Enable DynamoDB auto-scaling
- Use Aurora Serverless for variable workloads
- Archive old data to cheaper storage
Performance
Application Design
- Implement caching (ElastiCache, CloudFront)
- Use content delivery networks (CloudFront)
- Optimize database queries and indexes
- Implement connection pooling
- Use async/parallel operations
Monitoring and Optimization
- Use CloudWatch for metrics and alarms
- Implement X-Ray for distributed tracing
- Set up performance budgets
- Conduct regular performance testing
- Use AWS Compute Optimizer recommendations
Reliability
High Availability
- Deploy across multiple Availability Zones
- Use Auto Scaling for automatic recovery
- Implement health checks and automatic failover
- Use Route 53 for DNS-based failover
- Design for graceful degradation
Disaster Recovery
- Implement automated backups
- Test recovery procedures regularly
- Use multi-region replication for critical data
- Document recovery time objectives (RTO) and recovery point objectives (RPO)
- Implement chaos engineering practices
Error Handling
- Implement retry logic with exponential backoff
- Use dead letter queues for failed messages
- Set up CloudWatch alarms for errors
- Implement circuit breakers
- Log errors comprehensively
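The retry guidance above can be sketched as a generic wrapper with exponential backoff and full jitter. Note that the AWS SDKs already retry throttled calls internally; a hand-rolled version like this one (hypothetical names, illustrative defaults) is mainly useful for your own operations:

```javascript
// Retry an async operation with exponential backoff and full jitter.
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

const withRetry = async (fn, { retries = 5, baseMs = 100, capMs = 2000 } = {}) => {
  let lastError;
  for (let attempt = 0; attempt <= retries; attempt++) {
    try {
      return await fn();
    } catch (error) {
      lastError = error;
      if (attempt === retries) break;
      // Full jitter: random delay in [0, min(cap, base * 2^attempt))
      await sleep(Math.random() * Math.min(capMs, baseMs * 2 ** attempt));
    }
  }
  throw lastError;
};
```

Usage: `await withRetry(() => docClient.send(command))` wraps a call so transient failures are retried before the error is surfaced.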
Operational Excellence
Infrastructure as Code
- Version control all infrastructure code
- Use CloudFormation or CDK for resource provisioning
- Implement CI/CD for infrastructure changes
- Use stack sets for multi-account/region deployments
- Validate templates before deployment
Monitoring and Logging
- Centralize logs with CloudWatch Logs
- Set up custom metrics and dashboards
- Implement log aggregation and analysis
- Use CloudTrail for API auditing
- Set up alarms for critical metrics
Automation
- Automate deployments with CodePipeline
- Use Systems Manager for patch management
- Implement automated scaling policies
- Use Lambda for operational tasks
- Automate backup and recovery procedures
Skill Version: 1.0.0
Last Updated: October 2025
Skill Category: Cloud Infrastructure, Serverless, Database, DevOps
Compatible With: AWS SDK v3, CloudFormation, AWS CLI, Terraform