aws-cloudformation-s3

AWS CloudFormation S3 Patterns

Create production-ready Amazon S3 infrastructure using AWS CloudFormation templates. This skill covers S3 bucket configurations, bucket policies, versioning, lifecycle rules, and template structure best practices.

When to Use

Use this skill when:
  • Creating S3 buckets with custom configurations
  • Implementing bucket policies for access control
  • Configuring S3 versioning for data protection
  • Setting up lifecycle rules for data management
  • Creating Outputs for cross-stack references
  • Using Parameters with AWS-specific types
  • Organizing templates with Mappings and Conditions
  • Building reusable CloudFormation templates for S3

Quick Start

Basic S3 Bucket

```yaml
AWSTemplateFormatVersion: "2010-09-09"
Description: Simple S3 bucket with default settings

Resources:
  DataBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: my-data-bucket  # bucket names must be globally unique
      Tags:
        - Key: Environment
          Value: production
        - Key: Project
          Value: my-project
```

S3 Bucket with Versioning and Logging

```yaml
AWSTemplateFormatVersion: "2010-09-09"
Description: S3 bucket with versioning and access logging

Parameters:
  BucketName:
    Type: String
    Description: Name of the S3 bucket

Resources:
  DataBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: !Ref BucketName
      VersioningConfiguration:
        Status: Enabled
      LoggingConfiguration:
        DestinationBucketName: !Ref AccessLogBucket
        LogFilePrefix: logs/
      Tags:
        - Key: Name
          Value: !Ref BucketName

  AccessLogBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: !Sub ${BucketName}-logs
      # LogDeliveryWrite uses ACLs, which new buckets disable by default
      OwnershipControls:
        Rules:
          - ObjectOwnership: ObjectWriter
      AccessControl: LogDeliveryWrite

Outputs:
  BucketName:
    Description: Name of the S3 bucket
    Value: !Ref DataBucket

  BucketArn:
    Description: ARN of the S3 bucket
    Value: !GetAtt DataBucket.Arn
```

Template Structure

Template Sections Overview

AWS CloudFormation templates are JSON or YAML files with specific sections, each serving a purpose in defining your infrastructure. Section order matters for readability, but CloudFormation accepts the sections in any order.

```yaml
AWSTemplateFormatVersion: "2010-09-09"    # Optional - template format version
Description: Optional description string  # Optional description
Metadata: {}     # Additional information about the template and resources
Parameters: {}   # Input values for customization
Rules: {}        # Parameter validation rules
Mappings: {}     # Static configuration tables
Conditions: {}   # Conditional resource creation
Transform: {}    # Macro processing (e.g., AWS::Serverless)
Resources: {}    # AWS resources to create (REQUIRED)
Outputs: {}      # Return values after stack creation
```

Format Version

The AWSTemplateFormatVersion section identifies the template format version. The latest (and currently only) version is 2010-09-09; if the section is omitted, CloudFormation assumes this version.

```yaml
AWSTemplateFormatVersion: "2010-09-09"
Description: My S3 CloudFormation Template
```

Description

Add a description to document the template's purpose. It must appear directly after the format version and must be a literal string (no intrinsic functions).

```yaml
AWSTemplateFormatVersion: "2010-09-09"
Description: >
  This template creates an S3 bucket with versioning enabled
  for data protection. It includes:
  - Bucket with versioning configuration
  - Lifecycle rules for data retention
  - Server access logging
```

Metadata

Use the Metadata section for additional information about resources or parameters, such as console parameter grouping.

```yaml
Metadata:
  AWS::CloudFormation::Interface:
    ParameterGroups:
      - Label:
          default: Bucket Configuration
        Parameters:
          - BucketName
          - EnableVersioning
      - Label:
          default: Lifecycle Rules
        Parameters:
          - RetentionDays
    ParameterLabels:
      BucketName:
        default: Bucket Name
      EnableVersioning:
        default: Enable Versioning
```

Resources Section

The Resources section is the only required section. It defines the AWS resources to provision.

```yaml
Resources:
  DataBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: my-data-bucket
      VersioningConfiguration:
        Status: Enabled
      PublicAccessBlockConfiguration:
        BlockPublicAcls: true
        BlockPublicPolicy: true
        IgnorePublicAcls: true
        RestrictPublicBuckets: true
```

Parameters

Parameter Types

Use AWS-specific parameter types (for example, AWS::EC2::VPC::Id or AWS::EC2::KeyPair::KeyName) where they exist; they give you validation and dropdown selection in the console. There is no AWS-specific type for S3 buckets, so bucket names are declared as String parameters with constraints.

```yaml
Parameters:
  ExistingBucketName:
    Type: String
    Description: Name of an existing S3 bucket
    MinLength: 3
    MaxLength: 63
    AllowedPattern: ^[a-z0-9][a-z0-9.-]*[a-z0-9]$

  BucketNamePrefix:
    Type: String
    Description: Prefix for new bucket names
```

SSM Parameter Types

Reference Systems Manager Parameter Store values for configuration that changes independently of the template.

```yaml
Parameters:
  LatestBucketPolicy:
    Type: AWS::SSM::Parameter::Value<String>
    Description: Latest bucket policy from SSM Parameter Store
    Default: /s3/bucket-policy/latest
```

Parameter Constraints

Add constraints to validate parameter values before stack creation.

```yaml
Parameters:
  BucketName:
    Type: String
    Description: Name of the S3 bucket
    Default: my-bucket
    MinLength: 3
    MaxLength: 63
    AllowedPattern: ^[a-z0-9][a-z0-9-]*[a-z0-9]$
    ConstraintDescription: Bucket names may contain only lowercase letters, numbers, and hyphens

  RetentionDays:
    Type: Number
    Description: Number of days to retain objects
    Default: 30
    MinValue: 1
    MaxValue: 365
    ConstraintDescription: Must be between 1 and 365 days

  Environment:
    Type: String
    Description: Deployment environment
    Default: development
    AllowedValues:
      - development
      - staging
      - production
    ConstraintDescription: Must be development, staging, or production
```
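
An AllowedPattern like the one above can be checked locally before a deploy. A minimal Python sketch (the pattern is copied from the template; the helper name is ours):

```python
import re

# Same pattern as the BucketName parameter's AllowedPattern above
BUCKET_NAME_PATTERN = re.compile(r"^[a-z0-9][a-z0-9-]*[a-z0-9]$")

def is_valid_bucket_name(name: str) -> bool:
    """Mirror the template's constraints: the pattern plus the 3-63 length rule."""
    return 3 <= len(name) <= 63 and BUCKET_NAME_PATTERN.fullmatch(name) is not None

print(is_valid_bucket_name("my-bucket"))      # True
print(is_valid_bucket_name("My_Bucket"))      # False: uppercase and underscore
print(is_valid_bucket_name("-leading-dash"))  # False: must start alphanumeric
```

Running this kind of check in CI catches bad names before CloudFormation rejects the stack.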

Mappings

Use the Mappings section for static configuration data keyed by region, environment, or other dimensions, and read values with Fn::FindInMap.

```yaml
Mappings:
  RegionConfig:
    us-east-1:
      BucketPrefix: us-east-1
    us-west-2:
      BucketPrefix: us-west-2
    eu-west-1:
      BucketPrefix: eu-west-1

  EnvironmentSettings:
    development:
      VersioningStatus: Suspended
      RetentionDays: 7
    staging:
      VersioningStatus: Enabled
      RetentionDays: 30
    production:
      VersioningStatus: Enabled
      RetentionDays: 90

Resources:
  DataBucket:
    Type: AWS::S3::Bucket
    Properties:
      # Fn::Sub cannot read a mapping value directly, so pass the
      # FindInMap result in as a named substitution variable
      BucketName: !Sub
        - ${BucketPrefix}-${Environment}-data
        - BucketPrefix: !FindInMap [RegionConfig, !Ref "AWS::Region", BucketPrefix]
      VersioningConfiguration:
        Status: !FindInMap [EnvironmentSettings, !Ref Environment, VersioningStatus]
```
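
Fn::FindInMap is just a two-level key lookup, which can be mimicked in Python to sanity-check mapping data (the dict mirrors the EnvironmentSettings mapping above; the helper name is ours):

```python
# Mirrors the EnvironmentSettings mapping from the template
ENVIRONMENT_SETTINGS = {
    "development": {"VersioningStatus": "Suspended", "RetentionDays": 7},
    "staging":     {"VersioningStatus": "Enabled",   "RetentionDays": 30},
    "production":  {"VersioningStatus": "Enabled",   "RetentionDays": 90},
}

def find_in_map(mapping: dict, top_key: str, second_key: str):
    """Resolve a value the way Fn::FindInMap [MapName, TopLevelKey, SecondLevelKey] does."""
    return mapping[top_key][second_key]

print(find_in_map(ENVIRONMENT_SETTINGS, "production", "VersioningStatus"))  # Enabled
print(find_in_map(ENVIRONMENT_SETTINGS, "development", "RetentionDays"))    # 7
```

A missing key raises KeyError here, just as CloudFormation fails the stack when a FindInMap key does not exist.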

Conditions

Use the Conditions section to create resources, or select property values, based on parameters. Quote "true"/"false" so YAML treats them as strings, matching the String parameter type.

```yaml
Parameters:
  EnableVersioning:
    Type: String
    Default: "true"
    AllowedValues:
      - "true"
      - "false"

  Environment:
    Type: String
    Default: development
    AllowedValues:
      - development
      - staging
      - production

  CreateLifecycleRule:
    Type: String
    Default: "true"
    AllowedValues:
      - "true"
      - "false"

Conditions:
  ShouldEnableVersioning: !Equals [!Ref EnableVersioning, "true"]
  IsProduction: !Equals [!Ref Environment, production]
  ShouldCreateLifecycle: !Equals [!Ref CreateLifecycleRule, "true"]

Resources:
  DataBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: !Sub ${Environment}-data-bucket
      VersioningConfiguration:
        Status: !If [ShouldEnableVersioning, Enabled, Suspended]

  LifecycleBucket:
    Type: AWS::S3::Bucket
    Condition: ShouldCreateLifecycle
    Properties:
      BucketName: !Sub ${Environment}-lifecycle-bucket
      LifecycleConfiguration:
        Rules:
          - Status: Enabled
            ExpirationInDays: !If
              - IsProduction
              - 90
              - 30
            NoncurrentVersionExpirationInDays: 30
```
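
The condition functions reduce to plain boolean logic. A sketch of how the two !If expressions above resolve (function names are ours, not CloudFormation's):

```python
def resolve_versioning_status(enable_versioning: str) -> str:
    """!If [ShouldEnableVersioning, Enabled, Suspended]"""
    should_enable = enable_versioning == "true"  # !Equals [!Ref EnableVersioning, "true"]
    return "Enabled" if should_enable else "Suspended"

def expiration_days(environment: str) -> int:
    """!If [IsProduction, 90, 30]"""
    return 90 if environment == "production" else 30

print(resolve_versioning_status("true"))  # Enabled
print(expiration_days("staging"))         # 30
```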

Transform

Use the Transform section to apply macros, such as AWS::Serverless-2016-10-31 for SAM templates.

```yaml
AWSTemplateFormatVersion: "2010-09-09"
Transform: AWS::Serverless-2016-10-31
Description: SAM template with an S3 bucket trigger

Resources:
  ThumbnailFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: index.handler
      Runtime: python3.12
      CodeUri: function/
      Events:
        ImageUpload:
          Type: S3
          Properties:
            Bucket: !Ref ImageBucket
            Events: s3:ObjectCreated:*

  ImageBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: !Sub ${AWS::StackName}-images
```

Outputs and Cross-Stack References

Basic Outputs

```yaml
Outputs:
  BucketName:
    Description: Name of the S3 bucket
    Value: !Ref DataBucket

  BucketArn:
    Description: ARN of the S3 bucket
    Value: !GetAtt DataBucket.Arn

  BucketDomainName:
    Description: Domain name of the S3 bucket
    Value: !GetAtt DataBucket.DomainName

  BucketWebsiteURL:
    Description: Website URL for the S3 bucket
    Value: !GetAtt DataBucket.WebsiteURL  # only meaningful with WebsiteConfiguration set
```

Exporting Values for Cross-Stack References

Export values so other stacks can import them. Export names must be unique within an account and region, and an export cannot be removed while another stack imports it.

```yaml
Outputs:
  BucketName:
    Description: Bucket name for other stacks
    Value: !Ref DataBucket
    Export:
      Name: !Sub ${AWS::StackName}-BucketName

  BucketArn:
    Description: Bucket ARN for other stacks
    Value: !GetAtt DataBucket.Arn
    Export:
      Name: !Sub ${AWS::StackName}-BucketArn

  BucketRegion:
    Description: Bucket region
    Value: !Ref AWS::Region
    Export:
      Name: !Sub ${AWS::StackName}-BucketRegion
```

Importing Values in Another Stack

```yaml
Parameters:
  DataBucketName:
    Type: String
    Description: Data bucket name from the data stack
    # Passed in explicitly, e.g. copied from the data stack's outputs

# Or use Fn::ImportValue for a direct cross-stack reference
Resources:
  BucketAccessRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Principal:
              Service: lambda.amazonaws.com
            Action: sts:AssumeRole
      Policies:
        - PolicyName: S3Access
          PolicyDocument:
            Statement:
              - Effect: Allow
                Action:
                  - s3:GetObject
                  - s3:PutObject
                Resource: !Sub
                  - ${BucketArn}/*
                  - BucketArn: !ImportValue data-stack-BucketArn
```

Cross-Stack Reference Pattern

Create a dedicated data storage stack that exports values:

storage-stack.yaml

```yaml
AWSTemplateFormatVersion: "2010-09-09"
Description: S3 storage infrastructure stack

Resources:
  DataBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: !Sub ${AWS::StackName}-data
      VersioningConfiguration:
        Status: Enabled
      PublicAccessBlockConfiguration:
        BlockPublicAcls: true
        BlockPublicPolicy: true
        IgnorePublicAcls: true
        RestrictPublicBuckets: true

  LogBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: !Sub ${AWS::StackName}-logs
      AccessControl: LogDeliveryWrite

Outputs:
  DataBucketName:
    Value: !Ref DataBucket
    Export:
      Name: !Sub ${AWS::StackName}-DataBucketName

  DataBucketArn:
    Value: !GetAtt DataBucket.Arn
    Export:
      Name: !Sub ${AWS::StackName}-DataBucketArn

  LogBucketName:
    Value: !Ref LogBucket
    Export:
      Name: !Sub ${AWS::StackName}-LogBucketName
```

The application stack imports these values (the ApplicationDomain parameter and the logging import are added here to make the example self-contained):

application-stack.yaml

```yaml
AWSTemplateFormatVersion: "2010-09-09"
Description: Application stack that imports from the storage stack

Parameters:
  StorageStackName:
    Type: String
    Description: Name of the storage stack
    Default: storage-stack

  ApplicationDomain:
    Type: String
    Description: Origin allowed to access the bucket over CORS

Resources:
  ApplicationBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: !Sub ${AWS::StackName}-application
      # Ship access logs to the bucket exported by the storage stack
      LoggingConfiguration:
        DestinationBucketName: !ImportValue
          Fn::Sub: ${StorageStackName}-LogBucketName
        LogFilePrefix: application/
      CorsConfiguration:
        CorsRules:
          - AllowedHeaders:
              - "*"
            AllowedMethods:
              - GET
              - PUT
              - POST
            AllowedOrigins:
              - !Ref ApplicationDomain
            MaxAge: 3600
```

S3 Bucket Configuration

Bucket with Public Access Block

```yaml
Resources:
  SecureBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: my-secure-bucket
      PublicAccessBlockConfiguration:
        BlockPublicAcls: true
        BlockPublicPolicy: true
        IgnorePublicAcls: true
        RestrictPublicBuckets: true
```

Bucket with Versioning

```yaml
Resources:
  VersionedBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: my-versioned-bucket
      VersioningConfiguration:
        Status: Enabled
        # MFA delete cannot be enabled here: VersioningConfiguration only
        # supports Status. It must be set by the root user via the CLI/API.
```

Bucket with Lifecycle Rules

```yaml
Resources:
  LifecycleBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: my-lifecycle-bucket
      LifecycleConfiguration:
        Rules:
          # Expire short-lived objects after 30 days; scoped by prefix so
          # this rule does not expire objects before the archive rule applies
          - Id: ExpireOldObjects
            Status: Enabled
            Prefix: tmp/
            ExpirationInDays: 30
            NoncurrentVersionExpirationInDays: 7
          # Archive long-lived objects to Glacier after 90 days
          - Id: ArchiveToGlacier
            Status: Enabled
            Prefix: archive/
            Transitions:
              - Days: 90
                StorageClass: GLACIER
              - Days: 365
                StorageClass: DEEP_ARCHIVE
            NoncurrentVersionTransitions:
              - NoncurrentDays: 30
                StorageClass: GLACIER
```
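
The transitions in the ArchiveToGlacier rule are age thresholds. A simplified sketch of which storage class a current object has reached at a given age (the helper is ours; real S3 lifecycle also evaluates noncurrent versions and filters separately):

```python
# Thresholds from the ArchiveToGlacier rule above, latest first
TRANSITIONS = [(365, "DEEP_ARCHIVE"), (90, "GLACIER"), (0, "STANDARD")]

def storage_class_for_age(age_days: int) -> str:
    """Return the storage class a current object has reached under this rule."""
    for threshold, storage_class in TRANSITIONS:
        if age_days >= threshold:
            return storage_class
    return "STANDARD"

print(storage_class_for_age(10))   # STANDARD
print(storage_class_for_age(120))  # GLACIER
print(storage_class_for_age(400))  # DEEP_ARCHIVE
```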

Bucket with Cross-Region Replication

```yaml
Resources:
  SourceBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: my-source-bucket
      VersioningConfiguration:
        Status: Enabled  # versioning is required on both source and destination
      ReplicationConfiguration:
        Role: !GetAtt ReplicationRole.Arn
        Rules:
          - Id: ReplicateToDestRegion
            Status: Enabled
            Destination:
              # The destination bucket must already exist in the target region
              Bucket: arn:aws:s3:::my-dest-bucket
              StorageClass: STANDARD_IA
              EncryptionConfiguration:
                ReplicaKmsKeyID: !Ref DestKMSKey

  # In a real template, also attach a policy granting the role
  # s3:GetReplicationConfiguration, s3:ListBucket, and s3:GetObjectVersion*
  # on the source plus s3:Replicate* on the destination (omitted for brevity)
  ReplicationRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Principal:
              Service: s3.amazonaws.com
            Action: sts:AssumeRole
```

Bucket Policies

Bucket Policy for Private Access

```yaml
Resources:
  PrivateBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: my-private-bucket
      PublicAccessBlockConfiguration:
        BlockPublicAcls: true
        BlockPublicPolicy: true
        IgnorePublicAcls: true
        RestrictPublicBuckets: true

  BucketPolicy:
    Type: AWS::S3::BucketPolicy
    Properties:
      Bucket: !Ref PrivateBucket
      PolicyDocument:
        Version: "2012-10-17"
        Statement:
          # Deny reads over plain HTTP; public access is already blocked above
          - Sid: DenyInsecureTransport
            Effect: Deny
            Principal: "*"
            Action:
              - s3:GetObject
            Resource: !Sub ${PrivateBucket.Arn}/*
            Condition:
              Bool:
                aws:SecureTransport: "false"
```

Bucket Policy for CloudFront OAI

Origin access identities (OAI) are the legacy way to restrict bucket access to CloudFront; new distributions should prefer origin access control (OAC), but the OAI pattern is still widely deployed:

```yaml
Resources:
  StaticWebsiteBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: my-static-website
      WebsiteConfiguration:
        IndexDocument: index.html
        ErrorDocument: error.html

  BucketPolicy:
    Type: AWS::S3::BucketPolicy
    Properties:
      Bucket: !Ref StaticWebsiteBucket
      PolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Sid: CloudFrontReadAccess
            Effect: Allow
            Principal:
              CanonicalUser: !GetAtt CloudFrontOAI.S3CanonicalUserId
            Action: s3:GetObject
            Resource: !Sub ${StaticWebsiteBucket.Arn}/*

  CloudFrontOAI:
    Type: AWS::CloudFront::CloudFrontOriginAccessIdentity
    Properties:
      CloudFrontOriginAccessIdentityConfig:
        Comment: !Sub ${AWS::StackName}-oai
```

Bucket Policy for VPC Endpoint

```yaml
Resources:
  PrivateBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: my-private-bucket

  BucketPolicy:
    Type: AWS::S3::BucketPolicy
    Properties:
      Bucket: !Ref PrivateBucket
      PolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Sid: AllowVPCEndpoint
            Effect: Allow
            Principal: "*"
            Action: s3:GetObject
            Resource: !Sub ${PrivateBucket.Arn}/*
            Condition:
              StringEquals:
                aws:SourceVpce: !Ref VPCEndpointId
```

Complete S3 Bucket Example

yaml
AWSTemplateFormatVersion: 2010-09-09
Description: Production-ready S3 bucket with versioning, logging, and lifecycle

Parameters:
  BucketName:
    Type: String
    Description: Name of the S3 bucket

  Environment:
    Type: String
    Default: production
    AllowedValues:
      - development
      - staging
      - production

  EnableVersioning:
    Type: String
    Default: true
    AllowedValues:
      - true
      - false

  RetentionDays:
    Type: Number
    Default: 90
    Description: Days to retain objects

Conditions:
  ShouldEnableVersioning: !Equals [!Ref EnableVersioning, true]
  IsProduction: !Equals [!Ref Environment, production]

Resources:
  DataBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: !Ref BucketName
      VersioningConfiguration:
        Status: !If [ShouldEnableVersioning, Enabled, Suspended]
      PublicAccessBlockConfiguration:
        BlockPublicAcls: true
        BlockPublicPolicy: true
        IgnorePublicAcls: true
        RestrictPublicBuckets: true
      LoggingConfiguration:
        DestinationBucketName: !Ref AccessLogBucket
        LogFilePrefix: !Sub ${BucketName}/logs/
      LifecycleConfiguration:
        Rules:
          - Id: StandardLifecycle
            Status: Enabled
            ExpirationInDays: !Ref RetentionDays
            NoncurrentVersionExpirationInDays: 30
            Transitions:
              - Days: 30
                StorageClass: STANDARD_IA
              - Days: 90
                StorageClass: GLACIER
      Tags:
        - Key: Environment
          Value: !Ref Environment
        - Key: Name
          Value: !Ref BucketName

  AccessLogBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: !Sub ${BucketName}-logs
      AccessControl: LogDeliveryWrite
      PublicAccessBlockConfiguration:
        BlockPublicAcls: true
        BlockPublicPolicy: true
        IgnorePublicAcls: true
        RestrictPublicBuckets: true
      LifecycleConfiguration:
        Rules:
          - Id: DeleteLogsAfter30Days
            Status: Enabled
            ExpirationInDays: 30

Outputs:
  BucketName:
    Description: Name of the S3 bucket
    Value: !Ref DataBucket

  BucketArn:
    Description: ARN of the S3 bucket
    Value: !GetAtt DataBucket.Arn

  BucketDomainName:
    Description: Domain name of the S3 bucket
    Value: !GetAtt DataBucket.DomainName

  BucketWebsiteURL:
    Description: Website URL for the S3 bucket
    Value: !GetAtt DataBucket.WebsiteURL

  LogBucketName:
    Description: Name of the access log bucket
    Value: !Ref AccessLogBucket
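
The Outputs above are visible only within this stack. To share a value with other stacks, attach an `Export` name and consume it elsewhere with `Fn::ImportValue`. A minimal sketch — the `${AWS::StackName}-BucketArn` naming convention here is illustrative, not part of the template above:

```yaml
Outputs:
  BucketArn:
    Description: ARN of the S3 bucket
    Value: !GetAtt DataBucket.Arn
    Export:
      Name: !Sub ${AWS::StackName}-BucketArn
```

A consuming template can then reference the value with `!ImportValue my-s3-stack-BucketArn` (assuming the exporting stack is named `my-s3-stack`). Note that a stack cannot be deleted while another stack imports one of its exports.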

CloudFormation Best Practices

Stack Policies

Stack Policies protect stack resources from unintentional updates that could cause disruption or data loss. Use them to prevent accidental modifications to critical resources.

Setting a Stack Policy

json
{
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "Update:*",
      "Principal": "*",
      "Resource": "*"
    },
    {
      "Effect": "Deny",
      "Action": [
        "Update:Replace",
        "Update:Delete"
      ],
      "Principal": "*",
      "Resource": "LogicalResourceId/DataBucket"
    },
    {
      "Effect": "Deny",
      "Action": "Update:*",
      "Principal": "*",
      "Resource": "LogicalResourceId/AccessLogBucket",
      "Condition": {
        "StringEquals": {
          "ResourceType": ["AWS::S3::Bucket"]
        }
      }
    }
  ]
}
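
Stack policy documents are plain JSON, so when many stacks need the same protections they can be generated programmatically instead of hand-edited. A minimal sketch — the `build_stack_policy` helper and its statement layout are illustrative, not an AWS API:

```python
import json

def build_stack_policy(protected_ids):
    """Build a stack policy that allows all updates except
    Replace/Delete on the given logical resource IDs."""
    statements = [{
        "Effect": "Allow",
        "Action": "Update:*",
        "Principal": "*",
        "Resource": "*",
    }]
    for logical_id in protected_ids:
        statements.append({
            "Effect": "Deny",
            "Action": ["Update:Replace", "Update:Delete"],
            "Principal": "*",
            "Resource": f"LogicalResourceId/{logical_id}",
        })
    return json.dumps({"Statement": statements}, indent=2)

policy = build_stack_policy(["DataBucket", "AccessLogBucket"])
print(policy)
```

The resulting string can be passed to `set-stack-policy` via `--stack-policy-body`.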

Applying Stack Policy via AWS CLI

bash
aws cloudformation set-stack-policy \
  --stack-name my-s3-stack \
  --stack-policy-body file://stack-policy.json

Stack Policy for Production Environment

json
{
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["Update:Modify", "Update:Replace", "Update:Delete"],
      "Principal": "*",
      "Resource": "*",
      "Condition": {
        "StringEquals": {
          "ResourceType": ["AWS::S3::Bucket"]
        }
      }
    },
    {
      "Effect": "Deny",
      "Action": "Update:Delete",
      "Principal": "*",
      "Resource": "*"
    }
  ]
}

Termination Protection

Termination Protection prevents accidental deletion of CloudFormation stacks. Always enable it for production stacks.

Enabling Termination Protection

bash
# Enable termination protection when creating a stack
aws cloudformation create-stack \
  --stack-name my-s3-stack \
  --template-body file://template.yaml \
  --enable-termination-protection

# Enable termination protection on an existing stack
aws cloudformation update-termination-protection \
  --stack-name my-s3-stack \
  --enable-termination-protection

# Disable termination protection
aws cloudformation update-termination-protection \
  --stack-name my-s3-stack \
  --no-enable-termination-protection

Termination Protection in SDK (Python)

python
import boto3
from botocore.exceptions import ClientError

def enable_termination_protection(stack_name):
    cfn = boto3.client('cloudformation')
    try:
        # The call is idempotent: enabling protection that is
        # already enabled succeeds without error.
        cfn.update_termination_protection(
            StackName=stack_name,
            EnableTerminationProtection=True
        )
        print(f"Termination protection enabled for stack: {stack_name}")
    except ClientError as e:
        print(f"Failed to enable termination protection for {stack_name}: {e}")
        raise

Verification Script

bash
#!/bin/bash
# verify-termination-protection.sh

STACK_NAME=$1
if [ -z "$STACK_NAME" ]; then
  echo "Usage: $0 <stack-name>"
  exit 1
fi

# DescribeStacks reports the flag as EnableTerminationProtection
STATUS=$(aws cloudformation describe-stacks \
  --stack-name "$STACK_NAME" \
  --query 'Stacks[0].EnableTerminationProtection' \
  --output text)

if [ "$STATUS" = "True" ]; then
  echo "Termination protection is ENABLED for $STACK_NAME"
  exit 0
else
  echo "WARNING: Termination protection is DISABLED for $STACK_NAME"
  exit 1
fi
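
When auditing many stacks from Python rather than the CLI, the same check reduces to a pure function over a `DescribeStacks` response, which can be unit-tested without AWS access. A sketch — the stubbed `sample` response is illustrative; wiring it to `boto3.client('cloudformation').describe_stacks(...)` is left out:

```python
def termination_protection_enabled(describe_stacks_response):
    """Return True if the first stack in a DescribeStacks
    response has termination protection enabled."""
    stacks = describe_stacks_response.get("Stacks", [])
    if not stacks:
        raise ValueError("no stacks in response")
    # DescribeStacks reports the flag as EnableTerminationProtection
    return bool(stacks[0].get("EnableTerminationProtection", False))

# Example with a stubbed response:
sample = {"Stacks": [{"StackName": "my-s3-stack",
                      "EnableTerminationProtection": True}]}
print(termination_protection_enabled(sample))
```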

Drift Detection

Drift Detection identifies differences between the actual infrastructure and the CloudFormation template. Run it regularly to ensure compliance.

Detecting Drift

bash
# Detect drift on a single stack (the CLI command is detect-stack-drift)
aws cloudformation detect-stack-drift \
  --stack-name my-s3-stack

# Detect drift and get detailed results
STACK_NAME="my-s3-stack"

# Start drift detection and capture the detection ID
DETECTION_ID=$(aws cloudformation detect-stack-drift \
  --stack-name "$STACK_NAME" \
  --query 'StackDriftDetectionId' \
  --output text)

# Wait for drift detection to complete (there is no CLI waiter
# for drift detection, so poll the detection status)
while true; do
  DETECTION_STATUS=$(aws cloudformation describe-stack-drift-detection-status \
    --stack-drift-detection-id "$DETECTION_ID" \
    --query 'DetectionStatus' \
    --output text)
  [ "$DETECTION_STATUS" != "DETECTION_IN_PROGRESS" ] && break
  sleep 5
done

# Get drift detection status
STATUS=$(aws cloudformation describe-stack-drift-detection-status \
  --stack-drift-detection-id "$DETECTION_ID" \
  --query 'StackDriftStatus' \
  --output text)
echo "Stack drift status: $STATUS"

# Get detailed drift information
if [ "$STATUS" = "DRIFTED" ]; then
  aws cloudformation describe-stack-resource-drifts \
    --stack-name "$STACK_NAME" \
    --query 'StackResourceDrifts[*].[LogicalResourceId,ResourceType,StackResourceDriftStatus]' \
    --output table
fi

Drift Detection Script with Reporting

bash
#!/bin/bash
# detect-drift.sh

STACK_NAME=$1
REPORT_FILE="drift-report-${STACK_NAME}-$(date +%Y%m%d).json"

if [ -z "$STACK_NAME" ]; then
  echo "Usage: $0 <stack-name> [report-file]"
  exit 1
fi

if [ -n "$2" ]; then
  REPORT_FILE=$2
fi

echo "Starting drift detection for stack: $STACK_NAME"

# Start drift detection
DETECTION_ID=$(aws cloudformation detect-stack-drift \
  --stack-name "$STACK_NAME" \
  --query 'StackDriftDetectionId' \
  --output text)
echo "Drift detection initiated. Detection ID: $DETECTION_ID"

# Wait for completion by polling (no CLI waiter exists for drift detection)
echo "Waiting for drift detection to complete..."
while true; do
  DETECTION_STATUS=$(aws cloudformation describe-stack-drift-detection-status \
    --stack-drift-detection-id "$DETECTION_ID" \
    --query 'DetectionStatus' \
    --output text)
  [ "$DETECTION_STATUS" != "DETECTION_IN_PROGRESS" ] && break
  sleep 5
done

# Get detection status
DRIFT_STATUS=$(aws cloudformation describe-stack-drift-detection-status \
  --stack-drift-detection-id "$DETECTION_ID" \
  --query 'StackDriftStatus' \
  --output text)
echo "Drift status: $DRIFT_STATUS"

# Get detailed results
if [ "$DRIFT_STATUS" = "DRIFTED" ]; then
  echo "Resources with drift detected:"
  aws cloudformation describe-stack-resource-drifts \
    --stack-name "$STACK_NAME" \
    --output json > "$REPORT_FILE"
  echo "Drift report saved to: $REPORT_FILE"

  # Display summary of modified resources
  aws cloudformation describe-stack-resource-drifts \
    --stack-name "$STACK_NAME" \
    --query 'StackResourceDrifts[?StackResourceDriftStatus==`MODIFIED`].[LogicalResourceId,ResourceType]' \
    --output table
else
  echo "No drift detected. Stack is in sync with template."
  echo "{}" > "$REPORT_FILE"
fi

Drift Detection for Multiple Stacks

python
import time
import boto3
from datetime import datetime, timezone

def detect_drift_all_stacks(prefix="prod-"):
    cfn = boto3.client('cloudformation')

    # List all stacks with the given prefix (paginate so no stack is missed)
    paginator = cfn.get_paginator('list_stacks')
    stacks = []
    for page in paginator.paginate(
        StackStatusFilter=['CREATE_COMPLETE', 'UPDATE_COMPLETE']
    ):
        stacks.extend(page['StackSummaries'])

    target_stacks = [s for s in stacks if s['StackName'].startswith(prefix)]

    drift_results = []

    for stack in target_stacks:
        stack_name = stack['StackName']
        print(f"Checking drift for: {stack_name}")

        # Start drift detection
        detection_id = cfn.detect_stack_drift(
            StackName=stack_name
        )['StackDriftDetectionId']

        # Poll for completion (boto3 has no waiter for drift detection)
        while True:
            status = cfn.describe_stack_drift_detection_status(
                StackDriftDetectionId=detection_id
            )
            if status['DetectionStatus'] != 'DETECTION_IN_PROGRESS':
                break
            time.sleep(5)

        drift_results.append({
            'stack_name': stack_name,
            'drift_status': status['StackDriftStatus'],
            'detection_time': datetime.now(timezone.utc).isoformat()
        })

        if status['StackDriftStatus'] == 'DRIFTED':
            # Get detailed drift info
            resources = cfn.describe_stack_resource_drifts(
                StackName=stack_name
            )['StackResourceDrifts']
            drift_results[-1]['drifted_resources'] = [
                {
                    'logical_id': r['LogicalResourceId'],
                    'type': r['ResourceType'],
                    'status': r['StackResourceDriftStatus']
                } for r in resources
            ]

    return drift_results
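
The per-resource records returned by `describe_stack_resource_drifts` are easy to roll up into a report, and keeping that step as a pure function makes it testable without AWS access. A sketch — the record shape follows the API's `StackResourceDrifts` list, while the summary keys are illustrative:

```python
from collections import Counter

def summarize_drift(resource_drifts):
    """Count drift records by StackResourceDriftStatus
    (IN_SYNC, MODIFIED, DELETED, NOT_CHECKED) and list drifted IDs."""
    counts = Counter(r["StackResourceDriftStatus"] for r in resource_drifts)
    return {
        "total": len(resource_drifts),
        "by_status": dict(counts),
        "drifted": [r["LogicalResourceId"] for r in resource_drifts
                    if r["StackResourceDriftStatus"] != "IN_SYNC"],
    }

# Example with stubbed drift records:
sample = [
    {"LogicalResourceId": "DataBucket",
     "StackResourceDriftStatus": "MODIFIED"},
    {"LogicalResourceId": "AccessLogBucket",
     "StackResourceDriftStatus": "IN_SYNC"},
]
print(summarize_drift(sample))
```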

Change Sets

Change Sets preview changes before applying them. Always use them for production deployments to review impact.

Creating and Executing a Change Set

bash
#!/bin/bash
# deploy-with-changeset.sh

STACK_NAME=$1
TEMPLATE_FILE=$2
CHANGESET_NAME="${STACK_NAME}-changeset-$(date +%Y%m%d%H%M%S)"

if [ -z "$STACK_NAME" ] || [ -z "$TEMPLATE_FILE" ]; then
  echo "Usage: $0 <stack-name> <template-file>"
  exit 1
fi

echo "Creating change set for stack: $STACK_NAME"

# Create change set
aws cloudformation create-change-set \
  --stack-name "$STACK_NAME" \
  --template-body file://"$TEMPLATE_FILE" \
  --change-set-name "$CHANGESET_NAME" \
  --capabilities CAPABILITY_IAM \
  --change-set-type UPDATE
echo "Change set created: $CHANGESET_NAME"

# Wait for change set creation
aws cloudformation wait change-set-create-complete \
  --stack-name "$STACK_NAME" \
  --change-set-name "$CHANGESET_NAME"

# Display changes
echo ""
echo "=== Change Set Summary ==="
aws cloudformation describe-change-set \
  --stack-name "$STACK_NAME" \
  --change-set-name "$CHANGESET_NAME" \
  --query '[ChangeSetName,Status,ExecutionStatus,StatusReason]' \
  --output table

echo ""
echo "=== Detailed Changes ==="
aws cloudformation describe-change-set \
  --stack-name "$STACK_NAME" \
  --change-set-name "$CHANGESET_NAME" \
  --query 'Changes[*].ResourceChange.[Action,LogicalResourceId,ResourceType,Replacement]' \
  --output table

# Prompt for execution
echo ""
read -p "Execute this change set? (yes/no): " CONFIRM
if [ "$CONFIRM" = "yes" ]; then
  echo "Executing change set..."
  aws cloudformation execute-change-set \
    --stack-name "$STACK_NAME" \
    --change-set-name "$CHANGESET_NAME"

  echo "Waiting for stack update to complete..."
  aws cloudformation wait stack-update-complete \
    --stack-name "$STACK_NAME"

  echo "Stack update complete!"
else
  echo "Change set execution cancelled."
  echo "To execute later, run:"
  echo "aws cloudformation execute-change-set --stack-name $STACK_NAME --change-set-name $CHANGESET_NAME"
fi

Change Set with Parameter Overrides

bash
# Create change set with parameters
aws cloudformation create-change-set \
  --stack-name my-s3-stack \
  --template-body file://template.yaml \
  --change-set-name my-changeset \
  --parameters \
      ParameterKey=BucketName,ParameterValue=my-new-bucket \
      ParameterKey=Environment,ParameterValue=production \
  --capabilities CAPABILITY_IAM

# Generate an IMPORT change set to bring existing resources under the
# stack; --resources-to-import takes a JSON list of objects with
# ResourceType, LogicalResourceId, and ResourceIdentifier
aws cloudformation create-change-set \
  --stack-name my-s3-stack \
  --template-body file://new-template.yaml \
  --change-set-name migrate-to-new-template \
  --change-set-type IMPORT \
  --resources-to-import file://resources-to-import.json

Change Set Preview Script

python
import time
import boto3

def preview_changes(stack_name, template_body, parameters=None):
    cfn = boto3.client('cloudformation')
    changeset_name = f"{stack_name}-preview-{int(time.time())}"

    try:
        # Create change set
        kwargs = {
            'StackName': stack_name,
            'TemplateBody': template_body,
            'ChangeSetName': changeset_name,
            'ChangeSetType': 'UPDATE'
        }

        if parameters:
            kwargs['Parameters'] = parameters

        cfn.create_change_set(**kwargs)

        # Wait for creation
        waiter = cfn.get_waiter('change_set_create_complete')
        waiter.wait(StackName=stack_name, ChangeSetName=changeset_name)

        # Get change set description
        changeset = cfn.describe_change_set(
            StackName=stack_name,
            ChangeSetName=changeset_name
        )

        print(f"Change Set: {changeset['ChangeSetName']}")
        print(f"Status: {changeset['Status']}")
        print(f"Number of changes: {len(changeset.get('Changes', []))}")

        # Display each change
        for change in changeset.get('Changes', []):
            resource = change['ResourceChange']
            print(f"\n{resource['Action']} {resource['LogicalResourceId']} ({resource['ResourceType']})")

            # Replacement is the string 'True', 'False', or 'Conditional'
            if resource.get('Replacement') in ('True', 'Conditional'):
                print("  - This resource may be REPLACED (potential downtime)")

            # Property-level details live under the Target key
            for detail in resource.get('Details', []):
                target = detail.get('Target', {})
                print(f"  - {target.get('Attribute')}: {target.get('Name')}")

        return changeset

    except cfn.exceptions.AlreadyExistsException:
        print("Change set already exists")
        return None
    finally:
        # Clean up the preview change set
        try:
            cfn.delete_change_set(
                StackName=stack_name,
                ChangeSetName=changeset_name
            )
        except Exception:
            pass
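
Before executing a change set, the single most important thing to flag is which changes force resource replacement, since replacing an S3 bucket deletes and recreates it. Factoring that check out as a pure function over the `Changes` list from `describe_change_set` keeps the review scriptable and testable; a sketch, with field names following the DescribeChangeSet response shape:

```python
def replacement_risks(changes):
    """Return logical IDs of resources a change set would (or might)
    replace. `changes` is the Changes list from describe_change_set."""
    risky = []
    for change in changes:
        rc = change.get("ResourceChange", {})
        # Replacement is reported as the string 'True', 'False', or 'Conditional'
        if rc.get("Replacement") in ("True", "Conditional"):
            risky.append(rc.get("LogicalResourceId"))
    return risky

# Example with stubbed change records:
sample_changes = [
    {"ResourceChange": {"LogicalResourceId": "DataBucket",
                        "Action": "Modify", "Replacement": "True"}},
    {"ResourceChange": {"LogicalResourceId": "AccessLogBucket",
                        "Action": "Modify", "Replacement": "False"}},
]
print(replacement_risks(sample_changes))
```

An empty return value is a reasonable gate for auto-approving low-risk deployments.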

Change Set Best Practices

bash
# Best practices for change sets

# 1. Always use descriptive change set names
CHANGESET_NAME="update-bucket-config-$(date +%Y%m%d)"

# 2. Use the appropriate change set type (UPDATE for existing
#    stacks, CREATE for new ones)
aws cloudformation create-change-set \
  --stack-name my-stack \
  --change-set-type UPDATE \
  --template-body file://template.yaml

# 3. Review changes before execution
aws cloudformation describe-change-set \
  --stack-name my-stack \
  --change-set-name $CHANGESET_NAME \
  --query 'Changes[].ResourceChange'

# 4. Use the capabilities flag when the template touches IAM
aws cloudformation create-change-set \
  --stack-name my-stack \
  --capabilities CAPABILITY_IAM CAPABILITY_NAMED_IAM \
  --template-body file://template.yaml

# 5. Set an execution role for controlled deployments (the flag is --role-arn)
aws cloudformation create-change-set \
  --stack-name my-stack \
  --role-arn arn:aws:iam::123456789012:role/CloudFormationExecutionRole \
  --template-body file://template.yaml

Related Files

For detailed resource reference information, see:
  • reference.md - Complete AWS::S3::Bucket and AWS::S3::BucketPolicy properties
For comprehensive examples, see:
  • examples.md - Real-world S3 patterns and use cases