aws-sdk-java-v2-s3


AWS SDK for Java 2.x - Amazon S3

When to Use

Use this skill when:
  • Creating, listing, or deleting S3 buckets with proper configuration
  • Uploading or downloading objects from S3 with metadata and encryption
  • Working with multipart uploads for large files (>100MB) with error handling
  • Generating presigned URLs for temporary access to S3 objects
  • Copying or moving objects between S3 buckets with metadata preservation
  • Setting object metadata, storage classes, and access controls
  • Implementing S3 Transfer Manager for optimized file transfers
  • Integrating S3 with Spring Boot applications for cloud storage
  • Setting up S3 event notifications for object lifecycle management
  • Managing bucket policies, CORS configuration, and access controls
  • Implementing retry mechanisms and error handling for S3 operations
  • Testing S3 integrations with LocalStack for development environments

Dependencies

xml
<dependency>
    <groupId>software.amazon.awssdk</groupId>
    <artifactId>s3</artifactId>
    <version>2.20.0</version> <!-- use the latest stable version -->
</dependency>

<!-- For S3 Transfer Manager -->
<dependency>
    <groupId>software.amazon.awssdk</groupId>
    <artifactId>s3-transfer-manager</artifactId>
    <version>2.20.0</version> <!-- use the latest stable version -->
</dependency>

<!-- For async operations -->
<dependency>
    <groupId>software.amazon.awssdk</groupId>
    <artifactId>netty-nio-client</artifactId>
    <version>2.20.0</version> <!-- use the latest stable version -->
</dependency>

Client Setup

Basic Synchronous Client

java
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.S3Client;

S3Client s3Client = S3Client.builder()
    .region(Region.US_EAST_1)
    .build();

Basic Asynchronous Client

java
import software.amazon.awssdk.services.s3.S3AsyncClient;

S3AsyncClient s3AsyncClient = S3AsyncClient.builder()
    .region(Region.US_EAST_1)
    .build();

Configured Client with Retry Logic

java
import software.amazon.awssdk.http.apache.ApacheHttpClient;
import software.amazon.awssdk.core.retry.RetryPolicy;
import software.amazon.awssdk.core.retry.backoff.FullJitterBackoffStrategy;
import java.time.Duration;

S3Client s3Client = S3Client.builder()
    .region(Region.US_EAST_1)
    .httpClientBuilder(ApacheHttpClient.builder()
        .maxConnections(200)
        .connectionTimeout(Duration.ofSeconds(5)))
    .overrideConfiguration(b -> b
        .apiCallTimeout(Duration.ofSeconds(60))
        .apiCallAttemptTimeout(Duration.ofSeconds(30))
        .retryPolicy(RetryPolicy.builder()
            .numRetries(3)
            .backoffStrategy(FullJitterBackoffStrategy.builder()
                .baseDelay(Duration.ofSeconds(1))
                .maxBackoffTime(Duration.ofSeconds(30))
                .build())
            .build()))
    .build();

Basic Bucket Operations

Create Bucket

java
import software.amazon.awssdk.services.s3.model.*;

public void createBucket(S3Client s3Client, String bucketName) {
    try {
        CreateBucketRequest request = CreateBucketRequest.builder()
            .bucket(bucketName)
            .build();

        s3Client.createBucket(request);

        // Wait until bucket is ready
        HeadBucketRequest waitRequest = HeadBucketRequest.builder()
            .bucket(bucketName)
            .build();

        s3Client.waiter().waitUntilBucketExists(waitRequest);
        System.out.println("Bucket created successfully: " + bucketName);

    } catch (S3Exception e) {
        System.err.println("Error creating bucket: " + e.awsErrorDetails().errorMessage());
        throw e;
    }
}

List All Buckets

java
import java.util.List;
import java.util.stream.Collectors;

public List<String> listAllBuckets(S3Client s3Client) {
    ListBucketsResponse response = s3Client.listBuckets();

    return response.buckets().stream()
        .map(Bucket::name)
        .collect(Collectors.toList());
}

Check if Bucket Exists

java
public boolean bucketExists(S3Client s3Client, String bucketName) {
    try {
        HeadBucketRequest request = HeadBucketRequest.builder()
            .bucket(bucketName)
            .build();

        s3Client.headBucket(request);
        return true;

    } catch (NoSuchBucketException e) {
        return false;
    }
}

Basic Object Operations

Upload File to S3

java
import software.amazon.awssdk.core.sync.RequestBody;
import java.nio.file.Paths;

public void uploadFile(S3Client s3Client, String bucketName, String key, String filePath) {
    PutObjectRequest request = PutObjectRequest.builder()
        .bucket(bucketName)
        .key(key)
        .build();

    s3Client.putObject(request, RequestBody.fromFile(Paths.get(filePath)));
    System.out.println("File uploaded: " + key);
}

Download File from S3

java
import java.nio.file.Paths;

public void downloadFile(S3Client s3Client, String bucketName, String key, String destPath) {
    GetObjectRequest request = GetObjectRequest.builder()
        .bucket(bucketName)
        .key(key)
        .build();

    s3Client.getObject(request, Paths.get(destPath));
    System.out.println("File downloaded: " + destPath);
}

Get Object Metadata

java
import java.util.Map;

public Map<String, String> getObjectMetadata(S3Client s3Client, String bucketName, String key) {
    HeadObjectRequest request = HeadObjectRequest.builder()
        .bucket(bucketName)
        .key(key)
        .build();

    HeadObjectResponse response = s3Client.headObject(request);
    return response.metadata();
}
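List Objects with Pagination

The operations above cover single objects; listing the objects under a prefix is equally common. A minimal sketch using the V2 list paginator, which transparently issues follow-up requests when a bucket holds more than 1000 keys (the `humanReadableSize` helper is illustrative, not part of the SDK):

```java
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.ListObjectsV2Request;
import java.util.Locale;

public class ListObjectsExample {

    public static void listObjects(S3Client s3Client, String bucketName, String prefix) {
        ListObjectsV2Request request = ListObjectsV2Request.builder()
            .bucket(bucketName)
            .prefix(prefix)
            .build();

        // The paginator handles continuation tokens for result sets over 1000 keys
        s3Client.listObjectsV2Paginator(request).contents().forEach(obj ->
            System.out.println(obj.key() + " (" + humanReadableSize(obj.size()) + ")"));
    }

    // Small pure helper to format object sizes for display
    public static String humanReadableSize(long bytes) {
        if (bytes < 1024) return bytes + " B";
        double kb = bytes / 1024.0;
        if (kb < 1024) return String.format(Locale.ROOT, "%.1f KB", kb);
        return String.format(Locale.ROOT, "%.1f MB", kb / 1024.0);
    }
}
```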

Advanced Object Operations

Upload with Metadata and Encryption

java
public void uploadWithMetadata(S3Client s3Client, String bucketName, String key,
                                String filePath, Map<String, String> metadata) {
    PutObjectRequest request = PutObjectRequest.builder()
        .bucket(bucketName)
        .key(key)
        .metadata(metadata)
        .contentType("application/pdf")
        .serverSideEncryption(ServerSideEncryption.AES256)
        .storageClass(StorageClass.STANDARD_IA)
        .build();

    PutObjectResponse response = s3Client.putObject(request,
        RequestBody.fromFile(Paths.get(filePath)));

    System.out.println("Upload completed. ETag: " + response.eTag());
}

Copy Object Between Buckets

java
public void copyObject(S3Client s3Client, String sourceBucket, String sourceKey,
                       String destBucket, String destKey) {
    CopyObjectRequest request = CopyObjectRequest.builder()
        .sourceBucket(sourceBucket)
        .sourceKey(sourceKey)
        .destinationBucket(destBucket)
        .destinationKey(destKey)
        .build();

    s3Client.copyObject(request);
    System.out.println("Object copied: " + sourceKey + " -> " + destKey);
}

Delete Multiple Objects

java
import java.util.List;
import java.util.stream.Collectors;

public void deleteMultipleObjects(S3Client s3Client, String bucketName, List<String> keys) {
    List<ObjectIdentifier> objectIds = keys.stream()
        .map(key -> ObjectIdentifier.builder().key(key).build())
        .collect(Collectors.toList());

    Delete delete = Delete.builder()
        .objects(objectIds)
        .build();

    DeleteObjectsRequest request = DeleteObjectsRequest.builder()
        .bucket(bucketName)
        .delete(delete)
        .build();

    DeleteObjectsResponse response = s3Client.deleteObjects(request);

    response.deleted().forEach(deleted ->
        System.out.println("Deleted: " + deleted.key()));

    response.errors().forEach(error ->
        System.err.println("Failed to delete " + error.key() + ": " + error.message()));
}
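Multipart Upload for Large Files

"When to Use" mentions multipart uploads for files over 100MB, but no section shows them. The Transfer Manager below handles multipart automatically; when you need direct control over part boundaries, the low-level API can be sketched as follows (the 8 MB part size and the helper names are choices for this example, not SDK requirements; S3 requires non-final parts to be at least 5 MB):

```java
import software.amazon.awssdk.core.sync.RequestBody;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.*;
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class MultipartUploadExample {

    static final long PART_SIZE = 8L * 1024 * 1024; // 8 MB; S3 minimum is 5 MB

    public static void multipartUpload(S3Client s3, String bucket, String key, Path file)
            throws IOException {
        CreateMultipartUploadResponse created = s3.createMultipartUpload(
            b -> b.bucket(bucket).key(key));
        String uploadId = created.uploadId();

        List<CompletedPart> parts = new ArrayList<>();
        try (RandomAccessFile raf = new RandomAccessFile(file.toFile(), "r")) {
            long size = raf.length();
            int partCount = computePartCount(size, PART_SIZE);
            byte[] buffer = new byte[(int) PART_SIZE];

            for (int part = 1; part <= partCount; part++) {
                long remaining = size - (long) (part - 1) * PART_SIZE;
                int toRead = (int) Math.min(PART_SIZE, remaining);
                raf.readFully(buffer, 0, toRead);

                final int partNumber = part;
                UploadPartResponse resp = s3.uploadPart(
                    b -> b.bucket(bucket).key(key).uploadId(uploadId).partNumber(partNumber),
                    RequestBody.fromBytes(Arrays.copyOf(buffer, toRead)));
                parts.add(CompletedPart.builder().partNumber(partNumber).eTag(resp.eTag()).build());
            }

            s3.completeMultipartUpload(b -> b.bucket(bucket).key(key).uploadId(uploadId)
                .multipartUpload(m -> m.parts(parts)));
        } catch (RuntimeException | IOException e) {
            // Abort on failure so orphaned parts don't accrue storage charges
            s3.abortMultipartUpload(b -> b.bucket(bucket).key(key).uploadId(uploadId));
            throw e;
        }
    }

    // Number of parts needed to cover objectSize at the given part size
    public static int computePartCount(long objectSize, long partSize) {
        return (int) ((objectSize + partSize - 1) / partSize);
    }
}
```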

Presigned URLs

Generate Download URL

java
import software.amazon.awssdk.services.s3.presigner.S3Presigner;
import software.amazon.awssdk.services.s3.presigner.model.*;
import java.time.Duration;

public String generateDownloadUrl(String bucketName, String key) {
    try (S3Presigner presigner = S3Presigner.builder()
            .region(Region.US_EAST_1)
            .build()) {

        GetObjectRequest getObjectRequest = GetObjectRequest.builder()
            .bucket(bucketName)
            .key(key)
            .build();

        GetObjectPresignRequest presignRequest = GetObjectPresignRequest.builder()
            .signatureDuration(Duration.ofMinutes(10))
            .getObjectRequest(getObjectRequest)
            .build();

        PresignedGetObjectRequest presignedRequest = presigner.presignGetObject(presignRequest);

        return presignedRequest.url().toString();
    }
}

Generate Upload URL

java
public String generateUploadUrl(String bucketName, String key) {
    try (S3Presigner presigner = S3Presigner.create()) {

        PutObjectRequest putObjectRequest = PutObjectRequest.builder()
            .bucket(bucketName)
            .key(key)
            .build();

        PutObjectPresignRequest presignRequest = PutObjectPresignRequest.builder()
            .signatureDuration(Duration.ofMinutes(5))
            .putObjectRequest(putObjectRequest)
            .build();

        PresignedPutObjectRequest presignedRequest = presigner.presignPutObject(presignRequest);

        return presignedRequest.url().toString();
    }
}
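A presigned PUT URL is typically consumed by a client that holds no AWS credentials at all. A minimal sketch using the JDK 11+ `java.net.http` client; note that if the presign request signed headers such as `contentType`, the uploader must send matching headers or S3 rejects the request:

```java
import java.io.IOException;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class PresignedUploadClient {

    // Build a plain HTTP PUT against the presigned URL; no AWS SDK or credentials needed
    public static HttpRequest buildUploadRequest(String presignedUrl, byte[] body) {
        return HttpRequest.newBuilder(URI.create(presignedUrl))
            .PUT(HttpRequest.BodyPublishers.ofByteArray(body))
            .build();
    }

    public static int upload(String presignedUrl, byte[] body)
            throws IOException, InterruptedException {
        HttpResponse<Void> response = HttpClient.newHttpClient()
            .send(buildUploadRequest(presignedUrl, body), HttpResponse.BodyHandlers.discarding());
        return response.statusCode(); // 200 indicates success
    }
}
```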

S3 Transfer Manager

Upload with Transfer Manager

java
import software.amazon.awssdk.transfer.s3.*;
import software.amazon.awssdk.transfer.s3.model.*;
import software.amazon.awssdk.transfer.s3.progress.LoggingTransferListener;
import java.nio.file.Paths;

public void uploadWithTransferManager(String bucketName, String key, String filePath) {
    try (S3TransferManager transferManager = S3TransferManager.create()) {

        UploadFileRequest uploadRequest = UploadFileRequest.builder()
            .putObjectRequest(req -> req
                .bucket(bucketName)
                .key(key))
            .source(Paths.get(filePath))
            // Log progress; implement TransferListener for custom progress handling
            .addTransferListener(LoggingTransferListener.create())
            .build();

        FileUpload upload = transferManager.uploadFile(uploadRequest);

        CompletedFileUpload result = upload.completionFuture().join();

        System.out.println("Upload complete. ETag: " + result.response().eTag());
    }
}

Download with Transfer Manager

java
public void downloadWithTransferManager(String bucketName, String key, String destPath) {
    try (S3TransferManager transferManager = S3TransferManager.create()) {

        DownloadFileRequest downloadRequest = DownloadFileRequest.builder()
            .getObjectRequest(req -> req
                .bucket(bucketName)
                .key(key))
            .destination(Paths.get(destPath))
            .build();

        FileDownload download = transferManager.downloadFile(downloadRequest);

        CompletedFileDownload result = download.completionFuture().join();

        System.out.println("Download complete. Size: " + result.response().contentLength());
    }
}
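Upload a Directory with Transfer Manager

Transfer Manager can also transfer whole directory trees. A sketch of `uploadDirectory`; the `normalizePrefix` helper is illustrative (it just ensures the S3 key prefix ends with a slash), and note that directory transfers don't fail fast, so per-file failures must be inspected afterwards:

```java
import software.amazon.awssdk.transfer.s3.S3TransferManager;
import software.amazon.awssdk.transfer.s3.model.CompletedDirectoryUpload;
import software.amazon.awssdk.transfer.s3.model.DirectoryUpload;
import software.amazon.awssdk.transfer.s3.model.UploadDirectoryRequest;
import java.nio.file.Paths;

public class DirectoryUploadExample {

    public static void uploadDirectory(String bucketName, String sourceDir, String prefix) {
        try (S3TransferManager transferManager = S3TransferManager.create()) {

            DirectoryUpload upload = transferManager.uploadDirectory(
                UploadDirectoryRequest.builder()
                    .bucket(bucketName)
                    .s3Prefix(normalizePrefix(prefix))
                    .source(Paths.get(sourceDir))
                    .build());

            CompletedDirectoryUpload result = upload.completionFuture().join();

            // Directory transfers complete even if individual files fail; report those here
            result.failedTransfers().forEach(failure ->
                System.err.println("Failed: " + failure.exception().getMessage()));
        }
    }

    // Ensure the key prefix ends with "/" so keys nest under it like a folder
    public static String normalizePrefix(String prefix) {
        if (prefix == null || prefix.isEmpty()) return "";
        return prefix.endsWith("/") ? prefix : prefix + "/";
    }
}
```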

Spring Boot Integration

Configuration Properties

java
import org.springframework.boot.context.properties.ConfigurationProperties;

@ConfigurationProperties(prefix = "aws.s3")
public class S3Properties {
    private String accessKey;
    private String secretKey;
    private String region = "us-east-1";
    private String endpoint;
    private String defaultBucket;
    private boolean asyncEnabled = false;
    private boolean transferManagerEnabled = true;

    // Getters and setters
    public String getAccessKey() { return accessKey; }
    public void setAccessKey(String accessKey) { this.accessKey = accessKey; }
    // ... other getters and setters
}
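These properties bind from `application.yml` via Spring's relaxed binding (`default-bucket` maps to `defaultBucket`). A sketch with placeholder values — the bucket name and endpoint are examples, and the class must be registered with `@EnableConfigurationProperties(S3Properties.class)` or `@ConfigurationPropertiesScan`:

```yaml
aws:
  s3:
    region: us-east-1
    default-bucket: my-app-bucket
    # Uncomment for LocalStack in development:
    # endpoint: http://localhost:4566
    async-enabled: false
    transfer-manager-enabled: true
```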

S3 Configuration Class

java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import software.amazon.awssdk.auth.credentials.AwsBasicCredentials;
import software.amazon.awssdk.auth.credentials.StaticCredentialsProvider;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.S3AsyncClient;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.transfer.s3.S3TransferManager;
import java.net.URI;

@Configuration
public class S3Configuration {

    private final S3Properties properties;

    public S3Configuration(S3Properties properties) {
        this.properties = properties;
    }

    @Bean
    public S3Client s3Client() {
        S3Client.Builder builder = S3Client.builder()
            .region(Region.of(properties.getRegion()));

        if (properties.getAccessKey() != null && properties.getSecretKey() != null) {
            builder.credentialsProvider(StaticCredentialsProvider.create(
                AwsBasicCredentials.create(
                    properties.getAccessKey(),
                    properties.getSecretKey())));
        }

        if (properties.getEndpoint() != null) {
            builder.endpointOverride(URI.create(properties.getEndpoint()));
        }

        return builder.build();
    }

    @Bean
    public S3AsyncClient s3AsyncClient() {
        S3AsyncClient.Builder builder = S3AsyncClient.builder()
            .region(Region.of(properties.getRegion()));

        if (properties.getAccessKey() != null && properties.getSecretKey() != null) {
            builder.credentialsProvider(StaticCredentialsProvider.create(
                AwsBasicCredentials.create(
                    properties.getAccessKey(),
                    properties.getSecretKey())));
        }

        if (properties.getEndpoint() != null) {
            builder.endpointOverride(URI.create(properties.getEndpoint()));
        }

        return builder.build();
    }

    @Bean
    public S3TransferManager s3TransferManager() {
        // S3TransferManager requires an asynchronous S3 client
        return S3TransferManager.builder()
            .s3Client(s3AsyncClient())
            .build();
    }
}

S3 Service

java
import org.springframework.stereotype.Service;
import lombok.RequiredArgsConstructor;
import reactor.core.publisher.Flux;
import software.amazon.awssdk.core.ResponseInputStream;
import software.amazon.awssdk.core.sync.RequestBody;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.S3AsyncClient;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.*;
import software.amazon.awssdk.services.s3.presigner.S3Presigner;
import software.amazon.awssdk.services.s3.presigner.model.GetObjectPresignRequest;
import software.amazon.awssdk.transfer.s3.S3TransferManager;
import java.io.IOException;
import java.nio.file.Path;
import java.time.Duration;
import java.util.concurrent.CompletableFuture;

@Service
@RequiredArgsConstructor
public class S3Service {

    private final S3Client s3Client;
    private final S3AsyncClient s3AsyncClient;
    private final S3TransferManager transferManager;
    private final S3Properties properties;

    public CompletableFuture<Void> uploadFileAsync(String key, Path file) {
        PutObjectRequest request = PutObjectRequest.builder()
            .bucket(properties.getDefaultBucket())
            .key(key)
            .build();

        return CompletableFuture.runAsync(() -> {
            s3Client.putObject(request, RequestBody.fromFile(file));
        });
    }

    public CompletableFuture<byte[]> downloadFileAsync(String key) {
        GetObjectRequest request = GetObjectRequest.builder()
            .bucket(properties.getDefaultBucket())
            .key(key)
            .build();

        return CompletableFuture.supplyAsync(() -> {
            try (ResponseInputStream<GetObjectResponse> response = s3Client.getObject(request)) {
                return response.readAllBytes();
            } catch (IOException e) {
                throw new RuntimeException("Failed to read S3 object", e);
            }
        });
    }

    public CompletableFuture<String> generatePresignedUrl(String key, Duration duration) {
        return CompletableFuture.supplyAsync(() -> {
            try (S3Presigner presigner = S3Presigner.builder()
                    .region(Region.of(properties.getRegion()))
                    .build()) {

                GetObjectRequest getRequest = GetObjectRequest.builder()
                    .bucket(properties.getDefaultBucket())
                    .key(key)
                    .build();

                GetObjectPresignRequest presignRequest = GetObjectPresignRequest.builder()
                    .signatureDuration(duration)
                    .getObjectRequest(getRequest)
                    .build();

                return presigner.presignGetObject(presignRequest).url().toString();
            }
        });
    }

    public Flux<S3Object> listObjects(String prefix) {
        ListObjectsV2Request request = ListObjectsV2Request.builder()
            .bucket(properties.getDefaultBucket())
            .prefix(prefix)
            .build();

        return Flux.create(sink -> {
            s3Client.listObjectsV2Paginator(request)
                .contents()
                .forEach(sink::next);
            sink.complete();
        });
    }
}

Examples

Basic File Upload Example

java
public class S3UploadExample {
    public static void main(String[] args) {
        // Initialize client
        S3Client s3Client = S3Client.builder()
            .region(Region.US_EAST_1)
            .build();

        String bucketName = "my-example-bucket";
        String filePath = "document.pdf";
        String key = "uploads/document.pdf";

        // Create bucket if it doesn't exist
        if (!bucketExists(s3Client, bucketName)) {
            createBucket(s3Client, bucketName);
        }

        // Upload file with user-defined metadata.
        // Note: these entries are stored as x-amz-meta-* headers; set the real
        // Content-Type via PutObjectRequest.contentType(), not user metadata
        Map<String, String> metadata = Map.of(
            "author", "John Doe",
            "upload-date", java.time.LocalDate.now().toString()
        );

        uploadWithMetadata(s3Client, bucketName, key, filePath, metadata);

        // Generate presigned URL
        String downloadUrl = generateDownloadUrl(bucketName, key);
        System.out.println("Download URL: " + downloadUrl);

        // Close client
        s3Client.close();
    }
}

Batch File Processing Example

java
import java.nio.file.*;
import java.util.stream.*;

public class S3BatchProcessing {
    public void processDirectoryUpload(S3Client s3Client, String bucketName, String directoryPath) {
        try (Stream<Path> paths = Files.walk(Paths.get(directoryPath))) {
            List<CompletableFuture<Void>> futures = paths
                .filter(Files::isRegularFile)
                .map(path -> {
                    // Key is the path relative to the upload root; the bucket
                    // name must not be part of the object key
                    String key = Paths.get(directoryPath).relativize(path)
                        .toString().replace('\\', '/');
                    return CompletableFuture.runAsync(() -> {
                        uploadFile(s3Client, bucketName, key, path.toString());
                    });
                })
                .collect(Collectors.toList());

            // Wait for all uploads to complete
            CompletableFuture.allOf(
                futures.toArray(new CompletableFuture[0])
            ).join();

            System.out.println("All files uploaded successfully");
        } catch (IOException e) {
            throw new RuntimeException("Failed to process directory", e);
        }
    }
}

Best Practices

Performance Optimization

  1. Use S3 Transfer Manager: Automatically handles multipart uploads, parallel transfers, and progress tracking for files >100MB
  2. Reuse S3 Client: Clients are thread-safe and should be reused throughout the application lifecycle
  3. Enable async operations: Use S3AsyncClient for I/O-bound operations to improve throughput
  4. Configure proper timeouts: Set appropriate timeouts for large file operations
  5. Use connection pooling: Configure HTTP client for optimal connection management
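Transfer Manager sizes and parallelizes multipart uploads internally, but the arithmetic behind the ">100MB, use multipart" rule is easy to see. The sketch below is illustrative only: the 8 MiB default part size is an assumption, while the 10,000-part maximum and 5 MiB minimum part size are documented S3 limits.

```java
// Illustrative sketch: pick a multipart part size for a given file size,
// staying within S3's documented limits. Transfer Manager does this for you.
class PartSizeCalculator {
    static final long MIN_PART_SIZE = 5L * 1024 * 1024;      // S3 minimum part size (5 MiB)
    static final long DEFAULT_PART_SIZE = 8L * 1024 * 1024;  // assumed default (8 MiB)
    static final long MAX_PARTS = 10_000;                    // S3 hard limit on part count

    // Grow the part size when the default would exceed the 10,000-part limit
    static long choosePartSize(long fileSize) {
        long partSize = DEFAULT_PART_SIZE;
        if ((fileSize + partSize - 1) / partSize > MAX_PARTS) {
            partSize = (fileSize + MAX_PARTS - 1) / MAX_PARTS;
        }
        return Math.max(partSize, MIN_PART_SIZE);
    }

    static long partCount(long fileSize) {
        long partSize = choosePartSize(fileSize);
        return (fileSize + partSize - 1) / partSize;
    }
}
```

A 100 MiB file splits into 13 parts at the 8 MiB default, while a 100 GiB file forces a larger part size so the count stays under 10,000.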

Security Considerations

  1. Use temporary credentials: Always use IAM roles or AWS STS for short-lived access tokens
  2. Enable server-side encryption: Use AES-256 or AWS KMS for sensitive data
  3. Implement access controls: Use bucket policies and IAM roles instead of access keys in production
  4. Validate object metadata: Sanitize user-provided metadata to prevent header injection
  5. Use presigned URLs: Avoid exposing credentials by using temporary access URLs
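Point 4 matters because user-defined metadata travels as `x-amz-meta-*` HTTP headers, so unfiltered CR/LF characters enable header injection. A minimal sanitizer sketch, assuming the policy is to strip CR/LF and enforce the 2 KB user-metadata budget (the exact rules are an application decision):

```java
import java.nio.charset.StandardCharsets;
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal metadata sanitizer: strips header-injection characters and
// enforces S3's 2 KB cap on total user-defined metadata.
class MetadataSanitizer {
    static final int MAX_METADATA_BYTES = 2 * 1024;

    static Map<String, String> sanitize(Map<String, String> raw) {
        Map<String, String> clean = new LinkedHashMap<>();
        int total = 0;
        for (Map.Entry<String, String> e : raw.entrySet()) {
            // CR/LF in keys or values would terminate the header prematurely
            String key = e.getKey().replaceAll("[\\r\\n]", "");
            String value = e.getValue().replaceAll("[\\r\\n]", "");
            total += key.getBytes(StandardCharsets.UTF_8).length
                   + value.getBytes(StandardCharsets.UTF_8).length;
            if (total > MAX_METADATA_BYTES) {
                throw new IllegalArgumentException("metadata exceeds 2 KB limit");
            }
            clean.put(key, value);
        }
        return clean;
    }
}
```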

Error Handling

  1. Implement retry logic: Network operations should have exponential backoff retry strategies
  2. Handle throttling: Back off and retry on S3 503 Slow Down responses (surfaced as S3Exception with status code 503)
  3. Validate object existence: Check if objects exist before operations that require them
  4. Clean up failed operations: Abort multipart uploads that fail
  5. Log appropriately: Log successful operations and errors for monitoring
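The SDK's built-in retry policy already applies exponential backoff, so hand-rolled retries are rarely needed; the delay arithmetic is still useful when wrapping operations the SDK does not retry for you. A sketch of the "full jitter" variant, with base and cap values chosen as assumptions:

```java
import java.util.concurrent.ThreadLocalRandom;

// Exponential backoff with full jitter: the deterministic cap doubles per
// attempt up to a maximum, and the actual sleep is uniform in [0, cap].
class Backoff {
    static final long BASE_DELAY_MS = 100;     // assumed starting delay
    static final long MAX_DELAY_MS = 20_000;   // assumed ceiling

    // Deterministic cap: base * 2^attempt, clamped to the maximum
    static long capMs(int attempt) {
        return Math.min(MAX_DELAY_MS, BASE_DELAY_MS << attempt);
    }

    // Full jitter spreads retries out so throttled clients do not retry in sync
    static long nextDelayMs(int attempt) {
        return ThreadLocalRandom.current().nextLong(capMs(attempt) + 1);
    }
}
```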

Cost Optimization

  1. Use appropriate storage classes: Choose STANDARD, STANDARD_IA, INTELLIGENT_TIERING based on access patterns
  2. Implement lifecycle policies: Automatically transition or expire objects
  3. Enable object versioning: For important data that needs retention
  4. Monitor usage: Track data transfer and storage costs
  5. Minimize API calls: Use batch operations when possible
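The storage-class choice in point 1 can be captured as a simple decision rule. The thresholds below are assumptions for illustration, not AWS guidance; a real policy should also weigh retrieval fees, minimum storage durations, and object size.

```java
// Illustrative heuristic: map an expected access pattern to an S3 storage
// class name. Thresholds are assumed, not prescribed by AWS.
class StorageClassChooser {
    static String choose(double accessesPerMonth, boolean patternKnown) {
        if (!patternKnown) {
            return "INTELLIGENT_TIERING"; // let S3 tier the object automatically
        }
        if (accessesPerMonth >= 1) {
            return "STANDARD";            // accessed at least monthly
        }
        return "STANDARD_IA";             // infrequent access, lower storage cost
    }
}
```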

Constraints and Limitations

  • File size limits: Single PUT operations limited to 5GB; use multipart uploads for larger files
  • Batch operations: Maximum 1000 objects per DeleteObjects operation
  • Metadata size: User-defined metadata limited to 2KB
  • Concurrent transfers: Transfer Manager handles up to 100 concurrent transfers by default
  • Region consistency: Cross-region operations may incur additional costs and latency
  • Consistency model: S3 now provides strong read-after-write consistency for new objects and overwrites, so freshly uploaded objects are immediately visible to direct S3 reads
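The 1000-object DeleteObjects limit above means large deletions must be chunked. A pure helper sketch that partitions a key list into batches of at most 1000, each of which would then feed one DeleteObjectsRequest:

```java
import java.util.ArrayList;
import java.util.List;

// Partition object keys into batches that respect S3's DeleteObjects limit.
class DeleteBatcher {
    static final int MAX_KEYS_PER_DELETE = 1000; // S3 per-request maximum

    static List<List<String>> batches(List<String> keys) {
        List<List<String>> out = new ArrayList<>();
        for (int i = 0; i < keys.size(); i += MAX_KEYS_PER_DELETE) {
            out.add(keys.subList(i, Math.min(i + MAX_KEYS_PER_DELETE, keys.size())));
        }
        return out;
    }
}
```

Deleting 2500 keys therefore takes three API calls (1000 + 1000 + 500) rather than 2500 single-object deletes.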

References

For more detailed information, see the AWS SDK for Java 2.x Developer Guide and the related skills below.

Related Skills

  • aws-sdk-java-v2-core
    - Core AWS SDK patterns and configuration
  • spring-boot-dependency-injection
    - Spring dependency injection patterns
  • unit-test-service-layer
    - Testing service layer patterns
  • unit-test-wiremock-rest-api
    - Testing external API integrations