gcp-gke-deployment-strategies


GKE Deployment Strategies


Purpose


Deploy applications to GKE with zero-downtime updates using rolling deployments and health checks. This skill covers deployment configuration, monitoring rollout progress, rollback procedures, and Spring Boot health probe integration.

When to Use


Use this skill when you need to:
  • Deploy a new version of an application to GKE
  • Configure rolling update strategies for zero-downtime deployments
  • Set up liveness and readiness probes for Spring Boot apps
  • Monitor rollout progress and verify deployment health
  • Roll back failed deployments
  • Implement blue-green deployment patterns
  • Debug deployment issues
Trigger phrases: "deploy to GKE", "rolling update", "rollback deployment", "configure health probes", "zero-downtime deployment"

Table of Contents

  • Quick Start
  • Instructions
  • Examples
  • Requirements
  • See Also

Quick Start


Standard zero-downtime rolling update:
```bash
# 1. Configure rolling update strategy
kubectl apply -f deployment.yaml   # With maxSurge: 50%, maxUnavailable: 0%

# 2. Update image
kubectl set image deployment/supplier-charges-hub \
  supplier-charges-hub-container=new-image:v2.0.0 \
  -n wtr-supplier-charges

# 3. Monitor rollout
kubectl rollout status deployment/supplier-charges-hub \
  -n wtr-supplier-charges

# 4. Verify (or rollback if needed)
kubectl rollout undo deployment/supplier-charges-hub \
  -n wtr-supplier-charges
```

Instructions


Step 1: Configure Rolling Update Strategy


Set up zero-downtime deployments with proper surge and unavailability settings:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: supplier-charges-hub
spec:
  replicas: 2
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 50%          # Can create 1 extra pod (2 * 50% = 1)
      maxUnavailable: 0%     # Zero downtime - no pods removed until new ones ready
  minReadySeconds: 10        # Wait 10s after pod is ready before proceeding
  progressDeadlineSeconds: 300  # Fail rollout if not complete in 5 min
  revisionHistoryLimit: 3    # Keep last 3 revisions for rollback
  selector:
    matchLabels:
      app: supplier-charges-hub
  template:
    metadata:
      labels:
        app: supplier-charges-hub
    spec:
      containers:
      - name: supplier-charges-hub-container
        image: europe-west2-docker.pkg.dev/.../supplier-charges-hub:latest
        livenessProbe:
          httpGet:
            path: /actuator/health/liveness
            port: 8080
          initialDelaySeconds: 20
          periodSeconds: 15
          failureThreshold: 3
        readinessProbe:
          httpGet:
            path: /actuator/health/readiness
            port: 8080
          initialDelaySeconds: 20
          periodSeconds: 15
          failureThreshold: 3
```
Strategy Explanation:
  • maxSurge: 50%
    - Allows 1 extra pod during rollout (temporary spike in resources)
  • maxUnavailable: 0%
    - No pods removed until replacement is ready (zero downtime)
  • minReadySeconds: 10
    - Prevents premature progression if pod is flaky
  • progressDeadlineSeconds: 300
    - Detects stuck rollouts after 5 minutes
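To make the surge arithmetic concrete, here is a small illustrative helper (not part of the skill's tooling) that mirrors how Kubernetes converts percentage settings into absolute pod counts: `maxSurge` rounds up, `maxUnavailable` rounds down.

```shell
#!/bin/bash
# Illustrative sketch: derive rollout pod-count bounds from replica count,
# maxSurge %, and maxUnavailable %. Kubernetes rounds surge up and
# unavailable down when resolving percentages to absolute pod counts.
rollout_bounds() {
  local replicas=$1 surge_pct=$2 unavail_pct=$3
  # maxSurge as an absolute number, rounded up
  local surge=$(( (replicas * surge_pct + 99) / 100 ))
  # maxUnavailable as an absolute number, rounded down
  local unavail=$(( replicas * unavail_pct / 100 ))
  local min_ready=$(( replicas - unavail ))   # fewest pods serving at any moment
  local max_pods=$(( replicas + surge ))      # most pods existing at any moment
  echo "$min_ready $max_pods"
}

# With replicas=2, maxSurge=50%, maxUnavailable=0% (the manifest above):
rollout_bounds 2 50 0   # at least 2 pods ready, at most 3 pods total
```

With this configuration the deployment never drops below the desired replica count, at the cost of briefly running one extra pod's worth of resources.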

Step 2: Configure Spring Boot Health Probes


Enable Spring Boot Actuator health endpoints that Kubernetes will check:
```yaml
# application.yml
management:
  endpoint:
    health:
      probes:
        enabled: true
      show-details: always
  endpoints:
    web:
      exposure:
        include: health,info,metrics,prometheus
  health:
    livenessState:
      enabled: true
    readinessState:
      enabled: true
```

**Health Endpoint Distinctions:**

| Probe | Path | Purpose | Failure Action |
|-------|------|---------|----------------|
| **Liveness** | `/actuator/health/liveness` | Is the app broken? | Restart pod |
| **Readiness** | `/actuator/health/readiness` | Can the app serve requests? | Stop traffic |
| **Startup** | `/actuator/health/liveness` | Slow startup complete? | Wait before liveness checks |
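The startup probe in the table is not configured in the sample manifest. A minimal sketch of adding one, assuming the same Actuator liveness path; the timing values here are illustrative. While a startup probe is failing, Kubernetes holds off liveness and readiness checks, and `failureThreshold * periodSeconds` sets the total startup budget (60s below):

```yaml
startupProbe:
  httpGet:
    path: /actuator/health/liveness
    port: 8080
  periodSeconds: 5        # probe every 5s during startup
  failureThreshold: 12    # allow up to 12 * 5s = 60s to start
```

This lets slow-starting apps use a generous startup window without inflating `initialDelaySeconds` on the liveness probe.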

Step 3: Deploy Application


Apply your deployment manifest:
```bash
kubectl apply -f deployment.yaml -n wtr-supplier-charges
```
Kubernetes will immediately start the rollout with your configured strategy.

Step 4: Monitor Rollout Progress


Track the deployment update in real-time:
```bash
# Watch rollout status (blocks until complete)
kubectl rollout status deployment/supplier-charges-hub \
  -n wtr-supplier-charges \
  --timeout=5m

# Or check status without waiting
kubectl get deployment supplier-charges-hub \
  -n wtr-supplier-charges \
  -o wide
```

**Expected Output During Rollout:**
```
NAME                   READY   UP-TO-DATE   AVAILABLE   AGE
supplier-charges-hub   2/2     1            2           5m
```

Shows: 1 new pod being created, 2 old pods still serving traffic.
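The same completion criterion that `kubectl rollout status` waits on can be sketched as a small illustrative function (hypothetical, not a real kubectl flag): a rollout is done when every desired replica is both up-to-date and available.

```shell
#!/bin/bash
# Illustrative sketch: decide rollout completion from the DESIRED,
# UP-TO-DATE, and AVAILABLE counts shown by `kubectl get deployment`.
rollout_complete() {
  local desired=$1 up_to_date=$2 available=$3
  [ "$up_to_date" -eq "$desired" ] && [ "$available" -eq "$desired" ]
}

# Mid-rollout sample from the output above: 2 desired, 1 up-to-date, 2 available
if rollout_complete 2 1 2; then echo "complete"; else echo "in progress"; fi   # prints "in progress"

# After the new ReplicaSet finishes scaling up
if rollout_complete 2 2 2; then echo "complete"; else echo "in progress"; fi   # prints "complete"
```

This is what makes `--timeout=5m` meaningful: the command keeps polling until the condition holds or the deadline passes.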

Step 5: Verify Health Checks Passing


Check that pods are actually ready:
```bash
# View detailed pod status
kubectl get pods -n wtr-supplier-charges -o wide

# Check health probe status
kubectl describe pod <pod-name> -n wtr-supplier-charges | grep -A 5 "Readiness"

# Test health endpoint manually
kubectl exec deployment/supplier-charges-hub -n wtr-supplier-charges -- \
  curl -s localhost:8080/actuator/health/readiness | jq .
```

Step 6: View Rollout History


Track previous deployments for rollback capability:
```bash
# List all revisions
kubectl rollout history deployment/supplier-charges-hub \
  -n wtr-supplier-charges

# Details of a specific revision
kubectl rollout history deployment/supplier-charges-hub \
  -n wtr-supplier-charges \
  --revision=1
```

Step 7: Rollback If Needed


If deployment fails or has issues, quickly rollback:
```bash
# Rollback to previous version
kubectl rollout undo deployment/supplier-charges-hub \
  -n wtr-supplier-charges

# Rollback to a specific revision
kubectl rollout undo deployment/supplier-charges-hub \
  -n wtr-supplier-charges \
  --to-revision=2

# Monitor rollback status
kubectl rollout status deployment/supplier-charges-hub \
  -n wtr-supplier-charges
```

Examples


Example 1: Complete Deployment with Rolling Update


```yaml
# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: supplier-charges-hub
  namespace: wtr-supplier-charges
spec:
  replicas: 2
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 50%
      maxUnavailable: 0%
  minReadySeconds: 10
  progressDeadlineSeconds: 300
  revisionHistoryLimit: 3
  selector:
    matchLabels:
      app: supplier-charges-hub
  template:
    metadata:
      labels:
        app: supplier-charges-hub
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "8080"
        prometheus.io/path: "/actuator/prometheus"
    spec:
      serviceAccountName: app-runtime
      containers:
      - name: supplier-charges-hub-container
        image: europe-west2-docker.pkg.dev/ecp-artifact-registry/wtr-supplier-charges-container-images/supplier-charges-hub:v1.2.3
        imagePullPolicy: IfNotPresent
        ports:
        - name: http
          containerPort: 8080
          protocol: TCP
        env:
        - name: SPRING_PROFILES_ACTIVE
          value: "labs"
        livenessProbe:
          httpGet:
            path: /actuator/health/liveness
            port: http
            scheme: HTTP
          initialDelaySeconds: 20
          periodSeconds: 15
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 3
        readinessProbe:
          httpGet:
            path: /actuator/health/readiness
            port: http
            scheme: HTTP
          initialDelaySeconds: 20
          periodSeconds: 15
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 3
        resources:
          requests:
            cpu: 1000m
            memory: 2Gi
          limits:
            cpu: 1000m
            memory: 2Gi
        securityContext:
          runAsNonRoot: true
          allowPrivilegeEscalation: false
      - name: cloud-sql-proxy
        image: gcr.io/cloud-sql-connectors/cloud-sql-proxy:2.11.4
        args:
        - "--structured-logs"
        - "--port=5432"
        - "--auto-iam-authn"
        - "$(DB_CONNECTION_NAME)"
        env:
        - name: DB_CONNECTION_NAME
          valueFrom:
            configMapKeyRef:
              name: db-config
              key: DB_CONNECTION_NAME
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
          limits:
            cpu: 250m
            memory: 256Mi
        securityContext:
          runAsNonRoot: true
          allowPrivilegeEscalation: false
```

Deploy it:
```bash
kubectl apply -f deployment.yaml
kubectl rollout status deployment/supplier-charges-hub -n wtr-supplier-charges
```

Example 2: Automated Deployment Update


```bash
#!/bin/bash
# Update deployment with automated rollback on failure

DEPLOYMENT="supplier-charges-hub"
NAMESPACE="wtr-supplier-charges"
IMAGE="europe-west2-docker.pkg.dev/ecp-artifact-registry/wtr-supplier-charges-container-images/supplier-charges-hub:${1:-latest}"

echo "Deploying: $IMAGE"

# Update image
kubectl set image deployment/$DEPLOYMENT \
  supplier-charges-hub-container=$IMAGE \
  -n $NAMESPACE

# Wait for rollout with timeout
if kubectl rollout status deployment/$DEPLOYMENT \
    -n $NAMESPACE \
    --timeout=5m; then
  echo "Deployment successful!"
  exit 0
else
  echo "Deployment failed! Rolling back..."
  kubectl rollout undo deployment/$DEPLOYMENT -n $NAMESPACE
  kubectl rollout status deployment/$DEPLOYMENT -n $NAMESPACE
  exit 1
fi
```

Example 3: Blue-Green Deployment (Advanced)


```bash
#!/bin/bash
# Blue-green deployment for zero-risk updates

BLUE_VERSION="v1.2.2"
GREEN_VERSION="v1.2.3"
SERVICE="supplier-charges-hub"
NAMESPACE="wtr-supplier-charges"

echo "Deploying GREEN version: $GREEN_VERSION"

# Deploy green version (separate deployment)
kubectl apply -f deployment-green.yaml

# Verify green is healthy
echo "Waiting for green deployment to be ready..."
kubectl rollout status deployment/supplier-charges-hub-green \
  -n $NAMESPACE \
  --timeout=5m

# Test green version via separate service/ingress
echo "Testing green version..."
GREEN_POD=$(kubectl get pods -l version=green -n $NAMESPACE -o jsonpath='{.items[0].metadata.name}')
kubectl exec $GREEN_POD -n $NAMESPACE -- \
  curl -s localhost:8080/actuator/health/readiness

# Switch traffic to green by updating service selector
echo "Switching traffic from BLUE to GREEN..."
kubectl patch service $SERVICE \
  -n $NAMESPACE \
  -p '{"spec":{"selector":{"version":"green"}}}'

# Verify
echo "Verifying GREEN is serving traffic..."
kubectl get endpoints $SERVICE -n $NAMESPACE

# Keep blue around for quick rollback
echo "Blue version $BLUE_VERSION still available for immediate rollback"
```

Example 4: Health Probe Debugging


```bash
#!/bin/bash
# Debug health check issues

POD="$1"
NAMESPACE="wtr-supplier-charges"

if [ -z "$POD" ]; then
  POD=$(kubectl get pods -n $NAMESPACE -o jsonpath='{.items[0].metadata.name}')
fi

echo "=== Health Probe Configuration ==="
kubectl describe pod $POD -n $NAMESPACE | grep -A 15 "Probes"

echo ""
echo "=== Testing Liveness Probe Endpoint ==="
kubectl exec $POD -n $NAMESPACE -- \
  curl -v http://localhost:8080/actuator/health/liveness

echo ""
echo "=== Testing Readiness Probe Endpoint ==="
kubectl exec $POD -n $NAMESPACE -- \
  curl -v http://localhost:8080/actuator/health/readiness

echo ""
echo "=== Full Health Status ==="
kubectl exec $POD -n $NAMESPACE -- \
  curl -s http://localhost:8080/actuator/health | jq .
```

Requirements


  • GKE cluster with running pods
  • Spring Boot application with Actuator enabled (endpoints exposed)
  • kubectl access to the cluster
  • Deployment resource already created
  • Health endpoints accessible on port 8080 (customizable)

See Also


  • gcp-gke-cluster-setup - Understand cluster configuration
  • gcp-gke-troubleshooting - Debug deployment issues
  • gcp-gke-monitoring-observability - Monitor deployments