https://drive.google.com/file/d/1qKdVhVyAGGDDu2tDHUG0w-p19PFrTH_4/view?usp=sharing

🔍 YAML Breakdown

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-deploy
  annotations:
    kubernetes.io/change-cause: "changing version to new"
spec:
  revisionHistoryLimit: 5        # ← Keep the last 5 old ReplicaSets
  replicas: 10                   # ← Run 10 Pods
  minReadySeconds: 10            # ← A new Pod must stay Ready 10s before it counts as available
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1        # ← At most 1 Pod unavailable during the update
      maxSurge: 1              # ← At most 1 extra Pod during the update
  selector:
    matchLabels:
      app: hello-world
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
      - name: hello-pod
        image: lovelearnlinux/webserver:v1
        readinessProbe:          # ← Critical for zero-downtime
          exec: { command: ["cat", "/var/www/html/index.html"] }
          initialDelaySeconds: 10
          periodSeconds: 10
        livenessProbe:           # ← Restart if app crashes
          exec: { command: ["cat", "/var/www/html/index.html"] }
          failureThreshold: 6    # ← Allow 6 failures before restart
        resources:
          requests:
            cpu: "50m"
            memory: "100Mi"
          limits:
            cpu: "100m"
            memory: "128Mi"
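Before applying, you can sanity-check the manifest without touching the cluster (a quick local check, assuming the YAML above is saved as deployment-with-strategy.yaml, the filename used in the lab below):

# Validate the manifest locally without creating anything
kubectl apply -f deployment-with-strategy.yaml --dry-run=client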


📌 Key Strategy Parameters Explained

| Parameter           | Meaning                            | Your Value | Effect                                                     |
|---------------------|------------------------------------|------------|------------------------------------------------------------|
| maxUnavailable: 1   | Max Pods unavailable during update | 1          | Always keep at least 9/10 Pods serving traffic             |
| maxSurge: 1         | Max extra Pods during update       | 1          | Temporarily run 11 Pods (10 old + 1 new → then scale down) |
| minReadySeconds: 10 | Wait after a Pod is Ready          | 10s        | Ensures the Pod is stable before proceeding                |
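Once the Deployment exists (see the lab below), you can confirm these fields were accepted as written:

# Print the effective rollout strategy and minReadySeconds
kubectl get deployment hello-deploy -o jsonpath='{.spec.strategy}{"\n"}'
kubectl get deployment hello-deploy -o jsonpath='{.spec.minReadySeconds}{"\n"}'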

🎯 Rolling Update Flow (10 replicas):

  1. Start with 10 v1 Pods
  2. Create 1 v2 Pod → total = 11 (maxSurge: 1)
  3. Wait for the v2 Pod to be Ready + 10s (minReadySeconds)
  4. Delete 1 v1 Pod → total = 10 (maxUnavailable: 1 → 9 available during the delete)
  5. Repeat until all 10 are v2

✅ Result: Zero downtime, controlled pace, minimal resource overhead
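The surge and scale-down steps are easiest to see at the ReplicaSet level; run this in a second terminal during an update (a suggested vantage point, not one of the original lab steps):

# Watch the old ReplicaSet shrink while the new one grows
kubectl get rs -l app=hello-world -w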


🧪 k3s Lab: Observe Rolling Update Strategy

🔧 Step 1: Deploy v1

# Apply Deployment
kubectl apply -f deployment-with-strategy.yaml

# Wait for all Pods ready
kubectl get pods -l app=hello-world -w
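If you prefer a blocking check to watching, kubectl wait pauses until the Deployment reports all replicas available (the timeout value here is an arbitrary choice):

# Block until the Deployment is fully available
kubectl wait --for=condition=available deployment/hello-deploy --timeout=180s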

🔧 Step 2: Trigger Update to v2

# Update image to v2
kubectl set image deployment/hello-deploy hello-pod=lovelearnlinux/webserver:v2 \
  --record  # ← Records change-cause automatically (deprecated in newer kubectl versions)

# OR set the change-cause annotation directly (as in the manifest above)
kubectl annotate deployment hello-deploy \
  kubernetes.io/change-cause="image changed to v2" --overwrite
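Whichever method you used, you can verify the Pod template now references v2 before watching the rollout:

# Print the image currently set on the Pod template
kubectl get deployment hello-deploy \
  -o jsonpath='{.spec.template.spec.containers[0].image}{"\n"}'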

🔧 Step 3: Watch the Rolling Update

# Watch Pods during update
kubectl get pods -l app=hello-world -w

# ✅ Expected behavior:
# - The total Pod count fluctuates between 10 and 11
# - At least 9 Pods are always Running and Ready
# - New Pods run image v2, old ones still run v1

# Check rollout status
kubectl rollout status deployment hello-deploy
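To see the v1/v2 mix at a glance mid-rollout, a custom-columns view works well (a convenience listing, not required by the lab):

# List each Pod with its image and phase
kubectl get pods -l app=hello-world \
  -o custom-columns=NAME:.metadata.name,IMAGE:.spec.containers[0].image,STATUS:.status.phase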

🔧 Step 4: Inspect History & Revisions

# View rollout history
kubectl rollout history deployment hello-deploy

# View details of a revision
kubectl rollout history deployment hello-deploy --revision=2

# Rollback if needed
kubectl rollout undo deployment hello-deploy
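A plain undo reverts to the previous revision; to jump to a specific revision from the history output instead:

# Roll back to revision 1 explicitly
kubectl rollout undo deployment hello-deploy --to-revision=1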

🔍 revisionHistoryLimit: 5 means only the last 5 ReplicaSets are kept → saves etcd space.
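You can confirm this retention behavior directly: superseded ReplicaSets stay in the list at 0 replicas until the limit prunes the oldest ones.

# Old ReplicaSets remain (scaled to 0) up to revisionHistoryLimit
kubectl get rs -l app=hello-world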