https://drive.google.com/file/d/1Ev4w574TWSojotAOdazZDyMGkcKmkrPA/view?usp=sharing

πŸ” YAML Breakdown

apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp
  labels:
    app: nginx
spec:
  replicas: 1                    # ← Only 1 replica (critical!)
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: lovelearnlinux/webserver:v1
        volumeMounts:
        - mountPath: "/var/www/html"
          name: webroot
      volumes:
        - name: webroot
          persistentVolumeClaim:
            claimName: myclaim    # ← References existing PVC

🔑 Key Insight:

This Deployment mounts a PVC (myclaim) to persist web content.

✅ Safe only because replicas: 1.

⚠️ Critical Warning:

Deployments + PVCs = Dangerous with replicas > 1!


📌 When Is This Safe?

Scenario                   Safe?      Why
replicas: 1 + RWO PVC      ✅ Yes     Only one Pod uses the PVC
replicas: 1 + RWX PVC      ✅ Yes     RWX backends like NFS support shared access
replicas > 1 + RWO PVC     ❌ No      RWO attaches to a single node; Pods scheduled elsewhere can't start
replicas > 1 + RWX PVC     ⚠️ Risky   All Pods write to the same directory → data collisions

🎯 Your YAML is safe because replicas: 1.
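
💡 Optional: want to see the risk for yourself? The sketch below scales the Deployment past one replica. One caveat: on a stock single-node k3s install (local-path storage), all replicas are pinned to the same node and will still start; the hard failure (Pods stuck in Pending or ContainerCreating) shows up with node-attached RWO storage on multi-node clusters.

# ⚠️ Demo only: probe the RWO risk by scaling up
kubectl scale deployment webapp --replicas=3
kubectl get pods -l app=nginx        # look for Pending / ContainerCreating

# On a multi-node cluster, inspect a stuck Pod's events
kubectl describe pod <stuck-pod-name>

# Scale back before continuing the lab
kubectl scale deployment webapp --replicas=1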


🧪 k3s Lab: Test Deployment with PVC

🔧 Step 1: Ensure PVC Exists

💡 You should already have myclaim from earlier labs:

kubectl get pvc myclaim
# STATUS = Bound
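
💡 If myclaim doesn't exist yet, here is a minimal sketch that matches this Deployment. It assumes k3s's default local-path StorageClass; the 1Gi size is a placeholder, so match whatever your earlier lab used.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes:
    - ReadWriteOnce            # local-path supports RWO only
  storageClassName: local-path
  resources:
    requests:
      storage: 1Gi             # placeholder size; match your earlier lab

# Apply it (filename is up to you)
kubectl apply -f myclaim-pvc.yaml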

🔧 Step 2: Deploy the Application

kubectl apply -f deployment-using-pvc.yaml

# Verify
kubectl get pods -l app=nginx
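
💡 Step 3 curls a Service IP. If you don't already have a Service in front of these Pods from an earlier lab, one quick way to create one (the port assumes the image serves HTTP on 80):

# Create a ClusterIP Service named "webapp" for the Deployment
kubectl expose deployment webapp --port=80

# Grab the CLUSTER-IP to use as <service-ip> below
kubectl get svc webapp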

🔧 Step 3: Write Data to Persistent Storage

# Create a file
kubectl exec webapp-<pod-hash> -- sh -c "echo '<h1>Persistent via Deployment!</h1>' > /var/www/html/index.html"

# Verify
curl http://<service-ip>
# ✅ "Persistent via Deployment!"
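
💡 No Service handy? You can also read the file straight out of the container:

kubectl exec webapp-<pod-hash> -- cat /var/www/html/index.html
# ✅ "<h1>Persistent via Deployment!</h1>"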

🔧 Step 4: Simulate Pod Failure (Data Survives)

# Delete Pod (Deployment recreates it)
kubectl delete pod webapp-<pod-hash>

# Verify new Pod serves same data
curl http://<service-ip>
# ✅ Still "Persistent via Deployment!" → data survived!
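
💡 To confirm it really is a new Pod serving the old data, compare the Pod name (new hash) and check the claim, whose lifecycle is independent of any Pod:

# A fresh Pod name, backed by the same volume
kubectl get pods -l app=nginx

# The claim outlives every Pod restart
kubectl get pvc myclaim
# STATUS = Bound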