https://drive.google.com/file/d/1Ev4w574TWSojotAOdazZDyMGkcKmkrPA/view?usp=sharing
```yaml
# deployment-using-pvc.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp
  labels:
    app: nginx
spec:
  replicas: 1                     # ⚠️ Only 1 replica (critical!)
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: lovelearnlinux/webserver:v1
        volumeMounts:
        - mountPath: "/var/www/html"
          name: webroot
      volumes:
      - name: webroot
        persistentVolumeClaim:
          claimName: myclaim      # ← References existing PVC
```
🔑 Key Insight:
This Deployment mounts a PVC (`myclaim`) to persist web content. ✅ Safe only because `replicas: 1`.

⚠️ Critical Warning:
Deployments + PVCs = dangerous with `replicas > 1`!

- All replicas share the same PVC → data corruption
- PVCs with `ReadWriteOnce` can't be mounted on multiple nodes
| Scenario | Safe? | Why |
|---|---|---|
| `replicas: 1` + RWO PVC | ✅ Yes | Only one Pod uses the PVC |
| `replicas: 1` + RWX PVC | ✅ Yes | NFS supports shared access |
| `replicas > 1` + RWO PVC | ❌ No | Scheduling fails if Pods land on different nodes |
| `replicas > 1` + RWX PVC | ⚠️ Risky | All Pods write to the same directory → data collision |
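Not sure which access mode `myclaim` was created with? A quick check (a sketch using `kubectl`'s jsonpath output; the first command prints a list such as `["ReadWriteOnce"]`):

```bash
# Print the access modes and requested size of the existing PVC
kubectl get pvc myclaim -o jsonpath='{.spec.accessModes}{"\n"}'
kubectl get pvc myclaim -o jsonpath='{.spec.resources.requests.storage}{"\n"}'
```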
🎯 Your YAML is safe because `replicas: 1`.
💡 You should already have `myclaim` from earlier labs:
```bash
kubectl get pvc myclaim
# STATUS = Bound
```
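If `myclaim` doesn't exist yet, a minimal sketch you could adapt is below; the access mode, size, and use of the default storage class are assumptions here, so match whatever your earlier lab actually used (for example, an NFS-backed class if you need `ReadWriteMany`):

```bash
# Only needed if 'myclaim' is missing.
# accessModes and storage size below are assumptions - adjust to your cluster.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
EOF
```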
```bash
kubectl apply -f deployment-using-pvc.yaml

# Verify
kubectl get pods -l app=nginx
```
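To avoid copying the Pod hash by hand in the next steps, one way to capture the generated Pod name (assuming a single Pod matches the `app=nginx` label) is:

```bash
# Store the name of the Deployment's Pod in a shell variable
POD=$(kubectl get pods -l app=nginx -o jsonpath='{.items[0].metadata.name}')
echo "$POD"   # e.g. webapp-<pod-hash>
```

You can then use `$POD` wherever `webapp-<pod-hash>` appears below.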
```bash
# Create a file
kubectl exec webapp-<pod-hash> -- sh -c "echo '<h1>Persistent via Deployment!</h1>' > /var/www/html/index.html"

# Verify
curl http://<service-ip>
# ✅ "Persistent via Deployment!"
```
```bash
# Delete the Pod (the Deployment recreates it)
kubectl delete pod webapp-<pod-hash>

# Verify the new Pod serves the same data
curl http://<service-ip>
# ✅ Still "Persistent via Deployment!" → data survived!
```