# pvc-nfs.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 4Gi
  storageClassName: slow
📌 Key Insight:
This PVC is a storage request that will bind to a compatible PV (like your pvone-nfs).
💡 How binding works:
Kubernetes matches a PVC to a PV based on:
- storageClassName (must match)
- accessModes (PVC modes ⊆ PV modes)
- storage (PV capacity ≥ PVC request)
| Requirement | Your PVC | Your PV (pvone-nfs) | Match? |
|---|---|---|---|
| storageClassName | slow | slow | ✅ Yes |
| accessModes | ReadWriteOnce | ReadWriteOnce | ✅ Yes |
| storage | 4Gi | 5Gi | ✅ Yes (5Gi ≥ 4Gi) |
✅ Result: PVC will bind to pvone-nfs → status = Bound
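If you want to verify these fields on the live PV before creating the PVC, a quick check (assuming pvone-nfs already exists from the previous lab) is:

# Print the PV fields that binding compares against the PVC
kubectl get pv pvone-nfs \
  -o jsonpath='{.spec.storageClassName} {.spec.accessModes[*]} {.spec.capacity.storage}{"\n"}'
# slow ReadWriteOnce 5Gi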
⚠️ Critical Note:
Your PV uses ReadWriteOnce, but NFS supports ReadWriteMany.
➡️ If your app needs shared access, change both the PVC and the PV to ReadWriteMany, as sketched below.
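A minimal sketch of the shared-access variant, assuming the same object names used in this lab. Note that accessModes on an existing PVC typically cannot be edited in place, so you would recreate the PVC (and update or recreate the PV) before binding:

# Sketch: both sides request ReadWriteMany for shared NFS access
# PV (pvone-nfs) spec excerpt:
spec:
  accessModes:
    - ReadWriteMany
# PVC (myclaim) spec excerpt:
spec:
  accessModes:
    - ReadWriteMany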
💡 You should already have pvone-nfs from the previous lab:
kubectl get pv
# NAME CAPACITY ACCESS MODES STATUS CLAIM
# pvone-nfs 5Gi RWO Available <none>
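If pvone-nfs is missing, a minimal PV sketch based on the values used in this lab is shown here; the NFS server address and export path are placeholders, so substitute the ones from your previous lab:

# pv-nfs.yaml (sketch - adjust server/path to your environment)
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pvone-nfs
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  storageClassName: slow
  nfs:
    server: 192.168.1.100   # placeholder NFS server
    path: /srv/nfs/share    # placeholder exported path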
# Apply PVC
kubectl apply -f pvc-nfs.yaml
# Verify binding
kubectl get pvc
# NAME STATUS VOLUME CAPACITY ACCESS MODES
# myclaim Bound pvone-nfs 5Gi RWO
kubectl get pv
# NAME STATUS CLAIM
# pvone-nfs Bound default/myclaim
📌 Key Observation:
- PVC status = Bound
- PV is now claimed by default/myclaim
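To confirm the binding from both sides, kubectl describe shows the bound volume on the PVC and the claim reference on the PV:

kubectl describe pvc myclaim      # look for the Volume: field (pvone-nfs)
kubectl describe pv pvone-nfs     # look for the Claim: field (default/myclaim)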
# pod-using-pvc.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nfs-pod
spec:
  containers:
    - name: web
      image: nginx
      volumeMounts:
        - mountPath: /usr/share/nginx/html
          name: nfs-storage
  volumes:
    - name: nfs-storage
      persistentVolumeClaim:
        claimName: myclaim   # ← Reference the PVC
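To try it out, a short sequence (assuming the file is saved as pod-using-pvc.yaml) applies the Pod and writes a test file through the NFS-backed mount:

kubectl apply -f pod-using-pvc.yaml
kubectl wait --for=condition=Ready pod/nfs-pod

# Write a file via the mount, then read it back
kubectl exec nfs-pod -- sh -c 'echo "hello from NFS" > /usr/share/nginx/html/index.html'
kubectl exec nfs-pod -- cat /usr/share/nginx/html/index.html
# hello from NFS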