```yaml
apiVersion: apps/v1                      # ✅ Modern API (extensions/v1beta1 is DEPRECATED)
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx                         # ✅ Required in apps/v1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - topologyKey: kubernetes.io/hostname
              labelSelector:
                matchLabels:
                  app: nginx             # ✅ Avoid nodes running Pods with this label
      containers:                        # ✅ Plural "containers" (was "container")
        - name: nginx
          image: nginx:1.25-alpine       # ✅ Avoid :latest
          ports:
            - containerPort: 80
```
🔑 Key Fixes:
- `apiVersion: apps/v1` → current standard (since Kubernetes 1.9)
- Added `selector.matchLabels` → required in `apps/v1`
- `containers` (plural) → correct field name
- Specific image tag → avoid `:latest`
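Before applying, you can sanity-check that the corrected manifest is accepted by the API server without creating anything. This uses the standard `--dry-run=server` flag (kubectl ≥ 1.18); the filename matches the one used in the apply step below.

```bash
# Server-side dry run: the API server validates the manifest but persists nothing
kubectl apply -f deployment-with-anti-affinity-fixed.yaml --dry-run=server
```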
| Field | Purpose |
|---|---|
| `podAntiAffinity` | "Don't schedule on nodes that match this rule" |
| `labelSelector.matchLabels` | Target your own Pods (`app: nginx`) |
| `topologyKey: kubernetes.io/hostname` | Scope = same node |
| `requiredDuringScheduling...` | Hard rule → must spread Pods |
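`topologyKey` names a node label, and `kubernetes.io/hostname` is the well-known label the kubelet sets on every node. To see exactly what the scheduler is grouping by, list it:

```bash
# Show each node's kubernetes.io/hostname label (what topologyKey groups by)
kubectl get nodes -L kubernetes.io/hostname
```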
🎯 Your Rule:
"Do NOT schedule two `nginx` Pods on the same node."
💡 Why This Matters:
- If a node fails, only 1 replica is lost (not all 3!)
- Critical for high availability in production
⚠️ Assumption: You have ≥3 worker nodes in your k3s cluster.
(If you have only 2 nodes, reduce `replicas` to 2.)
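If you are not sure how many nodes can actually take these Pods, a quick count helps (note that in k3s the server node is schedulable by default, so it counts too):

```bash
# Count nodes; with the hard anti-affinity rule, each replica needs its own node
kubectl get nodes --no-headers | wc -l
```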
```bash
# Save as deployment-with-anti-affinity-fixed.yaml
kubectl apply -f deployment-with-anti-affinity-fixed.yaml

# Wait for Pods to be ready
kubectl wait --for=condition=Ready pod -l app=nginx --timeout=120s

# Check node distribution
kubectl get pods -l app=nginx -o wide

# ✅ Expected (on a 3-node cluster):
# NAME                     READY   NODE
# nginx-7df8b9b5d4-abc12   1/1     k3s-node1
# nginx-7df8b9b5d4-def34   1/1     k3s-node2
# nginx-7df8b9b5d4-ghi56   1/1     k3s-node3

# ❌ If you see 2 Pods on the same node → anti-affinity failed!
```
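If you would rather not eyeball the NODE column, one way to summarize the spread is to print each Pod's node name and count duplicates (`.spec.nodeName` is a standard Pod field):

```bash
# Count Pods per node; with the hard rule, every node should appear exactly once
kubectl get pods -l app=nginx -o jsonpath='{range .items[*]}{.spec.nodeName}{"\n"}{end}' | sort | uniq -c
```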
💡 If you have only 2 nodes but `replicas: 3`:

```bash
# One Pod will stay Pending!
kubectl get pods -l app=nginx

# Describe to confirm
kubectl describe pod <pending-pod>
# Events: 0/2 nodes available: 2 node(s) didn't match pod anti-affinity rules.
```
🔧 Solution:
- Add more nodes, OR
- Use `preferredDuringScheduling...` for soft anti-affinity (see the sketch below)
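A minimal sketch of the soft variant, assuming the same Deployment as above; only the `affinity` stanza changes, and `weight` (1-100) sets how strongly the scheduler prefers spreading:

```yaml
affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100                      # higher weight = stronger preference to spread
        podAffinityTerm:
          topologyKey: kubernetes.io/hostname
          labelSelector:
            matchLabels:
              app: nginx
```

With this, the scheduler still tries to put each `nginx` Pod on its own node, but it will co-locate two Pods rather than leave one stuck in Pending when only 2 nodes are available.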