https://drive.google.com/file/d/1PkS8KYjSKCmftUz8dvNEqgh8uSNTLKaH/view?usp=sharing
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nnappone
  namespace: learning
  labels:
    app: nnappone
spec:
  containers:
  - name: crackone-app
    image: nginx
    resources:
      requests:
        memory: "300Mi"
      limits:
        memory: "500Mi"
  affinity:
    podAntiAffinity: # ← CHANGED FROM podAffinity!
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - nnweb
        topologyKey: "kubernetes.io/hostname"
```
🔑 Key Fix:
`podAffinity` → `podAntiAffinity`
| Field | Purpose |
|---|---|
| `podAntiAffinity` | "Avoid nodes that match this rule" |
| `labelSelector` | Selects existing Pods to avoid |
| `topologyKey` | Scope of avoidance:<br>• `kubernetes.io/hostname` → same node<br>• `topology.kubernetes.io/zone` → same AZ |
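The two `topologyKey` scopes in the table differ only in that one field. As a sketch (not part of this lesson's manifests), widening the rule from node to zone looks like this — everything except `topologyKey` is unchanged:

```yaml
# Hypothetical variant: avoid scheduling in the same *zone* as any app=nnweb Pod.
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchExpressions:
        - key: app
          operator: In
          values:
          - nnweb
      topologyKey: "topology.kubernetes.io/zone" # zone-wide instead of per-node
```

Note that zone-scoped rules only make sense when nodes carry the `topology.kubernetes.io/zone` label, which cloud providers set automatically but a bare k3s cluster may not.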
🎯 Your Rule:
"Do NOT schedule this Pod on any node that already runs a Pod with app=nnweb."
💡 Use Case:
- Spread replicas across nodes → high availability
- Avoid noisy neighbors → performance isolation
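The "spread replicas" use case is usually expressed on a Deployment, where the anti-affinity rule targets the Deployment's own label so no two replicas land on the same node. A minimal sketch — the name `web-spread` and the replica count are illustrative, not from this lesson:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-spread          # illustrative name
  namespace: learning
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web-spread
  template:
    metadata:
      labels:
        app: web-spread     # the rule below targets this same label
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app: web-spread
            topologyKey: "kubernetes.io/hostname"
      containers:
      - name: web
        image: nginx
```

With a `required` rule and 3 replicas, the cluster needs at least 3 schedulable nodes; any replica that cannot satisfy the rule stays Pending.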
✅ Assumption: You have at least 2 worker nodes in your k3s cluster.
💡 You’ll need pod-for-anti-affinity.yml — let’s assume it looks like this:
```yaml
# pod-for-anti-affinity.yml
apiVersion: v1
kind: Pod
metadata:
  name: nnweb
  namespace: learning
  labels:
    app: nnweb # ← Critical: this label is targeted
spec:
  containers:
  - name: web
    image: nginx
```
Now deploy it:
```bash
# 1. Create namespace
kubectl create namespace learning

# 2. Deploy first Pod
kubectl apply -f pod-for-anti-affinity.yml

# 3. Check where it runs
kubectl get pods -n learning -o wide
# Example: nnweb runs on k3s-node1

# 4. Apply the CORRECTED anti-affinity Pod
kubectl apply -f pod-with-anti-pod-affinity.yml

# 5. Check placement
kubectl get pods -n learning -o wide
# ✅ Expected:
# nnweb    → k3s-node1
# nnappone → k3s-node2 (NOT on the same node!)
```
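If the cluster has only one schedulable node, the hard (`required`) rule leaves nnappone stuck in Pending. A softer alternative is `preferredDuringSchedulingIgnoredDuringExecution`, which adds a `weight` and nests the rule under `podAffinityTerm` — a sketch adapted from the manifest above, not a drop-in replacement you must use:

```yaml
affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 100            # 1-100; higher = stronger preference
      podAffinityTerm:
        labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - nnweb
        topologyKey: "kubernetes.io/hostname"
```

With the soft form, the scheduler prefers a node without `app=nnweb` Pods but will still place nnappone alongside nnweb if no other node is available.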