

✅ The Fix: True Pod Anti-Affinity


apiVersion: v1
kind: Pod
metadata:
  name: nnappone
  namespace: learning
  labels:
    app: nnappone
spec:
  containers:
    - name: crackone-app
      image: nginx
      resources:
        requests:
          memory: "300Mi"
        limits:
          memory: "500Mi"
  affinity:
    podAntiAffinity:  # ← CHANGED FROM podAffinity!
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - nnweb
        topologyKey: "kubernetes.io/hostname"

🔑 Key Fix:

podAffinity → podAntiAffinity


📌 How Pod Anti-Affinity Works

| Field | Purpose |
|-------|---------|
| podAntiAffinity | "Avoid nodes that match this rule" |
| labelSelector | Selects the existing Pods to avoid |
| topologyKey | Scope of avoidance:<br>• kubernetes.io/hostname → same node<br>• topology.kubernetes.io/zone → same AZ |
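The same rule can be softened or widened in scope. As a sketch (not part of the lab), here is a hypothetical variant that uses a preferred (soft) rule at availability-zone scope instead of a hard per-node rule:

```yaml
# Hypothetical variant: soft anti-affinity at zone scope.
# The scheduler tries to honor it but may still co-locate if no
# alternative exists; weight (1-100) ranks competing preferences.
affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 100
      podAffinityTerm:
        labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - nnweb
        topologyKey: "topology.kubernetes.io/zone"
```

Prefer the soft form when spreading is desirable but Pods must still schedule on a small or busy cluster.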

🎯 Your Rule:

"Do NOT schedule this Pod on any node that already runs a Pod with app=nnweb."

💡 Use Case:

High availability: spread replicas (or competing workloads) across nodes so a single node failure cannot take down every copy at once. Anti-affinity is also used to keep resource-hungry Pods away from each other.
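In practice this pattern usually lives inside a Deployment, where each replica avoids nodes already running another replica of the same app. A minimal sketch (the name nnweb-deploy is hypothetical, not part of the lab files):

```yaml
# Hypothetical Deployment: each replica repels other Pods labeled
# app=nnweb, so the 2 replicas land on 2 different nodes.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nnweb-deploy
  namespace: learning
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nnweb
  template:
    metadata:
      labels:
        app: nnweb
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app: nnweb
            topologyKey: "kubernetes.io/hostname"
      containers:
        - name: web
          image: nginx
```

Note the hard rule means a third replica would stay Pending on a 2-node cluster; use the preferred form if replicas must always schedule.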

🧪 k3s Lab: Enforce Pod Anti-Affinity

✅ Assumption: You have at least 2 worker nodes in your k3s cluster.

🔧 Step 1: Deploy the First Pod (nnweb)

💡 You’ll need pod-for-anti-affinity.yml — let’s assume it looks like this:

# pod-for-anti-affinity.yml
apiVersion: v1
kind: Pod
metadata:
  name: nnweb
  namespace: learning
  labels:
    app: nnweb          # ← Critical: this label is targeted
spec:
  containers:
    - name: web
      image: nginx

Now deploy it:

# 1. Create namespace
kubectl create namespace learning

# 2. Deploy first Pod
kubectl apply -f pod-for-anti-affinity.yml

# 3. Check where it runs
kubectl get pods -n learning -o wide
# Example: nnweb runs on k3s-node1

🔧 Step 2: Deploy the Anti-Affinity Pod

# Apply the CORRECTED anti-affinity Pod
kubectl apply -f pod-with-anti-pod-affinity.yml

# Check placement
kubectl get pods -n learning -o wide

# ✅ Expected:
# nnweb      → k3s-node1
# nnappone   → k3s-node2 (NOT on same node!)
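If the cluster has only one schedulable node (e.g. a single-node k3s install), the hard requiredDuringScheduling rule cannot be satisfied and nnappone stays Pending. You can confirm the reason from the Pod's events:

```shell
# Inspect scheduling events (only relevant if the Pod is stuck Pending)
kubectl describe pod nnappone -n learning

# The Events section typically shows a FailedScheduling message similar to:
#   0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules.
```

Adding a second node (or switching to preferredDuringSchedulingIgnoredDuringExecution) resolves the Pending state.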