https://drive.google.com/file/d/1uLx1C0J4Kul69fUwDoBwNXvcFP7eQU2L/view?usp=sharing

πŸ” YAML Breakdown

apiVersion: v1
kind: Pod
metadata:
  name: nnappone
  namespace: learning
  labels:
    app: nnappone
spec:
  containers:
    - name: crackone-app
      image: nginx
      resources:
        requests:
          memory: "300Mi"
        limits:
          memory: "500Mi"
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: size
            operator: NotIn        # ← EXCLUDE these values
            values:
            - small
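
Save the manifest above as pod-with-node-affinity-cannot.yml, the filename used in Step 2 of the lab below. If you like, you can validate it first without creating anything (a client-side dry run only checks the YAML locally):

# Validate the manifest locally without creating the Pod
kubectl apply -f pod-with-node-affinity-cannot.yml --dry-run=client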

🔑 Key Rule:

"Do NOT schedule this Pod on any node where label size is small."

✅ Important:

This does NOT require the size label to exist!
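
As a quick sanity check (optional, not part of the manifest above), a set-based label selector with notin behaves the same way: it returns nodes whose size is anything other than small, including nodes with no size label at all:

# Nodes eligible under the NotIn rule, including nodes without a size label
kubectl get nodes -l 'size notin (small)'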


📌 How NotIn Works

| Node Label    | Matches NotIn [small]? | Why?                                |
|---------------|------------------------|-------------------------------------|
| size=small    | ❌ No                  | Explicitly excluded                 |
| size=large    | ✅ Yes                 | Not in exclusion list               |
| size=medium   | ✅ Yes                 | Not in exclusion list               |
| No size label | ✅ Yes                 | Key doesn't exist → can't be small  |

💡 This is different from In: with In, the size key must exist on the node and its value must be one of the listed values, so a node with no size label would not match. With NotIn, a missing key still counts as a match.
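
For contrast, here is a minimal sketch of the same affinity block rewritten with In (the values large and medium are illustrative); now the size key must exist and only nodes labeled with one of those values qualify:

affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: size
          operator: In        # key must exist AND its value must be listed
          values:
          - large
          - medium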


🧪 k3s Lab: Exclude small Nodes

✅ Assumption: You have 2+ worker nodes in your k3s cluster.

🔧 Step 1: Label Nodes (One as small, One as large)

# 1. List nodes
kubectl get nodes

# 2. Label nodes
kubectl label node k3s-node1 size=small
kubectl label node k3s-node2 size=large

# 3. Verify
kubectl get nodes --show-labels | grep size
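
A possibly easier way to verify is the -L (--label-columns) flag, which prints the size label as its own column instead of grepping the full label list:

# Show every node with its size label as a separate column
kubectl get nodes -L size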

🔧 Step 2: Deploy the Pod

# 1. Create namespace
kubectl create namespace learning

# 2. Apply Pod
kubectl apply -f pod-with-node-affinity-cannot.yml

# 3. Check placement
kubectl get pods -n learning -o wide

# ✅ Expected: Runs on k3s-node2 (size=large)
# ❌ Will NOT run on k3s-node1 (size=small)
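
To confirm the placement without scanning the wide output, you can print just the node name the scheduler picked for the nnappone Pod:

# Print only the node the Pod landed on
kubectl get pod nnappone -n learning -o jsonpath='{.spec.nodeName}'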

🔧 Step 3: Test the Edge Case Where All Nodes Are small

💡 What if every node is labeled size=small?

# 1. Relabel k3s-node2 as "small"
kubectl label node k3s-node2 size=small --overwrite

# 2. Deploy a new Pod
kubectl run test-pod -n learning --image=nginx --restart=Never \
  --overrides='
{
  "spec": {
    "affinity": {
      "nodeAffinity": {
        "requiredDuringSchedulingIgnoredDuringExecution": {
          "nodeSelectorTerms": [
            {
              "matchExpressions": [
                {
                  "key": "size",
                  "operator": "NotIn",
                  "values": ["small"]
                }
              ]
            }
          ]
        }
      }
    }
  }
}'

# 3. Check status
kubectl get pods -n learning
# → STATUS = Pending

# 4. Describe to see why
kubectl describe pod test-pod -n learning
# Events: 0/2 nodes are available: 2 node(s) didn't match Pod's node affinity rules.

πŸ” Key Insight:

If all nodes are excluded, the Pod stays Pending indefinitely, just like any required affinity rule with no matching node.
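
Because the rule is required only during scheduling, the Pending Pod is not lost: once any node stops matching the exclusion, the scheduler should place it on its next attempt. One way to verify this, followed by lab cleanup (node and Pod names as used above):

# 1. Make k3s-node2 eligible again
kubectl label node k3s-node2 size=large --overwrite

# 2. Watch the Pending Pod get scheduled (Ctrl+C to stop watching)
kubectl get pods -n learning -w

# 3. Clean up the lab
kubectl delete pod test-pod nnappone -n learning
kubectl label node k3s-node1 k3s-node2 size-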