https://drive.google.com/file/d/1903si3LmOHSLEYs8V5lBTi4OhmqtRIJn/view?usp=sharing

🔍 YAML Breakdown

apiVersion: v1
kind: Pod
metadata:
  name: nnappone
  namespace: learning
  labels:
    app: nnappone
spec:
  containers:
    - name: crackone-app
      image: nginx
      resources:
        requests:
          memory: "300Mi"
        limits:
          memory: "500Mi"
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: size
                operator: In
                values:
                  - large
                  - medium  # Pod can run on either "large" or "medium" nodes

🔑 Key Insight: Within a single matchExpressions entry, values: [large, medium] means size IN (large, medium), i.e. a logical OR. 🎯 Rule: "Schedule this Pod only on nodes whose size label is large OR medium."
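For contrast, a hedged sketch (not part of the lab manifest): if size were only a scheduling preference rather than a hard requirement, the same rule could use the soft form, preferredDuringSchedulingIgnoredDuringExecution. The weight value here is illustrative:

```yaml
# Hypothetical soft variant: the scheduler prefers, but does not require, these nodes
affinity:
  nodeAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 80          # illustrative; weights range from 1 to 100
        preference:
          matchExpressions:
            - key: size
              operator: In
              values: [large, medium]
```

With the soft form, a Pod still schedules somewhere even when no node carries the label; the hard form used in this lab leaves it Pending instead.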


📌 Affinity Logic Deep Dive

Kubernetes affinity uses a two-level logic structure:

| Level                            | Logic                   | Purpose                                  |
| -------------------------------- | ----------------------- | ---------------------------------------- |
| nodeSelectorTerms                | OR between terms        | "Match any of these sets of conditions"  |
| matchExpressions (inside a term) | AND between expressions | "All these conditions must be true"      |

✅ In your YAML:

matchExpressions:
  - key: size
    operator: In
    values: [large]
  - key: disk
    operator: In
    values: [ssd]
# → Node must have BOTH labels (size=large AND disk=ssd)
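To see the OR level in action, a sketch (the disk label is illustrative, not part of the lab): listing two separate nodeSelectorTerms entries makes either term sufficient on its own:

```yaml
nodeSelectorTerms:
  - matchExpressions:      # term 1
      - key: size
        operator: In
        values: [large]
  - matchExpressions:      # term 2
      - key: disk
        operator: In
        values: [ssd]
# → Node needs size=large OR disk=ssd
# Compare: the same two expressions inside ONE term would mean AND
```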


🧪 k3s Lab: Deploy Pod with Multiple Affinity Values

✅ Assumption: You have at least 2 worker nodes in your k3s cluster.

🔧 Step 1: Label Your k3s Nodes

# 1. List nodes
kubectl get nodes
# Example:
# NAME         STATUS   ROLES    AGE   VERSION
# k3s-master   Ready    master   2d    v1.28.5+k3s1
# k3s-node1    Ready    <none>   2d    v1.28.5+k3s1
# k3s-node2    Ready    <none>   2d    v1.28.5+k3s1
# 2. Label nodes with different sizes
kubectl label node k3s-node1 size=small
kubectl label node k3s-node2 size=large
# 3. Verify
kubectl get nodes --show-labels | grep size
# k3s-node1 ... size=small
# k3s-node2 ... size=large
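Before deploying, you can preview which nodes would satisfy the rule: the In operator corresponds directly to a set-based label selector (node names are the example ones from above):

```shell
kubectl get nodes -l 'size in (large, medium)'
# With the labels above, only k3s-node2 should be listed
```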

🔧 Step 2: Deploy the Pod

# 1. Create namespace
kubectl create namespace learning
# 2. Apply Pod
kubectl apply -f pod-with-node-affinity-multiple.yml
# 3. Check placement
kubectl get pods -n learning -o wide
# ✅ Expected: runs on k3s-node2 (size=large matches)
# ❌ Will NOT run on k3s-node1 (size=small is not in [large, medium])

🔧 Step 3: Test Failure (No Matching Node)

💡 Temporarily remove the large label to simulate no match.

# 1. Remove the label from k3s-node2
kubectl label node k3s-node2 size-
# 2. Delete the running Pod and re-create it
#    (affinity is only enforced at scheduling time: "IgnoredDuringExecution",
#    so an already-running Pod would NOT be evicted by the label change)
kubectl delete pod nnappone -n learning
kubectl apply -f pod-with-node-affinity-multiple.yml
# 3. Check status
kubectl get pods -n learning
# → STATUS = Pending
# 4. Describe to confirm
kubectl describe pod nnappone -n learning
# Events: ... node(s) didn't match Pod's node affinity/selector
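Because the Pod stays Pending rather than failing, the scheduler keeps retrying it. Restoring the label should let it run (a sketch):

```shell
# Re-add the label; the pending Pod should be scheduled shortly afterwards
kubectl label node k3s-node2 size=large
kubectl get pods -n learning -w   # watch until STATUS becomes Running, then Ctrl+C
```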

🔧 Step 4: Clean Up
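The steps above leave behind a Pod, a namespace, and node labels; a cleanup sketch (adjust node names to your cluster):

```shell
# Delete the test Pod and its namespace
kubectl delete pod nnappone -n learning --ignore-not-found
kubectl delete namespace learning
# Remove the size labels added in Step 1 (a trailing "-" deletes a label if present)
kubectl label node k3s-node1 size-
kubectl label node k3s-node2 size-
```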