https://drive.google.com/file/d/1kvdiur755gq4MtlglNvrHQ6krRCB2K7l/view?usp=sharing

🔍 YAML Breakdown

apiVersion: v1
kind: Pod
metadata:
  name: nnappone
  namespace: learning
  labels:
    app: nnappone
spec:
  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1                          # ← Importance (1-100)
        preference:
          matchExpressions:
          - key: size
            operator: In
            values:
            - small
  containers:
    - name: cracokone-app
      image: nginx
      resources:
        requests:
          memory: "300Mi"
        limits:
          memory: "500Mi"

🔑 Key Concepts:

- preferredDuringSchedulingIgnoredDuringExecution is a soft rule: the scheduler tries to honor it but never blocks scheduling because of it.
- weight (1-100) controls how strongly this preference counts when the scheduler scores candidate nodes.
- "IgnoredDuringExecution" means an already-running Pod stays where it is even if the node's labels change later.

🎯 Behavior:

The scheduler prefers a node labeled size=small. If no such node exists (or it has no free resources), the Pod is still scheduled on another node.

📌 How Preferred Affinity Works

Scenario                                    | Outcome
✅ size=small node exists + has resources   | Pod scheduled there (preferred)
❌ No size=small node                       | Pod scheduled elsewhere (no failure!)
🟡 Multiple preferred rules                 | Scheduler scores nodes → picks highest total weight

💡 Weight Range: 1 to 100
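
To see how multiple preferred rules combine, here is a minimal sketch of a Pod with two weighted preferences (not part of the lab below; the pod name and the disktype=ssd label are made up for illustration). A node matching both terms gets the combined weight (80 + 20), so it outranks a node that matches only one.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: multi-pref-demo              # hypothetical name, for illustration only
spec:
  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 80                   # strong preference: size=small (label from this lab)
        preference:
          matchExpressions:
          - key: size
            operator: In
            values: ["small"]
      - weight: 20                   # weaker preference: disktype=ssd (hypothetical label)
        preference:
          matchExpressions:
          - key: disktype
            operator: In
            values: ["ssd"]
  containers:
  - name: demo
    image: nginx
EOF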


🧪 k3s Lab: Preferred Affinity in Action

✅ Assumption: You have at least 2 worker nodes in your k3s cluster.

🔧 Step 1: Label One Node as small, Leave Others Unlabeled

# 1. List your k3s nodes
kubectl get nodes

# Example:
# NAME         STATUS   ROLES    AGE   VERSION
# k3s-master   Ready    master   2d    v1.28.5+k3s1
# k3s-node1    Ready    <none>   2d    v1.28.5+k3s1
# k3s-node2    Ready    <none>   2d    v1.28.5+k3s1

# 2. Label ONLY ONE node as "small"
kubectl label node k3s-node1 size=small

# 3. Verify
kubectl get nodes --show-labels | grep -E "k3s-node1|k3s-node2"
# k3s-node1 ... size=small
# k3s-node2 ... (no size label)
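
As an alternative check, kubectl's -L (--label-columns) flag can show the size label as its own column; with the node names assumed above the output looks roughly like this:

# Optional: show the size label for every node in one view
kubectl get nodes -L size
# NAME         STATUS   ROLES    AGE   VERSION        SIZE
# k3s-node1    Ready    <none>   2d    v1.28.5+k3s1   small
# k3s-node2    Ready    <none>   2d    v1.28.5+k3s1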

🔧 Step 2: Deploy the Pod

# 1. Create namespace
kubectl create namespace learning

# 2. Apply Pod
kubectl apply -f pod-with-preferred-node-affinity.yml

# 3. Check where it runs
kubectl get pods -n learning -o wide

# ✅ Expected: Runs on k3s-node1 (the "small" node)
# But if k3s-node1 is full, it might run on k3s-node2, and that's OK!
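
To confirm exactly where the scheduler put the Pod, you can also read the node name straight from the Pod spec or look at the scheduling event (standard kubectl commands, using the pod and namespace names from this lab):

# Print just the node the Pod landed on
kubectl get pod nnappone -n learning -o jsonpath='{.spec.nodeName}{"\n"}'

# The Events section shows the scheduler's assignment,
# e.g. "Successfully assigned learning/nnappone to k3s-node1"
kubectl describe pod nnappone -n learning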

🔧 Step 3: Test Fallback Behavior

💡 Force fallback: Temporarily taint the small node so it's unschedulable.

# 1. Taint the "small" node
kubectl taint node k3s-node1 test=unschedulable:NoSchedule

# 2. Deploy a SECOND Pod with the same affinity preference
kubectl run nnappone2 -n learning --image=lovelearnlinux/webserver:v1 --restart=Never \
  --overrides='
{
  "apiVersion": "v1",
  "spec": {
    "affinity": {
      "nodeAffinity": {
        "preferredDuringSchedulingIgnoredDuringExecution": [
          {
            "weight": 1,
            "preference": {
              "matchExpressions": [
                { "key": "size", "operator": "In", "values": ["small"] }
              ]
            }
          }
        ]
      }
    }
  }
}'

# 3. Check placement
kubectl get pods -n learning -o wide

# ✅ Expected: nnappone2 runs on k3s-node2 (fallback!)
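
You can also double-check that the taint is what forced the fallback (standard kubectl commands, using the node and pod names from this lab):

# Confirm the taint is present on the "small" node
kubectl describe node k3s-node1 | grep -i taint

# Confirm the second Pod landed elsewhere
kubectl get pod nnappone2 -n learning -o wide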

🔧 Step 4: Clean Up