https://drive.google.com/file/d/1kvdiur755gq4MtlglNvrHQ6krRCB2K7l/view?usp=sharing
apiVersion: v1
kind: Pod
metadata:
  name: nnappone
  namespace: learning
  labels:
    app: nnappone
spec:
  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1 # Importance of this rule (1-100)
        preference:
          matchExpressions:
          - key: size
            operator: In
            values:
            - small
  containers:
  - name: cracokone-app
    image: nginx
    resources:
      requests:
        memory: "300Mi"
      limits:
        memory: "500Mi"
🔑 Key Concepts:
- `preferredDuringSchedulingIgnoredDuringExecution` → soft rule: the scheduler tries to honor it, but won't block the Pod (contrast with the hard `required...` variant sketched below)
- `weight: 1` → how much to favor this rule (1 = low, 100 = high)
- `preference` → which node attributes are preferred
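For contrast, here is a minimal sketch of the hard variant, `requiredDuringSchedulingIgnoredDuringExecution`, using the same `size=small` label as above. With the hard rule the Pod stays Pending whenever no matching node exists:

```yaml
# Hard rule: the Pod will NOT be scheduled unless some node carries size=small
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: size
            operator: In
            values:
            - small
```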
🎯 Behavior:
- If any `size=small` node is available → the Pod likely runs there
- If no `size=small` node exists → the Pod still runs on another node
- Multiple preferences? The scheduler scores nodes and picks the best
| Scenario | Outcome |
|---|---|
| ✅ `size=small` node exists + has resources | Pod scheduled there (preferred) |
| ❌ No `size=small` node | Pod scheduled elsewhere (no failure!) |
| 💡 Multiple preferred rules | Scheduler scores nodes → picks highest total weight |
💡 Weight Range: 1 to 100
- Use higher weights for stronger preferences (e.g., `weight: 80` for SSD nodes); see the sketch below for how multiple weighted preferences combine.
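A sketch of how multiple weighted preferences combine; the `disktype=ssd` label here is hypothetical and only illustrates a stronger preference. The scheduler adds up the weights of the terms each node matches and favors the highest total:

```yaml
# Sketch: a node matching both labels scores 80 + 20 = 100 and wins;
# an SSD-only node scores 80, a small-only node scores 20.
affinity:
  nodeAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 80          # strong preference (hypothetical disktype=ssd label)
      preference:
        matchExpressions:
        - key: disktype
          operator: In
          values:
          - ssd
    - weight: 20          # weaker preference for the size=small label used above
      preference:
        matchExpressions:
        - key: size
          operator: In
          values:
          - small
```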
⚠️ Assumption: You have at least 2 worker nodes in your k3s cluster.
Label One Node as `small`, Leave the Others Unlabeled
# 1. List your k3s nodes
kubectl get nodes
# Example:
# NAME STATUS ROLES AGE VERSION
# k3s-master Ready master 2d v1.28.5+k3s1
# k3s-node1 Ready <none> 2d v1.28.5+k3s1
# k3s-node2 Ready <none> 2d v1.28.5+k3s1
# 2. Label ONLY ONE node as "small"
kubectl label node k3s-node1 size=small
# 3. Verify
kubectl get nodes --show-labels | grep -E "k3s-node1|k3s-node2"
# k3s-node1 ... size=small
# k3s-node2 ... (no size label)
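Optionally, `kubectl get nodes -L size` shows the label as its own column, which makes it easy to spot the labeled node (output will look roughly like this):

```bash
# Show the "size" label as an extra column for every node
kubectl get nodes -L size
# NAME         STATUS   ROLES    AGE   VERSION        SIZE
# k3s-node1    Ready    <none>   2d    v1.28.5+k3s1   small
# k3s-node2    Ready    <none>   2d    v1.28.5+k3s1
```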
# 1. Create namespace
kubectl create namespace learning
# 2. Apply Pod
kubectl apply -f pod-with-preferred-node-affinity.yml
# 3. Check where it runs
kubectl get pods -n learning -o wide
# ✅ Expected: Runs on k3s-node1 (the "small" node)
# But if k3s-node1 is full, it might run on k3s-node2 β that's OK!
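Two standard ways to double-check the placement (plain kubectl; the Pod name matches the manifest above):

```bash
# Print only the node name the scheduler picked
kubectl get pod nnappone -n learning -o jsonpath='{.spec.nodeName}{"\n"}'

# The Events section records the "Successfully assigned ..." scheduling decision
kubectl describe pod nnappone -n learning | grep -A 5 Events
```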
💡 Force fallback: Temporarily taint the small node so it's unschedulable.
# 1. Taint the "small" node
kubectl taint node k3s-node1 test=unschedulable:NoSchedule
# 2. Deploy a SECOND Pod (same spec)
kubectl run nnappone2 -n learning --image=lovelearnlinux/webserver:v1 --restart=Never \
  --overrides='
{
  "spec": {
    "affinity": {
      "nodeAffinity": {
        "preferredDuringSchedulingIgnoredDuringExecution": [
          {
            "weight": 1,
            "preference": {
              "matchExpressions": [
                { "key": "size", "operator": "In", "values": ["small"] }
              ]
            }
          }
        ]
      }
    }
  }
}'
# 3. Check placement
kubectl get pods -n learning -o wide
# ✅ Expected: nnappone2 runs on k3s-node2 (fallback!)
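Once the fallback is confirmed, clean up so k3s-node1 becomes schedulable again (the trailing `-` removes a taint or label; deleting the demo Pods is optional):

```bash
# Remove the temporary taint
kubectl taint node k3s-node1 test=unschedulable:NoSchedule-

# Optional cleanup: drop the test label and the demo Pods
kubectl label node k3s-node1 size-
kubectl delete pod nnappone nnappone2 -n learning
```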