https://drive.google.com/file/d/1903si3LmOHSLEYs8V5lBTi4OhmqtRIJn/view?usp=sharing
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nnappone
  namespace: learning
  labels:
    app: nnappone
spec:
  containers:
  - name: crackone-app
    image: nginx
    resources:
      requests:
        memory: "300Mi"
      limits:
        memory: "500Mi"
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: size
            operator: In
            values:
            - large
            - medium   # Pod can run on either "large" or "medium" nodes
```
🔑 Key Insight: Within a single `matchExpressions` entry, `values: [large, medium]` means `size IN (large, medium)` → logical OR.

🎯 Rule: "Schedule this Pod only on nodes where the label `size` is `large` OR `medium`."
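The `In` operator is just set membership over the node's label value. A minimal sketch (illustrative only, not the actual scheduler code):

```python
# Hypothetical helper: "operator: In" checks whether the node's value
# for a label key is one of the listed values -- an OR across values.
def matches_in(node_labels: dict, key: str, values: list) -> bool:
    """True if the node carries `key` and its value is in `values`."""
    return node_labels.get(key) in values

print(matches_in({"size": "large"}, "size", ["large", "medium"]))   # True
print(matches_in({"size": "small"}, "size", ["large", "medium"]))   # False
print(matches_in({}, "size", ["large", "medium"]))                  # False (label absent)
```

A node with no `size` label at all also fails the test, which is why unlabeled nodes are skipped by the scheduler.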
Kubernetes affinity uses a two-level logic structure:
| Level | Logic | Purpose |
|---|---|---|
| `nodeSelectorTerms` | OR between terms | "Match any of these sets of conditions" |
| `matchExpressions` (inside a term) | AND between expressions | "All these conditions must be true" |
✅ In your YAML:
- Only 1 `nodeSelectorTerm`
- Inside it: 1 `matchExpression` with 2 values → `size IN (large, medium)`

💡 To express AND, add more `matchExpressions` in the same term:

```yaml
matchExpressions:
- key: size
  operator: In
  values: [large]
- key: disk
  operator: In
  values: [ssd]   # → Node must have BOTH labels
```
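The two-level rule can be sketched in a few lines. This is a simplified model (assuming only the `In` operator), not the real scheduler implementation:

```python
# OR across nodeSelectorTerms, AND across matchExpressions within one term.
def term_matches(node_labels: dict, match_expressions: list) -> bool:
    # AND: every expression in this term must hold (only "In" modeled here).
    return all(node_labels.get(e["key"]) in e["values"]
               for e in match_expressions)

def node_matches(node_labels: dict, node_selector_terms: list) -> bool:
    # OR: any single term matching is enough.
    return any(term_matches(node_labels, t["matchExpressions"])
               for t in node_selector_terms)

# One term with two expressions -> both labels required (AND).
and_terms = [{"matchExpressions": [
    {"key": "size", "operator": "In", "values": ["large"]},
    {"key": "disk", "operator": "In", "values": ["ssd"]},
]}]
print(node_matches({"size": "large", "disk": "ssd"}, and_terms))  # True
print(node_matches({"size": "large"}, and_terms))                 # False (disk missing)

# Two terms -> either one suffices (OR).
or_terms = [
    {"matchExpressions": [{"key": "size", "operator": "In", "values": ["large"]}]},
    {"matchExpressions": [{"key": "size", "operator": "In", "values": ["medium"]}]},
]
print(node_matches({"size": "medium"}, or_terms))                 # True
```

Splitting expressions across two terms turns the AND into an OR, which is the most common source of confusion with this API.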
✅ Assumption: You have at least 2 worker nodes in your k3s cluster.
```bash
# 1. List nodes
kubectl get nodes
# Example:
# NAME         STATUS   ROLES    AGE   VERSION
# k3s-master   Ready    master   2d    v1.28.5+k3s1
# k3s-node1    Ready    <none>   2d    v1.28.5+k3s1
# k3s-node2    Ready    <none>   2d    v1.28.5+k3s1

# 2. Label nodes with different sizes
kubectl label node k3s-node1 size=small
kubectl label node k3s-node2 size=large

# 3. Verify
kubectl get nodes --show-labels | grep size
# k3s-node1   ...   size=small
# k3s-node2   ...   size=large
```
```bash
# 1. Create namespace
kubectl create namespace learning

# 2. Apply Pod
kubectl apply -f pod-with-node-affinity-multiple.yml

# 3. Check placement
kubectl get pods -n learning -o wide
# ✅ Expected: Runs on k3s-node2 (size=large)
# ❌ Will NOT run on k3s-node1 (size=small → not in [large, medium])
```
💡 Temporarily remove the `size` label from k3s-node2 to simulate no matching node.
```bash
# 1. Remove label from k3s-node2 (trailing "-" deletes the label)
kubectl label node k3s-node2 size-

# 2. Delete and re-create the Pod. Required affinity is
#    "IgnoredDuringExecution", so an already-running Pod is NOT evicted
#    when the label changes -- it must go through scheduling again.
kubectl delete pod nnappone -n learning --ignore-not-found
kubectl apply -f pod-with-node-affinity-multiple.yml

# 3. Check status
kubectl get pods -n learning
# Expected: STATUS = Pending

# 4. Describe to confirm
kubectl describe pod nnappone -n learning
# Events: 0/2 nodes are available: 2 node(s) didn't match Pod's node affinity/selector.
```
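The Pending state falls out of the same filtering logic: required node affinity narrows the candidate node list, and an empty list means the Pod has nowhere to go. A small self-contained sketch (node names and labels are illustrative, mirroring the experiment above):

```python
# After "kubectl label node k3s-node2 size-", no node carries an
# acceptable size label, so the feasible-node set is empty.
nodes = {
    "k3s-node1": {"size": "small"},  # wrong value
    "k3s-node2": {},                 # label removed
}

def feasible(nodes: dict, key: str, allowed: list) -> list:
    """Nodes whose label `key` has a value in `allowed`."""
    return [name for name, labels in nodes.items()
            if labels.get(key) in allowed]

print(feasible(nodes, "size", ["large", "medium"]))  # [] -> Pod stays Pending
```

Re-adding the label (`kubectl label node k3s-node2 size=large`) restores a non-empty feasible set, and the Pending Pod is scheduled on the next scheduler pass.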