📘 Prerequisite: Chapter 01 (Pods, resources, QoS), which you've already mastered!
Start with the most straightforward way to control where a Pod runs.
pod-for-specific-node.yml
→ Schedule a Pod on a specific node by name using spec.nodeName.
→ Hard assignment (bypasses the scheduler). Rarely used in production, but good for learning.
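A minimal sketch of what such a manifest might contain (the node name worker-1 is illustrative; substitute one from `kubectl get nodes`):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-on-node
spec:
  nodeName: worker-1      # hypothetical node name; the scheduler is bypassed entirely
  containers:
    - name: nginx
      image: nginx
```

Because the scheduler is skipped, resource checks and affinity rules are not evaluated; if the named node does not exist, the Pod simply never runs.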
pod-for-specific-node-selector.yml
→ Use nodeSelector to schedule on nodes with matching labels.
→ Simple, widely used, great for zonal placement (e.g., disktype: ssd).
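A sketch of a nodeSelector manifest, assuming a disktype: ssd label as in the example above:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-ssd
spec:
  nodeSelector:
    disktype: ssd       # node must carry exactly this label
  containers:
    - name: nginx
      image: nginx
```

Label a node first with `kubectl label node <node-name> disktype=ssd`; otherwise the Pod stays Pending. nodeSelector supports only exact-match labels, which is what node affinity improves on.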
Modern, powerful replacement for nodeSelector that supports both required and preferred rules.
pod-with-required-node-affinity.yml
→ Must run on nodes matching the affinity rules (requiredDuringScheduling...).
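A sketch of required node affinity, reusing the disktype: ssd label for continuity with the nodeSelector example:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-required-affinity
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: disktype
                operator: In        # other operators: NotIn, Exists, DoesNotExist, Gt, Lt
                values: ["ssd"]
  containers:
    - name: nginx
      image: nginx
```

"IgnoredDuringExecution" means an already-running Pod is not evicted if the node's labels change later.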
pod-with-preferred-node-affinity.yml
→ Prefer certain nodes, but run elsewhere if needed (preferredDuringScheduling...).
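A sketch of the preferred variant; the weight (1-100) sets how strongly the scheduler favors matching nodes:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-preferred-affinity
spec:
  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 80              # higher weight = stronger preference
          preference:
            matchExpressions:
              - key: disktype
                operator: In
                values: ["ssd"]
  containers:
    - name: nginx
      image: nginx
```

If no node matches, the Pod still schedules somewhere, unlike the required form.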
pod-with-node-affinity-multiple.yml
→ Combine multiple affinity rules (e.g., match zone AND GPU).
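A sketch of combining rules; the zone and GPU label values here are assumptions. Note the semantics: expressions inside one matchExpressions list are ANDed, while separate nodeSelectorTerms entries are ORed:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-zone-gpu
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:          # both expressions must match (AND)
              - key: topology.kubernetes.io/zone
                operator: In
                values: ["us-east-1a"]   # hypothetical zone
              - key: gpu
                operator: In
                values: ["true"]         # hypothetical GPU label
  containers:
    - name: nginx
      image: nginx
```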
pod-with-node-affinity-cannot.yml
→ Example where no node matches, so the Pod stays Pending (great for troubleshooting).
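One way such a manifest might provoke the Pending state is to require a label value no node carries (the value here is deliberately unmatchable):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-cannot-schedule
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: disktype
                operator: In
                values: ["does-not-exist"]   # no node has this value, so the Pod stays Pending
  containers:
    - name: nginx
      image: nginx
```

`kubectl describe pod nginx-cannot-schedule` then shows a FailedScheduling event explaining which predicate failed.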
💡 Key Concept:
required = hard constraint (like nodeSelector)
preferred = soft constraint (the scheduler tries, but won't block scheduling)
Control placement based on other Pods; critical for HA and isolation.
pod-with-anti-pod-affinity.yml
→ Avoid running on the same node/zone as matching Pods (e.g., spread replicas).
pod-for-anti-affinity.yml
→ Likely a companion Pod used to demonstrate anti-affinity (e.g., deploy two Pods that must be separated).
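A sketch of how the pair above might fit together; the names and the app: web label are illustrative. The first Pod carries a label, the second refuses to share a node with anything matching that label:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-a
  labels:
    app: web          # label the anti-affinity rule below selects on
spec:
  containers:
    - name: nginx
      image: nginx
---
apiVersion: v1
kind: Pod
metadata:
  name: web-b
spec:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchLabels:
              app: web
          topologyKey: kubernetes.io/hostname   # "same node" granularity; use a zone key to spread across zones
  containers:
    - name: nginx
      image: nginx
```

On a single-node cluster, web-b stays Pending, which is itself a useful demonstration of the hard constraint.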
🎯 Use Cases:
- Anti-affinity: Spread app replicas across nodes/zones → high availability
- Affinity: Co-locate cache + app → low latency