https://drive.google.com/file/d/10viXBvrdtGSne5rKc6jwRsoa30IcNgj5/view?usp=sharing

🔍 YAML Breakdown

apiVersion: v1
kind: LimitRange
metadata:
  name: memory-limit-range
  namespace: learning
spec:
  limits:
  - default:              # ← Memory limit if not specified
      memory: 500Mi       # = 500 Mebibytes
    defaultRequest:       # ← Memory request if not specified
      memory: 250Mi       # = 250 Mebibytes
    type: Container

🔑 Key Insight:

This LimitRange only manages memory; CPU is not constrained (Pods can use any CPU unless restricted elsewhere).
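If you wanted CPU defaults as well, the same LimitRange could carry them. A sketch (the name and the cpu values here are illustrative, not from the lab):

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: memory-cpu-limit-range   # hypothetical name
  namespace: learning
spec:
  limits:
  - default:
      memory: 500Mi
      cpu: 500m          # illustrative default CPU limit
    defaultRequest:
      memory: 250Mi
      cpu: 250m          # illustrative default CPU request
    type: Container
```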

💡 Why memory-only?

Memory is an incompressible resource: a container that exceeds its memory limit is OOMKilled, while a container that exceeds a CPU limit is merely throttled. Defaulting memory first protects the node from the failure mode that actually kills workloads; CPU policy can be layered on later once usage patterns are known.

📌 How Memory Units Work

Unit    Meaning          Notes
250Mi   250 Mebibytes    = 250 × 1024² bytes (binary)
250M    250 Megabytes    = 250 × 1000² bytes (decimal)

✅ Kubernetes uses Mi/Gi: always prefer binary units.

✅ Best Practice:

Use Mi (Mebibytes) and Gi (Gibibytes) for clarity and consistency.
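The difference matters in practice; a quick shell check of the two interpretations of "250":

```shell
# 250Mi (binary) vs 250M (decimal), in bytes
mi_bytes=$((250 * 1024 * 1024))
m_bytes=$((250 * 1000 * 1000))
echo "250Mi = ${mi_bytes} bytes"              # 262144000
echo "250M  = ${m_bytes} bytes"               # 250000000
echo "delta = $((mi_bytes - m_bytes)) bytes"  # 12144000
```

That ~12Mi gap per container is easy to miss if you mix the two unit families in one namespace.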


🧪 k3s Lab: Test Memory-Only LimitRange

🔧 Step 1: Create Namespace & Apply LimitRange

# Create namespace
kubectl create namespace learning

# Apply LimitRange
kubectl apply -f namespace-memory-limitrange.yml

# Verify
kubectl describe limitrange memory-limit-range -n learning
# Type       Resource  Min  Max  Default Request  Default Limit
# Container  memory    -    -    250Mi             500Mi

🔧 Step 2: Deploy Pod with NO Memory Resources

# pod-no-memory.yaml
apiVersion: v1
kind: Pod
metadata:
  name: no-memory-pod
  namespace: learning
spec:
  containers:
  - name: nginx
    image: nginx

kubectl apply -f pod-no-memory.yaml

# Check applied memory resources
kubectl get pod no-memory-pod -n learning -o jsonpath='{.spec.containers[0].resources}'
# ✅ Output:
# {"limits":{"memory":"500Mi"},"requests":{"memory":"250Mi"}}

🔍 Result:

The LimitRange auto-applied the memory defaults → the Pod runs with a 250Mi request and a 500Mi limit.
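In other words, the Pod is persisted as if its container had been written with this resources stanza (sketch of the effective spec after admission):

```yaml
# Effective container spec after LimitRange defaulting (sketch)
containers:
- name: nginx
  image: nginx
  resources:
    requests:
      memory: 250Mi   # injected from defaultRequest
    limits:
      memory: 500Mi   # injected from default
```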

🔧 Step 3: Trigger OOMKilled (Exceed Memory Limit)

💡 Deploy a Pod that tries to allocate 600M of memory (> the 500Mi limit):

# pod-oom-test.yaml
apiVersion: v1
kind: Pod
metadata:
  name: oom-test
  namespace: learning
spec:
  containers:
  - name: stress
    image: lovelearnlinux/stress:latest
    resources:
      requests:
        memory: "300Mi"
      limits:
        memory: "500Mi"   # ← Hard ceiling
    command: ["stress"]
    args: ["--vm", "1", "--vm-bytes", "600M", "--vm-hang", "1"]
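Note that stress interprets 600M as decimal megabytes, which is still well above the binary 500Mi limit. The arithmetic below confirms it; the commented kubectl lines (an assumption about your setup, they need a running cluster and the manifest above) show how to watch the kill happen:

```shell
# stress allocation vs container limit, in bytes
alloc=$((600 * 1000 * 1000))   # 600M  (decimal, as stress reads it)
limit=$((500 * 1024 * 1024))   # 500Mi (binary, as Kubernetes reads it)
echo "allocating ${alloc} bytes against a ${limit}-byte limit"
# 600000000 > 524288000, so the kernel OOM-kills the container.
#
# On a real cluster:
#   kubectl apply -f pod-oom-test.yaml
#   kubectl get pod oom-test -n learning -w
#   # STATUS should show OOMKilled, then CrashLoopBackOff on restarts
```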