The official Kubernetes documentation covers all workload resources in detail: https://kubernetes.io/docs/concepts/workloads/

Note: the Kubernetes certification exams (CKA/CKAD) allow you to read the official documentation during the exam, so bookmarking it is a valid and encouraged strategy.
Every resource you have seen so far — Deployments, StatefulSets, DaemonSets — is designed to run forever. Kubernetes restarts them if they stop. But not everything should run forever. A database migration should run once, finish, and never run again. A nightly backup should run at 2 AM and stop. For these, Kubernetes has Jobs and CronJobs.
Before diving in, it helps to know what statuses you will see when you run kubectl get pods — especially when watching Jobs run:
| Status | What it means |
|---|---|
| ContainerCreating | Pod is being set up; the image is being pulled |
| Running | Pod is live and working |
| Completed | Pod finished its task and exited (normal and expected for Jobs) |
| Error | Pod crashed with a non-zero exit code |
| CrashLoopBackOff | Pod keeps crashing and restarting repeatedly |
| Pending | Pod is waiting to be scheduled on a node |
| Terminating | Pod is being deleted |
| ErrImagePull | The container image could not be pulled |
When a Job Pod finishes successfully, its status becomes Completed — not Error, not Terminating. That is the expected healthy state for a Job.
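For example, after a Job's Pod finishes you might see output like this (the Pod name here is illustrative; the random suffix is generated by Kubernetes):

```
$ kubectl get pods
NAME               READY   STATUS      RESTARTS   AGE
backup-job-x7k2p   0/1     Completed   0          2m
```

Note that READY shows 0/1 for a Completed Pod: the container has exited, which is exactly what you want from a Job.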
A Job runs a Pod to completion. Once the Pod finishes successfully, the Job is done. Kubernetes does not restart it. If the Pod fails, the Job retries it — up to a limit you set.
This is the key difference that trips up beginners.

A Deployment sees a Pod finish and assumes something went wrong:

- Pod completes -> "it shouldn't have stopped" -> restarts it forever

A Job understands that finishing is the goal:

- Pod completes successfully -> "task done" -> stops, no restart
- Pod fails -> "something went wrong" -> retries
- Retries exceed backoffLimit -> Job marked as Failed
If you ran a database migration as a Deployment, Kubernetes would restart it endlessly, re-applying the same migration over and over and potentially corrupting your database.
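To make that failure mode concrete, here is a minimal sketch of the mistake (do not apply this; the names mirror the Job example later in this section):

```yaml
# Anti-example: a one-shot migration wrapped in a Deployment.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: db-migration-wrong
spec:
  replicas: 1
  selector:
    matchLabels:
      app: db-migration-wrong
  template:
    metadata:
      labels:
        app: db-migration-wrong
    spec:
      # Deployments only allow restartPolicy: Always (the default),
      # so the container is restarted every time it exits.
      containers:
      - name: migration
        image: my-app:latest
        command: ["python", "manage.py", "migrate"]
```

Because the Pod's restartPolicy is Always, the container is restarted each time it exits, and kubectl get pods eventually shows CrashLoopBackOff even though the exit code is 0.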
Here is the happy path:

1. You create a Job: "run this migration script once."
2. The Job creates a Pod.
3. The Pod runs python manage.py migrate.
4. The migration completes successfully and the Pod exits with code 0.
5. The Job sees exit code 0, marks itself Complete, and is done.

If the Pod crashes instead (non-zero exit code):

1. The Job creates a new Pod and tries again.
2. Retries continue up to backoffLimit (default: 6).
3. If all retries fail, the Job is marked as Failed (you can watch this yourself with the sketch below).
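To watch the retry behavior yourself, here is a minimal sketch of a throwaway Job that always fails (the name fail-demo and the busybox image are illustrative choices, not part of the migration example):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: fail-demo
spec:
  backoffLimit: 2          # give up after 2 retries
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: fail
        image: busybox
        command: ["sh", "-c", "exit 1"]   # always exits with a non-zero code
```

Once the retries are used up, kubectl describe job fail-demo shows a Failed condition with reason BackoffLimitExceeded.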
And here is the migration example as a full Job manifest:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: db-migration
spec:
  completions: 1      # how many Pods must succeed before the Job is complete
  parallelism: 1      # how many Pods run at the same time
  backoffLimit: 3     # retry up to 3 times if the Pod fails
  template:
    spec:
      restartPolicy: Never   # Never = create a new Pod on failure (not restart the same one)
      containers:
      - name: migration
        image: my-app:latest
        command: ["python", "manage.py", "migrate"]
```