Today I actually spent my time first learning about ReplicaSets. However, when looking at my cluster, I noticed something strange.

There was one pod still running that I thought I had deleted yesterday.

The status of the pod indicated "CrashLoopBackOff", meaning the pod failed every time it tried to start. It would start, fail, start, fail, start, f...

This is quite common: if the restart policy of the pod is set to Always, Kubernetes will try to restart the pod every time it exits with an error.
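
To see this state for yourself, a couple of standard kubectl commands are enough. A small sketch below; the pod name is a placeholder:

# list pods and their current status (the STATUS column shows CrashLoopBackOff)
kubectl get pods

# print the pod's restart policy (Always, OnFailure or Never)
kubectl get pod <name of the pod> -o jsonpath='{.spec.restartPolicy}'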

There are several reasons why a pod could end up in this poor state (a few commands to narrow it down follow the list):

  1. Something is wrong within our Kubernetes cluster
  2. The pod is configured incorrectly
  3. Something is wrong with the application
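
To narrow down which of these it is, the usual first steps are to look at the pod's events and at the application logs. A rough sketch, again with the pod name as a placeholder:

# the events at the bottom of the output show e.g. failed image pulls or back-off restarts
kubectl describe pod <name of the pod>

# logs of the current container, and of the previous (crashed) one
kubectl logs <name of the pod>
kubectl logs <name of the pod> --previous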

In this case, it was easy to identify what had gone wrong and produced this misbehaving pod.

As mentioned on previous days, there are multiple ways to create a pod. These can roughly be categorized into imperative commands and declarative configuration files (manifests).
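
To make the difference concrete, here is a small sketch of the two styles; the pod name, image, and manifest file are placeholders of my choosing, not the exact commands from my setup:

# imperative: tell Kubernetes directly what to create
kubectl run my-pod --image=nginx

# declarative: describe the desired state in a manifest and let Kubernetes reconcile it
kubectl apply -f my-pod.yaml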

As part of yesterday's learning, I tried to set up a container image inside a pod in our cluster with the following command:

kubectl create deployment <name of the deployment> --image=<name of the image>
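
For reference, a complete invocation would look roughly like the sketch below; the deployment name and image are placeholders:

# creates a Deployment, which in turn creates a ReplicaSet that manages the pod
kubectl create deployment my-app --image=nginx

# watch whether the pod comes up cleanly or ends in CrashLoopBackOff
kubectl get pods --watch

# deleting only the pod is not enough, the ReplicaSet would recreate it;
# the Deployment itself has to be removed
kubectl delete deployment my-app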