Endpoints reference

Liveness probe

apiVersion: v1
kind: Pod
metadata:
  name: liveness
  labels:
    app.kubernetes.io/name: liveness
spec:
  containers:
    - name: web
      image: startkubernetes/app-health:0.1.0
      ports:
        - containerPort: 3000
      livenessProbe:
        httpGet:
          path: /healthz
          port: 3000
        initialDelaySeconds: 3
        periodSeconds: 1

Save the above YAML in liveness.yaml and create the Pod with kubectl apply -f liveness.yaml.
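
If the manifest is valid, kubectl confirms that the Pod was created; you should see output similar to this:

$ kubectl apply -f liveness.yaml
pod/liveness created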

Let's use the describe command to look at the Pod events:

$ kubectl describe po liveness
...
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age   From               Message
  ----     ------     ----  ----               -------
  Normal   Scheduled  12s   default-scheduler  Successfully assigned default/liveness to minikube
  Normal   Pulled     11s   kubelet, minikube  Container image "startkubernetes/app-health:0.1.0" already present on machine
  Normal   Created    11s   kubelet, minikube  Created container web
  Normal   Started    11s   kubelet, minikube  Started container web
  Warning  Unhealthy  0s    kubelet, minikube  Liveness probe failed: HTTP probe failed with statuscode: 500

The Pod is healthy for the first 10 seconds or so. After that, as the last event in the output shows, the liveness probe starts failing and Kubernetes restarts the container. Once the container restarts, the same cycle repeats: the application responds successfully for about 10 seconds, and then the probe starts failing again.
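
Note that a single failed check does not restart the container by itself; the kubelet restarts it only after failureThreshold consecutive failures, which defaults to 3. As a sketch, here is the probe section from the manifest above with the remaining tuning fields spelled out at their default values:

      livenessProbe:
        httpGet:
          path: /healthz
          port: 3000
        initialDelaySeconds: 3
        periodSeconds: 1
        timeoutSeconds: 1 # how long to wait for a response before the check counts as failed (default 1)
        failureThreshold: 3 # consecutive failures before the container is restarted (default 3)
        successThreshold: 1 # must be 1 for liveness probes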

You can see how many times Kubernetes has restarted the container by running the kubectl get po command and looking at the RESTARTS column:

$ kubectl get po
NAME       READY   STATUS    RESTARTS   AGE
liveness   1/1     Running   5          2m31s
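
If you leave the Pod running, the RESTARTS count keeps climbing, and after a few restarts the STATUS column changes to CrashLoopBackOff while Kubernetes backs off before the next restart attempt. You can follow this live with the --watch flag:

$ kubectl get po liveness --watch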

In addition to the path and port for the HTTP check, you can use the httpHeaders field to set custom headers on the probe request. For example, to send a Host header with the call, you could configure the probe like this:

apiVersion: v1
kind: Pod
metadata:
  name: liveness-headers
  labels:
    app.kubernetes.io/name: liveness-headers
spec:
  containers:
    - name: web
      image: startkubernetes/app-health:0.1.0
      imagePullPolicy: Always
      ports:
        - containerPort: 3000
      livenessProbe:
        httpGet:
          path: /healthz
          port: 3000
          httpHeaders:
            - name: Host
              value: liveness-host
        initialDelaySeconds: 3
        periodSeconds: 1

Save the above YAML in liveness-headers.yaml and create the Pod using kubectl apply -f liveness-headers.yaml. The App Health application automatically logs the headers from incoming requests, so after the Pod starts, you can look at the logs:

$ kubectl logs liveness-headers

> [email protected] start /app
> node server.js

appHealth running on port 3000.
{"host":"liveness-host","user-agent":"kube-probe/1.18","accept-encoding":"gzip","connection":"close"}
GET /healthz 200 7.083 ms - 2

Notice that the host header value is set to liveness-host, just as we specified in the probe's httpHeaders field.
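
Once you are done experimenting, you can delete both Pods (assuming you saved the manifests under the file names used above):

$ kubectl delete -f liveness.yaml -f liveness-headers.yaml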