Let’s say you have a production system or cluster up and running. Your resources are limited, so you want your pods to consume only what they actually need in terms of Kubernetes resources such as CPU and memory.

Unfortunately, a single app in your cluster might consume far more resources than expected, leaving fewer resources for everything else; subsequently, other apps fail. If you hadn’t set up monitoring, you’d only discover this issue in production, after your app has already experienced downtime. We wouldn’t want that. Of course, we can always be reactive and fix the issue after it has occurred, but do you know what’s better? Being proactive!

By setting up monitoring beforehand, you can detect when a certain pod or node is low on resources or when there’s an internal error. With this information, you can set up actions such as notifications when a pod’s CPU usage reaches a threshold or when a certain pod crashes. A well-maintained monitoring system adds a safety net to your production setting, improves the developer experience for your team, and ensures end users aren’t negatively impacted. This article explains how to monitor a Kubernetes cluster effectively using Prometheus.

https://cdn-images-1.medium.com/max/800/0*gI-iuEhasOvjOvmB

How to get started with Kubernetes monitoring

Monitoring a Kubernetes cluster starts with choosing a monitoring tool that can collect application metrics, such as traffic or memory usage, from target Kubernetes pods or services, so you can track each target’s health. These tools run inside the cluster (sometimes with exporters deployed as sidecar containers) and scrape metrics from their targets at regular intervals, and you can then analyze those metrics through visualization tools.
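To make that concrete, here’s a minimal sketch of what “collecting metrics from a target” looks like in practice. The pod name my-app and port 8080 are hypothetical placeholders for an application that exposes a Prometheus-style /metrics endpoint:

# Forward the (hypothetical) pod's metrics port to your local machine
kubectl port-forward pod/my-app 8080:8080 &

# Fetch the metrics endpoint; Prometheus metrics are plain-text lines
# of the form: metric_name{label="value"} <numeric value>
curl -s http://localhost:8080/metrics | head -n 20

A monitoring tool essentially automates this step: it scrapes such endpoints on a schedule and stores the results as time series.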

For this blog, we’ll use an excellent open source tool, Prometheus, as the monitoring solution. We’ll set up a Kubernetes cluster and install Prometheus on it, use Prometheus to pull metrics from target endpoints, discuss how those metrics are stored locally, and finally visualize them in a Grafana dashboard.

Let’s start 😄

Setting up the Kubernetes cluster

As a first step, we’ll deploy a Kubernetes cluster using MicroK8s:

sudo snap install microk8s --classic
microk8s enable dns storage metrics-server
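Depending on your setup, you may also want to wait for MicroK8s to report that it’s ready and allow your user to run it without sudo; both are optional steps from the MicroK8s documentation:

# Block until MicroK8s is up and running
microk8s status --wait-ready

# Optional: run microk8s without sudo (log out and back in for this to take effect)
sudo usermod -a -G microk8s $USER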

While initially writing this blog, I ran into issues on Minikube, so I advise proceeding with MicroK8s or another offering. Once MicroK8s is installed, our Kubernetes cluster is up and running. To avoid typing microk8s before every kubectl command, let’s create an alias for it inside the .bashrc file by running the commands below:

nano ~/.bashrc
# Add the following at the end of the file
alias kubectl='microk8s kubectl'
# Reload your shell configuration
source ~/.bashrc

Now, let’s create a directory for our project; choose any name that works well for you. Then, to confirm that our Kubernetes cluster is up, run the following:

kubectl cluster-info
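As for the project directory mentioned above, something like this is all that’s needed; the name prometheus-monitoring is just an example:

# Create a working directory for this project's files
mkdir prometheus-monitoring && cd prometheus-monitoring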

With our Kubernetes cluster ready, let’s create our namespace and update our context so that we’re a step closer to our Kubernetes monitoring solution:

kubectl create namespace monitoring
kubectl config set-context --current --namespace=monitoring
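To double-check that the namespace exists and is now the default in your current context:

# The monitoring namespace should appear in the list
kubectl get namespaces

# The current context should now show namespace: monitoring
kubectl config view --minify | grep namespace: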

Installing Prometheus

There are different ways to install Prometheus on a cluster. You could create and apply every individual manifest file for Prometheus yourself, but that means a lot of manual labor and is only worth it if you need that level of granularity and control. Instead, let’s use the kube-prometheus-stack Helm chart, which is the most straightforward approach: it takes care of setting up all the individual components along with the Prometheus Operator itself.
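As a sketch of what that looks like, assuming Helm 3 is available on your machine (the release name prometheus is an arbitrary choice):

# Add the official prometheus-community chart repository and refresh the index
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update

# Install kube-prometheus-stack into the monitoring namespace we created earlier
helm install prometheus prometheus-community/kube-prometheus-stack --namespace monitoring

The chart bundles the Prometheus Operator, Prometheus itself, Alertmanager, Grafana, and a set of default dashboards and rules, which is exactly why it saves so much manual work.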