
Add Prometheus, Grafana, Loki to a local cluster

Updated: Mar 16

Introduction


Continuing on the series of setting up various services on a local Kubernetes cluster and playing around with it, this post is about adding Prometheus, Grafana and Loki to the mix.


In previous posts:

  1. Easily reproducible local Kubernetes environment with WSL shows how to create a local Kubernetes cluster in WSL using Docker

  2. Deploy with ArgoCD some infra...and ArgoCD itself adds the ArgoCD setup together with Traefik and Harbor


If you have been following along, the easiest way to continue is to delete the WSL distribution and recreate it using the instructions in the first post. Then continue with the instructions below, which are very similar to the ones in the second post.


Note: If at any point you want to make your own changes to the cluster, I suggest you fork the repo and then re-create the cluster, clone your fork and make a commit (and push) that changes all references to my argo-play repo in the repo code to point to your fork instead. Then proceed with the normal steps (just make sure to use your repo instead of mine in the instructions).



Getting started

To get started, we need to do a few things:

  1. Open a Minikube terminal (considering you have completed the instructions in the first post)

  2. Switch to the ubuntu user

  3. Start docker (if you haven't done it when following the instructions in the other post)

  4. Create a new cluster with the insecure registry flag, which we need so the cluster can pull images from the Harbor registry (if you already created a cluster, delete it first with minikube delete)

  5. Clone the beta tag of the argo-play repo

  6. Go into the install subfolder of the local copy of the repo

  7. Install the argocd Helm chart

  8. Install the argocd-apps Helm chart

  9. Expose the traefik LoadBalancer with minikube tunnel


su ubuntu
cd ~
sudo service docker start
minikube start --insecure-registry "10.0.0.0/24"
git clone --branch beta https://github.com/alexchiri/argo-play.git
cd argo-play/install
helm install argocd ./argo-cd \
 --namespace=argocd \
 --create-namespace \
 -f argocd-values.yaml
helm install argocd-apps ./argocd-apps \
 --namespace=argocd \
 --create-namespace \
 -f argocd-apps-values.yaml
minikube tunnel

In a few minutes, you should have everything up and running. Open a new Minikube terminal tab, switch to ubuntu user, get all the resources in the cluster and enjoy the show:

su ubuntu
k get pod -A -w

Once all pods are running, get the external IP of your Traefik LoadBalancer service (for me it is 127.0.0.1):

k get services --namespace infra traefik --output jsonpath='{.status.loadBalancer.ingress[0].ip}'

Besides ArgoCD and Harbor (from the previous post), we now also have exposed Grafana:

  1. Grafana: https://127.0.0.1/grafana (replace with your IP if different)


Grafana


Grafana is a data visualization tool focused on metrics, logs and traces, and one of the most popular self-hostable options out there.

In order to deploy it to the cluster, I used the official helm chart and made a few small customizations to make it easier to use in a local cluster and pre-populate several dashboards (you can see all of these in the Application manifest in the repo):



I configure Grafana to be accessible under a subpath, otherwise it would always try to fetch static resources and do redirects from the root. I also disable login and assign the Admin role to the anonymous user: since this is a local setup and I don't need multiple users, logging in is mostly a nuisance. Finally, I enabled debug logging to troubleshoot some issues with setting up the Logs dashboard.
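As a sketch, the relevant values for the official Grafana Helm chart look roughly like this (the exact keys in the repo's Application manifest may differ slightly):

```yaml
grafana.ini:
  server:
    # Serve Grafana under /grafana instead of the root path
    root_url: "%(protocol)s://%(domain)s/grafana"
    serve_from_sub_path: true
  auth:
    # Hide the login form entirely
    disable_login_form: true
  auth.anonymous:
    # Let anonymous visitors in as Admin (fine for a throwaway local cluster)
    enabled: true
    org_role: Admin
  log:
    level: debug
```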



Since I know that both Prometheus and Loki will be deployed in the same cluster and namespace, I pre-configure these datasources, so when everything is installed in the cluster, they can be used right away. I set their uids as well, so I can reference them in the Logs dashboard.
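Pre-provisioned datasources in the Grafana chart look roughly like the snippet below. The service names and namespace are illustrative assumptions, not necessarily what the argo-play repo uses; adjust them to whatever your cluster actually exposes:

```yaml
datasources:
  datasources.yaml:
    apiVersion: 1
    datasources:
      - name: Prometheus
        type: prometheus
        # uid is referenced by the pre-provisioned dashboards
        uid: prometheus
        url: http://prometheus-server.infra.svc.cluster.local
        isDefault: true
      - name: Loki
        type: loki
        uid: loki
        url: http://loki.infra.svc.cluster.local:3100
```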



I also added a few pre-defined dashboards: two for generic Kubernetes metrics that I found in a repo (there are more there, but the others don't quite work for this setup) and one for viewing logs from Loki that I quickly created myself and embedded as JSON in the values file. This shows both how to use pre-defined dashboards committed in a repo and how to embed a dashboard definition directly in the values.yaml file.
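Both approaches can be combined in the Grafana chart's values, roughly as below. The URL and the embedded JSON are placeholders for illustration, not the actual dashboards from the repo:

```yaml
dashboardProviders:
  dashboardproviders.yaml:
    apiVersion: 1
    providers:
      - name: default
        folder: Kubernetes
        type: file
        options:
          path: /var/lib/grafana/dashboards/default
dashboards:
  default:
    # Fetched from a URL at startup (placeholder URL, point it at a real dashboard JSON)
    k8s-views-global:
      url: https://example.com/dashboards/k8s-views-global.json
    # Or embedded directly in the values file
    logs:
      json: |
        {
          "title": "Logs",
          "panels": []
        }
```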


With all of this done, once all the pods are running, you can go to the Grafana URL of your cluster and already see these dashboards in action!



Prometheus

Prometheus is a popular monitoring and alerting tool that ingests data from various sources and stores it as timeseries, which can then be queried and visualized using Grafana.


Prometheus is also deployed using its Helm chart, and in its case I did not customize anything. The Helm chart comes with kube-state-metrics, which provides common out-of-the-box metrics about a Kubernetes cluster, so we have some data to query from the get-go! Some of these metrics are visualized in the two dashboards imported under the Kubernetes folder in Grafana, go check them out!
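For example, kube-state-metrics exposes metrics such as kube_pod_status_phase and kube_pod_container_status_restarts_total, which you can try out right away from Grafana's Explore view against the Prometheus datasource (the queries below are just illustrations):

```
# Number of pods per namespace that are not in the Running phase
sum by (namespace) (kube_pod_status_phase{phase!="Running"})

# Total container restarts per pod
sum by (namespace, pod) (kube_pod_container_status_restarts_total)
```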


Loki

Grafana Loki is another product made by Grafana Labs, and it is for logs what Prometheus is for metrics.


Setting it up using its official Helm chart wasn't that straightforward, which is a pity. I think all quality Helm charts should be runnable without much customization and getting Loki to run on a local cluster required a bit of tinkering (most likely because I have never used it before).



Loki is made to run in a high-availability mode, with several of its components running on distinct nodes of the cluster. Since our local cluster has only one node, this is not feasible, so we need to run it in Single Binary mode instead. Second, unless you want to configure some cloud storage, you need to make it store its data on the local filesystem.
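A minimal sketch of those two customizations for the official Loki chart, assuming a recent chart version (key names have changed across chart versions, so check the version the repo pins):

```yaml
deploymentMode: SingleBinary
loki:
  commonConfig:
    # A single node cannot satisfy a higher replication factor
    replication_factor: 1
  storage:
    # Keep chunks and indexes on the local filesystem instead of object storage
    type: filesystem
singleBinary:
  replicas: 1
# Zero out the distributed-mode components
read:
  replicas: 0
write:
  replicas: 0
backend:
  replicas: 0
```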



Another important detail (for now) is that we need to use the ServerSideApply option in ArgoCD: the Loki CRDs are very large, which prevents applying them client-side (the generated last-applied-configuration annotation would exceed its size limit).
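In ArgoCD, this option goes into the Application's sync policy, roughly like this:

```yaml
syncPolicy:
  syncOptions:
    # Apply manifests server-side so large CRDs don't hit the
    # client-side last-applied-configuration annotation size limit
    - ServerSideApply=true
```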


Using the simple Logs dashboard I set up in Grafana, you can see and search the logs of all pods running in the cluster. To get the logs ingested by Loki, I also deployed Promtail, an agent that collects logs and sends them to a Loki instance.


The only special configuration for the Promtail chart is the path to the Loki instance. With all of this in place, we now also have a way to visualize the metrics and logs of some of our infra components. The next step is to add some kind of service in the cluster and monitor it, maybe add some alerts. But that is for the next post!
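For the official Promtail chart, that configuration is roughly the following (the Loki service address is an assumption, adjust it to your cluster):

```yaml
config:
  clients:
    # Push collected logs to the in-cluster Loki service
    # (service name and namespace are illustrative)
    - url: http://loki.infra.svc.cluster.local:3100/loki/api/v1/push
```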



