Monitoring Elasticsearch using Prometheus and Grafana



This article will guide you through monitoring Elasticsearch using Prometheus and Grafana in Kubernetes. Before going deeper into the topic, let's first understand what these tools are.

Elasticsearch:

Elasticsearch is a distributed, free and open search and analytics engine for all types of data, including textual, numerical, geospatial, structured, and unstructured. I assume you already know how to set up Elasticsearch, so I will not cover that here; this blog focuses on setting up Prometheus and Grafana.

Prometheus and Grafana:

Prometheus is a 100% open-source, community-driven system that provides monitoring capabilities. Prometheus collects metrics from targets by scraping their HTTP metrics endpoints. Grafana is a visualization tool that we can use alongside Prometheus for monitoring.

In this tutorial we will make use of exporters. Let's first understand what an exporter is and how it helps us monitor Elasticsearch.

Exporters:

Prometheus lets us monitor third-party applications with the help of exporters. Here we will use the Elasticsearch exporter, which can run as a sidecar container and collects metrics for Prometheus to scrape.

The Elasticsearch exporter is written in Go and is maintained by the Prometheus Community; it was originally maintained by JustWatch, who transferred it to the Prometheus Community in May 2021. As part of this setup, you run the exporter as a sidecar container alongside the Elasticsearch container, inside the same pod.
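As a rough sketch, that sidecar setup could look like the manifest below. The Elasticsearch image and version, the exporter image tag, and the flag values are assumptions here; adjust them to your own cluster. For the scrape target elasticsearch:9108 used later to resolve, a Service named elasticsearch in the elk namespace must also expose port 9108.

```yaml
# Sketch: Elasticsearch pod with the exporter running as a sidecar container.
# Images, versions, and ports are illustrative assumptions; adjust to your setup.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: elasticsearch
  namespace: elk
spec:
  replicas: 1
  selector:
    matchLabels:
      app: elasticsearch
  template:
    metadata:
      labels:
        app: elasticsearch
    spec:
      containers:
      - name: elasticsearch
        image: docker.elastic.co/elasticsearch/elasticsearch:7.17.0
        ports:
        - containerPort: 9200
      # Sidecar: talks to the local Elasticsearch and exposes Prometheus metrics
      - name: exporter
        image: quay.io/prometheuscommunity/elasticsearch-exporter:latest
        args:
        - --es.uri=http://localhost:9200   # same pod, so localhost reaches Elasticsearch
        - --web.listen-address=:9108       # match the port in the Prometheus scrape config
        ports:
        - containerPort: 9108
```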

Setting up Prometheus:

Config.yml

First of all we will create this ConfigMap for Prometheus; its embedded prometheus.yml holds all the Prometheus configuration.

apiVersion: v1
data:
  prometheus.yml: |-
    global:
      scrape_interval: 15s 
      evaluation_interval: 15s 
    scrape_configs:
      - job_name: "prometheus"
        static_configs:
          - targets: ["localhost:9090"]
      - job_name: "exporter"
        static_configs:
          - targets: ["elasticsearch:9108"]
kind: ConfigMap
metadata:
  name: pr-conf
  namespace: elk

Next we will write a Service for Prometheus. Here we use a NodePort Service with nodePort 32200 and targetPort 9090. We have created our own namespace, elk; you can change it to your own, or even use default if you want.

apiVersion: v1
kind: Service
metadata:
  name: prometheus
  namespace: elk
  
spec:
  selector: 
    app: pr
  type: NodePort  
  ports:
    - port: 9090
      targetPort: 9090
      nodePort: 32200

Next we will create a Deployment for Prometheus.

apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: elk
  labels:
    app: pr
  name: pr
spec:
  replicas: 1
  selector:
    matchLabels:
      app: pr
  template:
    metadata:
      labels:
        app: pr
    spec:
      containers:
      - image: prom/prometheus
        name: prometheus
        ports:
        - containerPort: 9090
        volumeMounts:
        - name: data
          mountPath: /etc/prometheus
      volumes: 
      - name: data
        configMap:
          name: pr-conf
          items:
          - key: prometheus.yml
            path: prometheus.yml

        

Now we are done with the Prometheus part; next we will set up Grafana.

Setting up Grafana:

First of all we will write a Service for Grafana.

apiVersion: v1
kind: Service
metadata:
  name: grafana
  namespace: elk
spec:
  selector: 
    app: grafana
  type: NodePort  
  ports:
    - port: 3000
      targetPort: 3000
      nodePort: 32000

Here the nodePort is 32000; this is what lets us access Grafana through the browser. Next we will write a Deployment for Grafana.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: grafana
  namespace: elk
spec:
  replicas: 1
  selector:
    matchLabels:
      app: grafana
  template:
    metadata:
      name: grafana
      labels:
        app: grafana
    spec:
      containers:
      - name: grafana
        image: grafana/grafana:latest
        ports:
        - name: grafana
          containerPort: 3000
       

Note: This Grafana deployment does not have a persistent volume. Pods are ephemeral, and restarting the pod erases all changes. Use a persistent volume if you are deploying Grafana for real project requirements and want to persist all the configs and data.
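As a sketch of what that could look like (the claim name and storage size below are placeholders), you could create a PersistentVolumeClaim and mount it at Grafana's default data directory, /var/lib/grafana:

```yaml
# Sketch: PVC for Grafana data; claim name and storage size are placeholders.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: grafana-data
  namespace: elk
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
---
# In the Grafana deployment above, the container would then mount the claim:
#   volumeMounts:
#   - name: data
#     mountPath: /var/lib/grafana
# and the pod spec would declare:
#   volumes:
#   - name: data
#     persistentVolumeClaim:
#       claimName: grafana-data
```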

Now you should be able to access the Grafana dashboard using any node IP on port 32000:

http://<your-node-ip>:32000

Setting up Grafana Dashboard:

You could also use port forwarding here, but since we have created a Service of type NodePort, we don't need to. Once you have accessed Grafana, log in with the default username and password, which are both admin.

After logging in with the default credentials, Grafana will ask you to change the password; you can skip that step if you want. Next we have to add a data source and configure our dashboard. For the data source URL we will add this:

http://prometheus:9090 

After selecting Prometheus as the data source type, we add the URL. In the URL, "prometheus" is the name of my Service, and 9090 is the port where Prometheus is listening. Remember, in my case the pods and services are running in the same namespace; otherwise the URL would need to include the namespace as well, e.g. http://prometheus.elk.svc.cluster.local:9090. After this, just click on Save & test, and the data source is added.

Next we will configure a dashboard for viewing and monitoring. For that we will import a dashboard from Grafana.com; you can also build your own. After entering the dashboard ID or URL, you can import it. Here I have entered the ID 14191 and imported a dashboard from Grafana.com.

When you click on Import, the dashboard will appear and you can start exploring the metrics.
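Beyond the imported dashboard, you can also query the exporter's metrics directly in Grafana's Explore view or the Prometheus UI. A few example queries, using metric names the prometheus-community elasticsearch_exporter is known to expose (verify the exact names against your own /metrics endpoint):

```
# 1 when the cluster currently reports the given health colour, 0 otherwise
elasticsearch_cluster_health_status{color="green"}

# number of nodes currently in the cluster
elasticsearch_cluster_health_number_of_nodes

# JVM heap memory used per node, in bytes
elasticsearch_jvm_memory_used_bytes{area="heap"}
```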

Conclusion:

This blog gave a step-by-step guide to setting up monitoring for Elasticsearch using Prometheus and Grafana. If you liked this blog, please share and comment. To learn how to set up Elasticsearch, you can refer to this.

References:

Reference 1: https://devopscube.com/setup-grafana-kubernetes/
Reference 2: https://prometheus.io/
