How to monitor a MongoDB cluster using Prometheus


MongoDB is an open-source, document-oriented NoSQL database. It is typically deployed as a cluster of multiple nodes to avoid data inconsistency and to provide resilience for disaster recovery. Such a cluster needs continuous monitoring to keep an eye on the health of its nodes. Prometheus is a monitoring tool that collects the metrics we need to monitor an application: it scrapes metrics from its targets and stores them in a time-series database. However, Prometheus can only scrape metrics exposed in its own text-based format, which MongoDB does not provide natively. So we use the MongoDB exporter, which converts MongoDB's internal metrics into a format Prometheus can understand and scrape. Lastly, we will use Grafana to query and visualize the metrics collected by Prometheus.
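To illustrate what "a format Prometheus can understand" means, here is a sketch of the Prometheus text exposition format that the exporter serves on its metrics endpoint (the metric names and values below are indicative examples, not exact output of the exporter):

```text
# HELP mongodb_up Whether the MongoDB server is reachable.
# TYPE mongodb_up gauge
mongodb_up 1
# HELP mongodb_connections Current connection counts by state.
# TYPE mongodb_connections gauge
mongodb_connections{state="current"} 14
```

Each line is a metric name, optional labels in braces, and a numeric value; Prometheus scrapes this plain-text page on a fixed interval.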

Different deployment methods:
  • Deploy the MongoDB exporter as a sidecar container alongside the MongoDB StatefulSet, then use a standalone Prometheus deployment to monitor the cluster.
  • Use Helm to deploy both the MongoDB exporter and the Prometheus Operator. The operator creates a standalone Prometheus instance that scrapes metrics from the ServiceMonitor created by the exporter chart.
  • Deploy the exporter using the Helm chart and deploy standalone Prometheus as a StatefulSet instead of using the operator. I will be using this method to deploy Prometheus and the exporter.

Prerequisites:
  • A Kubernetes cluster
  • Helm 3 installed
  • A MongoDB cluster
Deploy the MongoDB exporter

To deploy the MongoDB cluster itself, refer to a dedicated guide. Now we will use a Helm chart to deploy the MongoDB exporter. Execute the following commands to add the repository and download the chart:

helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm pull prometheus-community/prometheus-mongodb-exporter

Now we need to create a custom values file with the URI of the MongoDB cluster from which the exporter will collect metrics. Create a values-mongodb.yaml file:

mongodb:
  uri: "mongodb://mongodb-0.mongodb:27017,mongodb-1.mongodb:27017,mongodb-2.mongodb:27017/?replicaSet=replicaset"

serviceMonitor:
  enabled: false

We disable the ServiceMonitor above because we will use a standalone Prometheus instance rather than the operator to monitor the MongoDB cluster. Now we deploy the Helm chart with the following command:

helm install exporter prometheus-community/prometheus-mongodb-exporter -f values-mongodb.yaml

We can verify the chart has been deployed using the command:

helm ls
Deploy the Prometheus
Deploy the ConfigMap

We will need a ConfigMap, config.yml, for Prometheus, which contains the endpoint of the exporter to be scraped.

apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus
data:
  prometheus.yml: |-
    global:
      scrape_interval: 15s
      evaluation_interval: 15s
    scrape_configs:
      - job_name: "exporter"
        static_configs:
          - targets: ["exporter-prometheus-mongodb-exporter:9216"]

The MongoDB exporter Helm chart creates a service named exporter-prometheus-mongodb-exporter by default. Deploy the ConfigMap using the command:

kubectl apply -f config.yml
Deploy the headless service

We will create a prometheus-headless.yml for the Prometheus instances to communicate with each other:

apiVersion: v1
kind: Service
metadata:
  name: prometheus-headless
spec:
  selector:
    app: prometheus
  clusterIP: None
  ports:
    - name: metrics
      port: 9090
      targetPort: 9090

Execute the following command to deploy the headless service:

kubectl apply -f prometheus-headless.yml
Deploy the Prometheus StatefulSet

We will create a prometheus-sts.yml to deploy the Prometheus Statefulset. We will also mount the configMap we previously created as a volume containing the configuration and the target which Prometheus will scrape.

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: prometheus
  labels:
    app: prometheus
spec:
  replicas: 1
  serviceName: prometheus-headless
  selector:
    matchLabels:
      app: prometheus
  template:
    metadata:
      name: prometheus-pod
      labels:
        app: prometheus
    spec:
      terminationGracePeriodSeconds: 15
      containers:
        - name: prometheus
          image: prom/prometheus
          imagePullPolicy: IfNotPresent
          ports:
            - name: metrics
              containerPort: 9090
          volumeMounts:
            - name: prometheus-config
              mountPath: /etc/prometheus
            - name: data
              mountPath: /prometheus
      volumes:
        - name: prometheus-config
          configMap:
            name: prometheus
            items:
              - key: prometheus.yml
                path: prometheus.yml
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: [ReadWriteOnce]
        storageClassName: fast
        resources:
          requests:
            storage: 400Mi

Execute the following command to deploy the Prometheus StatefulSet:

kubectl apply -f prometheus-sts.yml

We can verify the Prometheus deployment using the command:

kubectl get sts prometheus
Deploying Grafana for visualization

We will create a Grafana Deployment to visualize and query the metrics Prometheus collects. Create a yaml file grafana.yml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: grafana
spec:
  replicas: 1
  selector:
    matchLabels:
      tool: grafana
  template:
    metadata:
      name: grafana-pod
      labels:
        tool: grafana
    spec:
      containers:
        - name: grafana
          image: grafana/grafana
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 3000

Then we will use the following command to deploy Grafana:

kubectl apply -f grafana.yml
Exposing the Grafana and Prometheus

We can use NodePort services or Ingresses to expose Grafana and Prometheus, but here we will use port-forwarding for simplicity. You can use any of these methods.
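For reference, a NodePort service for Grafana could look like the following sketch (the service name and nodePort value are illustrative; the selector assumes the tool: grafana pod label used in the Deployment above):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: grafana-nodeport
spec:
  type: NodePort
  selector:
    tool: grafana           # must match the Grafana pod labels
  ports:
    - port: 3000
      targetPort: 3000
      nodePort: 30300       # must fall in the cluster's NodePort range (default 30000-32767)
```

Grafana would then be reachable at http://NODE-IP:30300 on any cluster node.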

Execute the following commands:

kubectl port-forward prometheus-0 9090:9090
kubectl port-forward grafana-75b7f4865d-m44f4 3000:3000 

This maps localhost:9090 to the Prometheus pod and localhost:3000 to the Grafana pod. Note that the Grafana pod name above is generated by the Deployment; check yours with kubectl get pods. We can then access the two UIs on localhost:9090 and localhost:3000.

Testing the Prometheus Deployment

Access localhost:9090 and go to Status > Targets. We will see the MongoDB exporter as a target that Prometheus is scraping for metrics.
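You can also run a PromQL query from the Graph page to confirm data is flowing; for example (the up series is built in, while exporter metric names such as mongodb_up may vary slightly between exporter versions):

```promql
up{job="exporter"}
mongodb_up
```

A value of 1 for up{job="exporter"} means the last scrape of the exporter succeeded.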

Testing the Grafana Deployment

We can now access Grafana on localhost:3000 and log in (the grafana/grafana image ships with admin/admin as the default credentials). Now we will add the Prometheus target as a data source in Grafana:

Now we will configure the Prometheus data source with the URL for the Prometheus instance:
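Since Grafana runs inside the cluster, it can reach Prometheus through the cluster DNS name provided by the headless service; assuming the StatefulSet and service names used above (and both workloads in the same namespace), the data source URL would be:

```text
http://prometheus-0.prometheus-headless:9090
```

Here prometheus-0 is the pod's stable hostname from the StatefulSet and prometheus-headless is the headless service governing it.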

Next, click Save & Test to test and save the configuration.

Now we will import a dashboard to visualize the metrics. You can create your own dashboard, but I will be using a prebuilt one; paste its link or dashboard ID into Grafana's import page to import it.

Once the dashboard is imported, we can visualize the metrics in the form of graphs and gauges: uptime, the members of the cluster and their health, and so on.


In this blog, you have learned how to monitor a MongoDB cluster using Prometheus and Grafana. In case of any queries or any mistakes, you can contact me.


Written by 

Dipayan Pramanik is a DevOps Software Consultant at Knoldus Inc. He is passionate about coding, DevOps tools, automating tasks and is always ready to take up challenges. His hobbies include music and gaming.