How to deploy ELK stack on Kubernetes


The ELK stack consists of Elasticsearch, Logstash, and Kibana, and its main purpose is log aggregation. The rise of microservice architecture demands a better way of aggregating and searching through logs for debugging. The ELK stack helps us aggregate these logs and explore them. The main components of the ELK stack are:

  • Elasticsearch: the database that stores all the logs
  • Kibana: the visualization platform; we can use Kibana to query Elasticsearch
  • Logstash: a data ingestion tool; it ingests data (logs) from various sources and processes it before sending it to Elasticsearch
  • Filebeat: a very important component that works as the log exporter; it collects logs on each node and forwards them to Logstash

Prerequisites for ELK

  • Kubernetes Cluster
  • Helm v3 installed
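Before installing anything, it is worth confirming that both prerequisites are in place. A quick sanity check, assuming kubectl is already configured against your cluster:

```shell
# Confirm the cluster is reachable
kubectl cluster-info

# Confirm Helm is v3 (Helm v2 commands and charts differ)
helm version --short
```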

Installing ELK

Now let us deploy each and every component one by one.

First, we will create a values file that exposes Elasticsearch using an ingress. Be sure to deploy an ingress controller beforehand. Create a file, values-2.yaml, with the following content:

replicas: 1
minimumMasterNodes: 1

ingress:
  enabled: true
  className: "nginx"
  hosts:
    - host: es-elk.s9.devopscloud.link # Change the hostname to the one you need
      paths:
        - path: /
  
volumeClaimTemplate:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi

Now execute the following commands to add the Elastic Helm repo:

helm repo add elastic https://helm.elastic.co
helm repo update

Now, to deploy Elasticsearch, execute the command:

helm install elk-elasticsearch elastic/elasticsearch -f values-2.yaml --namespace logging --create-namespace

To verify that Elasticsearch is working, open the ingress host in a browser.
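The same check can also be done from the command line. Here es-elk.s9.devopscloud.link is the example hostname from values-2.yaml (substitute your own), and the app=elasticsearch-master label is the default used by the official chart:

```shell
# Pods should reach Running with READY 1/1
kubectl get pods --namespace logging -l app=elasticsearch-master

# The root endpoint returns a JSON banner with the cluster name and version
curl http://es-elk.s9.devopscloud.link
```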

Deploy Kibana

Now, we will create a custom values file for the Kibana Helm chart. Create a file, values-2.yaml (replacing the earlier one), with the following content:

elasticsearchHosts: "http://elasticsearch-master:9200"
ingress:
  enabled: true
  className: "nginx"
  hosts:
    - host: kibana-elk.s9.devopscloud.link
      paths:
        - path: /

Now, to deploy the helm chart use the command:

helm install elk-kibana elastic/kibana -f values-2.yaml --namespace logging

To verify that Kibana is working, open the ingress host in a browser.
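Kibana can take a minute or two to become ready, so it can help to watch the pod from the command line as well (the app=kibana label is assumed from the chart's defaults):

```shell
# Wait until the Kibana pod reports Running with READY 1/1
kubectl get pods --namespace logging -l app=kibana --watch
```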

Deploy Logstash

Now, we will create a custom values file for the Logstash Helm chart. Create a file, values-2.yaml (again replacing the earlier one), with the following content:

persistence:
  enabled: true

logstashConfig:
  logstash.yml: |
    http.host: 0.0.0.0
    xpack.monitoring.enabled: false

logstashPipeline:
  logstash.conf: |
    input {
      beats {
        port => 5044
      }
    }
    output {
      elasticsearch {
        hosts => "http://elasticsearch-master.logging.svc.cluster.local:9200"
        manage_template => false
        index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
      }
    }

service:
  type: ClusterIP
  ports:
    - name: beats
      port: 5044
      protocol: TCP
      targetPort: 5044
    - name: http
      port: 8080
      protocol: TCP
      targetPort: 8080

Now, to deploy Logstash, execute the following command:

helm install elk-logstash elastic/logstash -f values-2.yaml --namespace logging
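Since the release is named elk-logstash, the chart exposes a service named elk-logstash-logstash, which is the hostname Filebeat will use later. A quick check (the service and label names are assumed from the chart's naming convention):

```shell
# The beats port (5044) should be listed on the service
kubectl get service --namespace logging elk-logstash-logstash

# The Logstash pod should reach Running
kubectl get pods --namespace logging -l app=elk-logstash-logstash
```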

Deploy Filebeat

Now, we will create a custom values file for the Filebeat Helm chart. Create a file, values-2.yaml (replacing the earlier one), with the following content:

daemonset:
  filebeatConfig:
    filebeat.yml: |
      filebeat.inputs:
      - type: container
        paths:
          - /var/log/containers/*.log
        processors:
        - add_kubernetes_metadata:
            host: ${NODE_NAME}
            matchers:
            - logs_path:
                logs_path: "/var/log/containers/"

      output.logstash:
        hosts: ["elk-logstash-logstash:5044"]

Now, to deploy Filebeat, use the following command:

helm install elk-filebeat elastic/filebeat -f values-2.yaml --namespace logging
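Filebeat runs as a DaemonSet, so there should be one pod per node, each tailing /var/log/containers on its host. To check (the app=elk-filebeat-filebeat label is assumed from the chart's naming convention):

```shell
# DESIRED and READY should match the number of nodes
kubectl get daemonset --namespace logging

kubectl get pods --namespace logging -l app=elk-filebeat-filebeat
```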

Verify ELK installation

If we go to the Discover tab in Kibana and create a data view matching the filebeat-* indices, we can see the container logs stored in Elasticsearch.
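To confirm end to end that logs are reaching Elasticsearch, we can also list the Filebeat indices directly (the hostname is again the example from the earlier values file):

```shell
# One index per day, per the %{+YYYY.MM.dd} pattern in the Logstash output
curl "http://es-elk.s9.devopscloud.link/_cat/indices/filebeat-*?v"
```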


Written by 

Dipayan Pramanik is a DevOps Software Consultant at Knoldus Inc. He is passionate about coding, DevOps tools, automating tasks and is always ready to take up challenges. His hobbies include music and gaming.