In this blog article, we aim to give the reader a sense of why monitoring application logs is necessary, along with a methodology for monitoring them, illustrated with an example.
Let's say you have a use case where you would like to monitor important aspects of your application: for example, "temperature" as a "measure" or "facet" across multiple devices emitting data in an IoT system for monitoring home temperatures. You would like to raise an alert when abnormal conditions arise, i.e., when a temperature spike exceeds a specific threshold. Hold on to that thought, because we will use this example in subsequent sections to monitor data emitted by different sensors.
First, let us understand the necessity of monitoring logs.
Why do we need to monitor logs?
1) To validate the actual behaviour of your application against its expected behaviour.
2) To manage the system resources your application consumes (CPU, memory, etc.).
3) To keep track of essential fields inside your application, such as temperature or the number of files received, depending on your application.
4) To gather intelligence and improve your business as well.
QuickStart Guide to Obtaining Metrics from Application Logs
1. Background of Grok Exporter
Grok provides functionality to parse unstructured log data into something structured and queryable. Grok is heavily used in Logstash to provide log data as input for Elasticsearch. It is easy to extend Grok with custom patterns.
The grok_exporter aims at porting Grok from the ELK stack to Prometheus monitoring. The goal is to use Grok patterns for extracting Prometheus metrics from arbitrary log files.
Note: You can use any exporter to extract metrics from your logs. For the scope of this example, I have used grok_exporter.
Download it from https://github.com/fstab/grok_exporter/releases.
2. Write a Parser Using Grok
Write a parser in config.yml at ~/grok_exporter-0.2.7.linux-amd64/example/ that reads the logs and converts them into metrics.
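As an illustration, a minimal config.yml might look like the sketch below. The log file path, the log line format, and the metric and label names (temperature, device) are assumptions for this example; adjust them to match your own logs:

```yaml
global:
  config_version: 2
input:
  type: file
  path: ./example/sensor.log   # assumed log file location
  readall: true                # read the file from the beginning
grok:
  patterns_dir: ./patterns
metrics:
  - type: gauge
    name: temperature
    help: Temperature reported by each device.
    # Assumed log line format: "Device-Id=Device44 Temperature=27.5"
    match: 'Device-Id=%{WORD:device} Temperature=%{NUMBER:temp}'
    value: '{{.temp}}'
    labels:
      device: '{{.device}}'
server:
  port: 9144
```

The gauge type fits a reading like temperature that can go up and down; a counter would suit monotonically increasing values such as the number of files received.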
Now run ./grok_exporter -config ./example/config.yml inside the grok_exporter directory. You should be able to see the metrics at localhost:9144/metrics, as depicted in the image below:
In the screenshot above you can see that my metrics, Device-Id and Temperature, are exposed at localhost:9144/metrics.
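For readers without the screenshot, metrics are exposed in the Prometheus text format; a temperature gauge labelled by device would look roughly like this (the metric name, label, and value here are illustrative):

```
# HELP temperature Temperature reported by each device.
# TYPE temperature gauge
temperature{device="Device44"} 27.5
```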
Note: You can write your custom patterns inside the patterns directory and use them in config.yml.
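For instance, a custom pattern file (say, patterns/custom, a hypothetical name) could define a pattern for the device identifier, which a match expression can then reference as %{DEVICE_ID:device}:

```
# Matches identifiers like "Device44" (identifier format assumed for this example)
DEVICE_ID Device[0-9]+
```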
3. Use Prometheus for Visualizing your custom metrics
Start Prometheus using the command below:
sudo systemctl start prometheus
Check its status using the command below:
sudo systemctl status prometheus
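Prometheus will only pick up the exporter's metrics if it is configured to scrape them. Assuming grok_exporter is listening on localhost:9144 as above, a scrape entry in prometheus.yml might look like this (the job name is an assumption):

```yaml
scrape_configs:
  - job_name: 'grok_exporter'
    scrape_interval: 15s
    static_configs:
      - targets: ['localhost:9144']
```

Remember to restart or reload Prometheus after editing its configuration.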
Once Prometheus is up and running, you should be able to see the Prometheus UI at localhost:9090/graph. In the screenshot attached below, you can see the metrics that you created using the parser, inside Prometheus.
Refer to the link below for more details on Prometheus:
4. Add Alerts to the Monitored Metrics
Now visualize your metrics using Grafana:
- Start Grafana using the command below:
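On systemd-based distributions, the Grafana service is typically named grafana-server (an assumption; adjust to your install method), so it can be started and checked the same way as Prometheus above:

```shell
sudo systemctl start grafana-server
sudo systemctl status grafana-server
```

The Grafana UI should then be reachable at localhost:3000, its default port.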
Create an alerting rule corresponding to the abnormal conditions. You should then see the spike on the dashboard whenever the temperature exceeds the threshold.
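As an alternative to configuring the alert in the Grafana UI, the same condition can be expressed as a Prometheus alerting rule. Here is a sketch assuming the metric is named temperature and the abnormal threshold is 40 degrees (both assumptions for this example):

```yaml
groups:
  - name: temperature-alerts
    rules:
      - alert: HighTemperature
        expr: temperature > 40    # threshold assumed for this example
        for: 2m                   # must hold for 2 minutes before firing
        labels:
          severity: warning
        annotations:
          summary: 'Temperature spike on {{ $labels.device }}'
```

The for clause guards against alerting on a single noisy reading; tune it to how quickly you need to react.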
Refer to the link below for more information on Grafana.
Result of Monitoring Logs
The graph above clearly depicts a spike in temperature for "Device44". You can set an alerting rule in Grafana to get notified on your channel of choice, such as mail or Slack.
In this post, we have walked through a way to trace our application logs and generate alerts as soon as an unexpected condition arises. Stay tuned for more interesting blogs like this.