Hi folks, in this blog we will discuss the challenges we face when we want to view logs alongside metrics, and how we can address them using Grafana, Loki, and Promtail.
Problem Statement
When using monitoring solutions like Grafana, Prometheus, or the Elastic Stack for our infrastructure, metrics and logs are decoupled to the extent that it is very difficult to look at an application's logs whenever we notice a spike in its metrics. We want to reach the logs in as few steps as possible, and to be able to relate and visualise them together with the metrics. We also don't want to jump from one application view to another just to compare metrics and logs.
This is the situation where Grafana Labs’ Loki comes into the picture.
What is Loki?
As Loki once said to Iron Man, "We have an army!", and promptly got smashed. Well, this Loki does not have an army, just a few friends, and this time it is the one doing the smashing.
Loki is a very effective log aggregation solution created by Grafana Labs. It was announced at KubeCon 2018 and reached general availability in November 2019. It is by design very cost-effective and easy to operate: it does not index the contents of the logs, but rather a set of labels for each log stream.
Unlike other logging systems, Loki uses the idea of only indexing metadata about your logs: labels (just like Prometheus labels). Log data itself is then compressed and stored in chunks in object stores like S3, GCS, or local file-system. A small index and highly compressed chunks simplifies the operation and significantly lowers the cost of Loki.
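To make the idea concrete, here is a sketch of what a log stream looks like from Loki's point of view (the `host` label is a hypothetical example, not part of our setup):

```
# A log stream is identified only by its label set, e.g.:
{job="kafka", host="broker-1"}

# Only these labels are indexed; the log lines themselves are
# compressed and stored as chunks in the object store.
```

Querying is then a matter of selecting streams by their labels, just as you would select series in Prometheus.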
Another tool that we will be using along with Loki is Promtail, an agent that ships the contents of local log files to a Loki instance. It is usually deployed to every machine that runs applications that need to be monitored. It is responsible for discovering targets, attaching labels to log streams, and pushing them to the Loki instance.
Promtail
Promtail borrows the same service discovery mechanism from Prometheus, although it currently only supports static and Kubernetes service discovery. This limitation exists because Promtail is deployed as a daemon on every local machine, so it does not discover labels from other machines. Kubernetes service discovery fetches the required labels from the Kubernetes API server, while static configuration covers most other use cases.
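For reference, in a Kubernetes cluster a Promtail scrape configuration could look like the following sketch; the relabel rule shown here is illustrative and not part of the setup in this blog:

```yaml
scrape_configs:
  - job_name: kubernetes-pods
    kubernetes_sd_configs:
      - role: pod
    relabel_configs:
      # Promote the pod's "app" label to a Loki label
      - source_labels: [__meta_kubernetes_pod_label_app]
        target_label: app
```

In this blog we stick to static service discovery, since everything runs on a single machine.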
Let us further look at how we can use these tools together to link metrics with logs.
Using Loki and Promtail
For the example in this blog, we will be using the following applications to create a suitable setup:
- Kafka Version: 2.12-2.5.0
- Prometheus Version: 2.6.1
- Grafana Version: 6.4 or above
- Loki version: 0.4.0
- Promtail version: 0.4.0
- JMX Exporter: jmx_prometheus_javaagent-0.3.0.jar
We will be using binaries for Kafka, Prometheus, Loki, and Promtail, and we will run a Docker image for Grafana.
Running Zookeeper
We have downloaded the JMX exporter jar and placed it inside the /opt/jmx-exporter directory as jmx-exporter.jar.
To run Zookeeper:
> EXTRA_ARGS="-javaagent:/opt/jmx-exporter/jmx-exporter.jar=7070:/etc/jmx-exporter/zookeeper.yml" ./bin/zookeeper-server-start.sh ./config/zookeeper.properties
Running Kafka
To run Kafka:
> KAFKA_OPTS='-javaagent:/opt/jmx-exporter/jmx-exporter.jar=7071:/etc/jmx-exporter/kafka.yml' bin/kafka-server-start.sh config/server.properties
Running Loki
To download and set up the Loki binary:
> cd /usr/local/bin
> sudo curl -fSL -o loki.gz "https://github.com/grafana/loki/releases/download/v0.4.0/loki-linux-amd64.gz"
> sudo gunzip loki.gz
> sudo chmod a+x loki
After downloading the Loki binary, we will create a configuration file for Loki in the /usr/local/bin directory itself with the name config-loki.yml.
auth_enabled: false

server:
  http_listen_port: 3100

ingester:
  lifecycler:
    address: 127.0.0.1
    ring:
      kvstore:
        store: inmemory
      replication_factor: 1
    final_sleep: 0s
  chunk_idle_period: 5m
  chunk_retain_period: 30s

schema_config:
  configs:
    - from: 2018-04-15
      store: boltdb
      object_store: filesystem
      schema: v9
      index:
        prefix: index_
        period: 168h

storage_config:
  boltdb:
    directory: /tmp/loki/index
  filesystem:
    directory: /tmp/loki/chunks

limits_config:
  enforce_metric_name: false
  reject_old_samples: true
  reject_old_samples_max_age: 168h

chunk_store_config:
  max_look_back_period: 0

table_manager:
  chunk_tables_provisioning:
    inactive_read_throughput: 0
    inactive_write_throughput: 0
    provisioned_read_throughput: 0
    provisioned_write_throughput: 0
  index_tables_provisioning:
    inactive_read_throughput: 0
    inactive_write_throughput: 0
    provisioned_read_throughput: 0
    provisioned_write_throughput: 0
  retention_deletes_enabled: false
  retention_period: 0
Now we will start Loki:
> sudo loki -config.file /usr/local/bin/config-loki.yml
Running Promtail
To download and set up the Promtail binary:
> cd /usr/local/bin
> sudo curl -fSL -o promtail.gz "https://github.com/grafana/loki/releases/download/v0.4.0/promtail-linux-amd64.gz"
> sudo gunzip promtail.gz
> sudo chmod a+x promtail
After downloading the Promtail binary, we will create a configuration file for Promtail in the /usr/local/bin directory itself, with the name config-promtail.yml.
server:
  http_listen_port: 9080
  grpc_listen_port: 0

positions:
  filename: /tmp/positions.yaml

clients:
  - url: http://127.0.0.1:3100/loki/api/v1/push

scrape_configs:
  - job_name: system
    static_configs:
      - targets:
          - localhost
        labels:
          job: varlogs
          __path__: /var/log/*log
  - job_name: prometheus
    static_configs:
      - targets:
          - localhost
        labels:
          job: prometheus
          __path__: /home/knoldus/kafka-monitoring/3/prometheus-2.6.1.linux-amd64/logs/*log
  - job_name: kafka
    static_configs:
      - targets:
          - localhost
        labels:
          job: kafka
          __path__: /home/knoldus/kafka-monitoring/3/kafka_2.12-2.5.0/logs/server.log
You can set multiple targets under the scrape_configs field, each with its own log path.
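For instance, a Zookeeper job could be added alongside the existing ones; the log path below is hypothetical and depends on where Zookeeper writes its logs on your machine:

```yaml
# An additional entry under scrape_configs
- job_name: zookeeper
  static_configs:
    - targets:
        - localhost
      labels:
        job: zookeeper
        __path__: /path/to/zookeeper/logs/*.log
```

Each job's labels become the stream labels you later query in Grafana.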
Now we will start Promtail:
> sudo promtail -config.file /usr/local/bin/config-promtail.yml
Running Grafana
To run Grafana using docker, we will be using the host network driver here:
> docker run -d --name=grafana --network host grafana/grafana
Running Prometheus
Inside the directory where the Prometheus binary is located, run the following command to start Prometheus and redirect its logs to a suitable location (make sure the logs directory exists first):
> ./prometheus 2> logs/prom.log
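For completeness, Prometheus also needs scrape jobs pointing at the JMX exporter ports we configured earlier; a minimal prometheus.yml sketch could look like the following (the job names are our own choice):

```yaml
scrape_configs:
  - job_name: zookeeper
    static_configs:
      - targets: ['localhost:7070']   # JMX exporter port set via EXTRA_ARGS
  - job_name: kafka
    static_configs:
      - targets: ['localhost:7071']   # JMX exporter port set via KAFKA_OPTS
```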
After completing the above steps, we have to add data sources in Grafana.
One will be for Loki (http://localhost:3100) and another for Prometheus (http://localhost:9090).
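Instead of adding the data sources manually through the UI, you can also provision them by mounting a YAML file into Grafana's provisioning directory (/etc/grafana/provisioning/datasources/); a minimal sketch:

```yaml
apiVersion: 1
datasources:
  - name: Loki
    type: loki
    access: proxy
    url: http://localhost:3100
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://localhost:9090
```

Grafana loads this file at startup, so both data sources are available immediately.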
Viewing Logs:
- Go to the Explore tab
- Select the data source as Loki
- Sample queries -> {job="kafka"}, {job="prometheus"}
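Beyond plain label selectors, LogQL also supports filter expressions on the log line itself; for example (these queries assume the job labels defined in our Promtail config):

```
{job="kafka"} |= "ERROR"        # only lines containing "ERROR"
{job="prometheus"} != "debug"   # lines not containing "debug"
```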
Viewing Logs From a Panel:
- Create a panel with the visualization set to Logs
- Select the data source as Loki
- Sample queries -> {job="kafka"}, {job="prometheus"}
- View this panel and copy its URL
- Now go to another panel
- Go to the General section -> Add links -> Paste the link of the Logs panel
- The link will be visible in the top-left corner of the panel
Another Way:
- Click on the panel's dropdown next to its name
- Select Explore
- (Optional) Split the screen with the button on the top right
- Change the data source to Loki
Conclusion
In this blog, we have seen what Loki is and what problem it solves. We also looked at how to use Loki and Promtail along with some familiar applications to link metrics with logs.
We might look at Loki as an alternative to the Elastic Stack, but with Loki still at an early stage, and given the community support behind the Elastic Stack, there is a long road ahead for it. Even so, it is a very useful tool, and there is no doubt it will give the other tools tough competition.