Kafka and Its Important Use Cases

Kafka is an open-source distributed event streaming platform capable of handling trillions of events a day. It provides the messaging backbone for building distributed applications. A streaming platform needs to handle a constant influx of data and process it sequentially and incrementally. Kafka is a platform where you can publish data or subscribe to read it. We use Kafka for building real-time data pipelines and streaming apps. It is a publish-subscribe messaging system that lets you exchange data between applications, servers, and processes.

It is a broker-based solution that operates by maintaining streams of data within a cluster of servers, and it is easy to set up and use. Moreover, it is stable, provides reliable durability, and offers a flexible publish-subscribe/queue model that scales well to any number of consumer groups, with robust replication.
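
To make the publish-subscribe model concrete, here is a minimal Java producer sketch using the official Kafka client. This is an illustration only: the broker address localhost:9092 and the topic name "events" are placeholder assumptions, not from the original post.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class SimpleProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder; point at your cluster
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Publish a single record to the (hypothetical) "events" topic.
            producer.send(new ProducerRecord<>("events", "key-1", "hello, kafka"));
        } // close() flushes any buffered records before returning
    }
}
```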

Advantages of Kafka

  • Reliability − It is distributed, partitioned, replicated, and fault tolerant (see the sketch after this list for creating a replicated topic).
  • Scalability − Its messaging system scales easily without downtime.
  • Durability − It uses a distributed commit log, which means messages are persisted to disk as quickly as possible, making them durable.
  • Performance − It has high throughput for both publishing and subscribing to messages.
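
As a concrete illustration of the replication and partitioning behind these points, the sketch below uses Kafka's AdminClient to create a topic with three partitions and a replication factor of three. The topic name and broker address are placeholders.

```java
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.NewTopic;

public class CreateReplicatedTopic {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder

        try (AdminClient admin = AdminClient.create(props)) {
            // 3 partitions for parallelism, replication factor 3 for fault tolerance.
            NewTopic topic = new NewTopic("events", 3, (short) 3);
            admin.createTopics(Collections.singleton(topic)).all().get();
        }
    }
}
```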

Kafka replicates data and is able to support multiple subscribers. In addition, it automatically rebalances consumers in the event of a failure, which makes it more reliable than similar messaging services. It is a valuable tool in scenarios requiring real-time data processing and application activity tracking, as well as for monitoring purposes.
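
The consumer balancing mentioned above comes from Kafka's consumer groups. A minimal sketch, assuming a topic named "events": every consumer started with the same group.id shares the topic's partitions, and Kafka reassigns partitions automatically if an instance fails.

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class GroupConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder
        // All instances sharing this group.id split the partitions between them;
        // if one instance dies, its partitions are rebalanced to the survivors.
        props.put("group.id", "events-processors"); // hypothetical group name
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("events"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("partition=%d offset=%d value=%s%n",
                            record.partition(), record.offset(), record.value());
                }
            }
        }
    }
}
```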

Use Cases of Kafka

Real-time processing in Kafka

Many modern systems require data to be processed as soon as it becomes available, so models must constantly analyze streams of data, as in the case of IoT devices. Kafka is useful here because it can transmit data from producers to data handlers and on to data storage. Moreover, it allows an immediate trigger whenever there is a deviation.
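
As a sketch of the IoT scenario, the consumer below reads sensor readings and triggers an alert as soon as a value deviates beyond a threshold. The topic name, group id, numeric string values, and threshold are all illustrative assumptions.

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class SensorAlerting {
    private static final double MAX_TEMPERATURE = 80.0; // illustrative threshold

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder
        props.put("group.id", "sensor-alerting");         // hypothetical group
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("iot-sensor-readings")); // hypothetical topic
            while (true) {
                for (ConsumerRecord<String, String> record : consumer.poll(Duration.ofMillis(200))) {
                    // Assumes each record value is a plain numeric string.
                    double reading = Double.parseDouble(record.value());
                    if (reading > MAX_TEMPERATURE) {
                        // Trigger immediately on deviation, e.g. page an operator.
                        System.err.println("ALERT: sensor " + record.key() + " reported " + reading);
                    }
                }
            }
        }
    }
}
```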

In addition, you don't need to build a real-time subscriber from scratch. Once events are flowing into Kafka, you can defer the decision of what to do with the data, and how to process it, to a later time. For instance, you can use Kafka to migrate from a batch-processing pipeline to a real-time pipeline.

Metrics

Kafka is often used for operational monitoring data, aggregating statistics from distributed applications to produce centralized feeds. Operational data here means anything worth monitoring, from technology metrics to security logs to supplier information, and so on.

Website tracking

The original use case for Kafka was to rebuild a user activity tracking pipeline as a set of real-time publish-subscribe feeds. Site activity is published to central topics, with one topic per activity type. Since many messages are generated for each user page view, the volume of activity tracking data is quite high. Site activity refers to page views, searches, or other actions a user can take. These feeds are available for real-time processing, dashboards, and offline analytics in data warehouses such as Google's BigQuery.
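
A minimal sketch of the one-topic-per-activity-type idea; the topic naming scheme and activity types here are illustrative assumptions.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class ActivityTracker {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            track(producer, "user-123", "page-view", "{\"url\": \"/home\"}");
            track(producer, "user-123", "search", "{\"query\": \"kafka\"}");
        }
    }

    // One topic per activity type, e.g. "site-activity.page-view" (hypothetical scheme).
    // Keying by user id keeps each user's events ordered within a partition.
    static void track(KafkaProducer<String, String> producer,
                      String userId, String activityType, String payload) {
        String topic = "site-activity." + activityType;
        producer.send(new ProducerRecord<>(topic, userId, payload));
    }
}
```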

Log Aggregation Solution

Kafka collects logs from multiple servers and makes them available in a standard format to multiple consumers. It abstracts away the details of files and presents logs as a clean stream of messages. This allows for lower-latency processing and easier support for multiple data sources and distributed data consumption.

Stream Processing

Kafka processes data in pipelines that consist of multiple stages: raw input data is consumed from topics and transformed into new topics for further consumption. These new topics become available to users and applications such as Spark Streaming, Storm, etc.
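
The post names Spark Streaming and Storm as downstream processors; as one self-contained way to illustrate a multi-stage pipeline, here is a minimal Kafka Streams sketch that consumes a raw topic, transforms each record, and publishes to a new topic. Topic names and the application id are placeholders.

```java
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;

public class PipelineStage {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "pipeline-stage");   // hypothetical app id
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        // Stage 1: consume raw input; stage 2: transform; stage 3: publish to a new topic.
        KStream<String, String> raw = builder.stream("raw-events");   // placeholder topic
        raw.mapValues(value -> value.trim().toLowerCase())
           .to("cleaned-events");                                     // placeholder topic

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}
```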

Event Sourcing

Event sourcing is a style of application design where state changes are logged as a time-ordered sequence of records. Since Kafka supports the storage of very large amounts of log data, it is an excellent backend for applications built in this style.
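
A sketch of event sourcing on Kafka, under the assumption of a hypothetical "account-events" topic: each state change is appended as a record keyed by the entity's id, so every entity's history stays time-ordered within its partition.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class AccountEventLog {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            String accountId = "account-42"; // hypothetical entity id
            // State changes are appended, never updated in place; records with the
            // same key land on the same partition, preserving their time order.
            producer.send(new ProducerRecord<>("account-events", accountId, "AccountOpened"));
            producer.send(new ProducerRecord<>("account-events", accountId, "Deposited:100"));
            producer.send(new ProducerRecord<>("account-events", accountId, "Withdrawn:30"));
        }
    }
}
```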

Commit Log

Kafka can serve as a kind of external commit log for a distributed system. The log helps replicate data between nodes and acts as a re-syncing mechanism for failed nodes to restore their data. It can also act as a pseudo commit log. For instance, if a user is tracking device data from IoT sensors and finds that not all of the data is being stored in the database, the user can replay the data to fill in the missing information.
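
The replay scenario can be sketched with the consumer's seek API: assign the topic's partitions explicitly, seek to the beginning, and re-consume everything to restore the missing rows. The topic name, broker address, and stop condition are illustrative assumptions.

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import java.util.stream.Collectors;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.StringDeserializer;

public class ReplaySensorData {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder
        props.put("enable.auto.commit", "false");          // no group, so no offset commits
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // Assign partitions manually (no consumer group) so we control the offsets.
            List<TopicPartition> partitions = consumer.partitionsFor("iot-sensor-readings").stream()
                    .map(info -> new TopicPartition(info.topic(), info.partition()))
                    .collect(Collectors.toList());
            consumer.assign(partitions);
            consumer.seekToBeginning(partitions); // rewind to replay the full log

            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                if (records.isEmpty()) break; // naive stop condition for a one-off backfill
                for (ConsumerRecord<String, String> record : records) {
                    // Re-insert the reading into the database (stubbed out here).
                    System.out.println("restoring offset " + record.offset() + ": " + record.value());
                }
            }
        }
    }
}
```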

Written by 

Shivani Sarthi is a Software Consultant at Knoldus Software. She has completed her MCA from BCIIT and a Bachelors in Electronic Science from Delhi University. She has a keen interest in learning new technologies. Her practice area is DevOps. When not working, you will find her watching anime or with a book.
