An Introduction to Kafka’s Internals

Reading Time: 6 minutes

In this blog, we will look at what Kafka is and explain how Kafka works from the inside out. How does it replicate data between nodes, what happens if replication fails, and how do consumers scale out automatically?

Insights into Apache Kafka

Kafka is a data streaming system that lets developers react to new events as they occur, in real time. Kafka's architecture consists of a storage layer and a compute layer. The storage layer is designed to store records efficiently and is a distributed system, so if your storage needs grow over time you can easily scale out the cluster to accommodate the growth. The compute layer consists of four core components: the producer, consumer, Streams, and Connect APIs, which allow Kafka to scale applications across distributed systems. In this guide, we'll delve into each part of Kafka's internal architecture and how it works.

Inside the Apache Kafka broker

Client requests fall into two categories: produce requests and fetch requests. A produce request asks that a batch of data be written to a specified topic. A fetch request asks for data from Kafka topics. Both types of requests go through some of the same steps. We'll start by looking at the flow of a produce request, and then see how a fetch request differs.

1. Producer Request

When a producer sends an event record, it uses a configurable partitioner to decide which topic partition to assign the record to. If the record has a key, the default partitioner uses a hash of the key to determine the partition. From then on, any record with the same key will always be assigned to the same partition. If the record has no key, a partitioning strategy is used to balance the data across the partitions.
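To make the key-hashing idea concrete, here is a minimal Python sketch of that decision. Note the assumptions: Kafka's real default partitioner hashes the serialized key with murmur2, not md5 (md5 is used here only because it is in the standard library), and recent Kafka versions use a "sticky" strategy rather than plain round-robin for unkeyed records.

```python
import hashlib

def assign_partition(key, num_partitions, next_rr=0):
    # Keyed records: hash the key, so the same key always lands on the same partition.
    if key is not None:
        digest = int.from_bytes(hashlib.md5(key).digest()[:4], "big")
        return digest % num_partitions
    # Unkeyed records: fall back to a balancing strategy (simple round-robin here).
    return next_rr % num_partitions

# The same key is always mapped to the same partition:
p1 = assign_partition(b"user-42", 6)
p2 = assign_partition(b"user-42", 6)
assert p1 == p2
```

The important property is the first assertion: once a key has been hashed, every future record with that key goes to the same partition, which is what gives per-key ordering.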

2. Fetch Request

A consumer sends a fetch request to the broker, specifying the topic, partition, and offset it wants to consume from. The fetch request arrives in the broker's socket receive buffer, where it is picked up by a network thread. The network thread puts the request into the request queue, just as was done with the produce request.
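The fetch semantics can be sketched in a few lines of Python. This is a toy model, not broker code: the partition log is just a list, and the offset is its index. The names `fetch` and `max_records` are illustrative.

```python
def fetch(log, offset, max_records=100):
    # Return records starting at `offset`; an empty list means the consumer is caught up.
    if offset < 0 or offset > len(log):
        raise ValueError("offset out of range")
    return log[offset: offset + max_records]

log = ["a", "b", "c", "d"]
assert fetch(log, 2) == ["c", "d"]
assert fetch(log, 4) == []  # caught up; a real broker can park the request until new data arrives or a timeout fires
```

In the real protocol, a fetch that finds no new data is held in "purgatory" until enough bytes accumulate or the wait time expires, which is why consumers don't have to busy-poll.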

Data Plane: Replication Protocol

Replication is one of the most important capabilities of the Kafka broker, and it handles it well. So well, in fact, that we often don't think about it much beyond setting the replication factor of our topics. But when you consider how much we rely on replication to provide the durability and high availability that we've come to expect from Kafka, it warrants a deeper understanding of how it works. In this module, Jun Rao gives us just that, with detailed explanations and illustrated examples. He covers the roles of partition leaders and followers, the in-sync replica (ISR) list, leader epochs, high watermarks, and more.
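One of those concepts, the high watermark, is easy to show in a sketch. This is a simplification under stated assumptions: we model each replica only by its log-end offset and ignore leader epochs, but the core rule holds: the high watermark is the minimum log-end offset across the in-sync replicas, and only records below it are considered committed and visible to consumers.

```python
def high_watermark(log_end_offsets, isr):
    # Committed data = what every in-sync replica has; so take the minimum.
    return min(log_end_offsets[r] for r in isr)

offsets = {"broker-1": 10, "broker-2": 8, "broker-3": 5}
assert high_watermark(offsets, {"broker-1", "broker-2", "broker-3"}) == 5
# If broker-3 falls behind and is dropped from the ISR, the watermark can advance:
assert high_watermark(offsets, {"broker-1", "broker-2"}) == 8
```

This also shows why a slow follower being removed from the ISR helps consumers: committed data is gated on the slowest in-sync replica.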

The Apache Kafka control plane

We've all heard about the ZooKeeper removal that was first announced with KIP-500. In this module, we'll get a close-up look at ZooKeeper's replacement, KRaft. We'll see some of the advantages of KRaft, including improved scalability and more efficient metadata propagation. Then we'll go through some step-by-step examples of KRaft metadata replication and reconciliation, as well as how the active controller is elected from the available voters.
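For orientation, a broker running in KRaft mode is configured with a handful of properties in `server.properties`. The keys below are real KRaft settings, but the values (node id, host, ports) are placeholders for a single-node development setup, not a recommended production layout:

```properties
# KRaft mode (no ZooKeeper): a node can act as a broker, a controller, or both
process.roles=broker,controller
node.id=1
controller.quorum.voters=1@localhost:9093
listeners=PLAINTEXT://localhost:9092,CONTROLLER://localhost:9093
controller.listener.names=CONTROLLER
```

The `controller.quorum.voters` list is where the "available voters" mentioned above come from: the active controller is elected from exactly this set.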

Consumer group protocol

Consumer groups are the almost magical thing that lets us scale Kafka consumer applications up or down easily and safely. The technology behind that wizard's curtain is the consumer group protocol. In this module, Jun Rao gives us a thorough explanation of how the consumer group protocol works. He covers the group coordinator, group membership, partition assignment strategies, and how they affect the rebalancing process. We'll also look at group coordinator failover, group initialization, and partition offset tracking. We'll even see detailed examples of the different partition assignment strategies in action, including the excellent cooperative sticky assignor. When all is said and done, you might still be wondering whether it really is magic after all.
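To demystify one of those strategies, here is a minimal Python sketch of the range assignor, the simplest assignment strategy: sort the consumers, then hand out contiguous chunks of the partition list, with the first consumers getting one extra partition when the counts don't divide evenly. (This is a simplification: the real assignor works per topic and on member ids; the function name is ours.)

```python
def range_assign(partitions, consumers):
    # Sort consumers so every group member computes the same deterministic answer.
    consumers = sorted(consumers)
    per, extra = divmod(len(partitions), len(consumers))
    assignment, start = {}, 0
    for i, consumer in enumerate(consumers):
        count = per + (1 if i < extra else 0)  # earlier consumers absorb the remainder
        assignment[consumer] = partitions[start:start + count]
        start += count
    return assignment

result = range_assign([0, 1, 2, 3, 4], ["c-b", "c-a"])
assert result == {"c-a": [0, 1, 2], "c-b": [3, 4]}
```

Determinism is the key design point: because every member sorts the same inputs, the coordinator-chosen leader can compute an assignment all members agree on.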

Data Distribution, Data Replication

Each partition within the cluster is owned by a single broker, called the partition leader. The partition can also be shared with multiple brokers; in other words, the partition may be replicated across the cluster. In such cases (when the replication factor is greater than 1) the messages are copied and stored on other brokers throughout the cluster.

Messages are replicated across the cluster using a mechanism called the primary-backup model: one of the replicas acts as the leader, and the others, trying to stay in sync with the leader, act as followers.

Such an architecture provides data redundancy, which protects against data loss, against streaming downtime (under certain conditions), and against delays in serving messages, since leaders are distributed across different brokers.
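That last point, leaders spread across brokers, follows from how replicas are placed. The sketch below is a simplified round-robin placement (Kafka's real algorithm adds a random starting offset and rack awareness, which we omit); the first broker in each replica list is that partition's leader.

```python
def assign_replicas(num_partitions, brokers, replication_factor):
    # Place each partition's leader on the next broker in turn, with its
    # followers on the brokers that come after it, wrapping around.
    assert replication_factor <= len(brokers)
    assignment = {}
    for p in range(num_partitions):
        assignment[p] = [brokers[(p + i) % len(brokers)]
                         for i in range(replication_factor)]
    return assignment

layout = assign_replicas(num_partitions=3, brokers=["b1", "b2", "b3"], replication_factor=2)
assert layout == {0: ["b1", "b2"], 1: ["b2", "b3"], 2: ["b3", "b1"]}
```

Notice that each broker leads exactly one partition here, so client load for the topic is spread over the whole cluster rather than hammering one node.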

Data Retention

When it comes to retaining messages, Kafka provides ways to save space. A well-configured Kafka cluster can prevent delays, overloading, and even failures. There are a few configuration options, depending on specific needs and use cases.

First, Kafka can store messages for a period of time (e.g., 4 hours or 10 days). Another possibility is to set a size threshold (e.g., 5 GB). When these limits are reached, the oldest data is removed from the cluster.
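Both retention rules can be sketched together in Python. Assumptions: we model the log as a list of `(created_at_ms, size_bytes)` segments, oldest first, and deletion happens at whole-segment granularity, which mirrors how Kafka deletes old log segments rather than individual messages.

```python
def apply_retention(segments, now, retention_ms=None, retention_bytes=None):
    # segments: list of (created_at_ms, size_bytes) tuples, oldest first.
    kept = list(segments)
    if retention_ms is not None:
        # Time-based retention: drop segments older than the window.
        kept = [s for s in kept if now - s[0] <= retention_ms]
    if retention_bytes is not None:
        # Size-based retention: drop the oldest segments until under the cap.
        while kept and sum(size for _, size in kept) > retention_bytes:
            kept.pop(0)
    return kept

segments = [(0, 100), (50, 100), (90, 100)]
assert apply_retention(segments, now=100, retention_ms=60) == [(50, 100), (90, 100)]
assert apply_retention(segments, now=100, retention_bytes=150) == [(90, 100)]
```

In real configuration these two knobs correspond to `log.retention.ms` and `log.retention.bytes`, and whichever limit is hit first triggers deletion.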

In most cases, the retention policy is configured cluster-wide. However, retention can also be set for individual topics. That way, a topic that keeps metrics and logs can have a retention period of 6 hours, while a topic that acts as an event store can retain its data indefinitely.

The last, but least common, way to configure a topic is log compaction. In that case, only the latest message for each key will be retained in the topic.
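The keep-latest-per-key semantics of compaction fit in a few lines of Python. This is deliberately simplified: real compaction runs segment by segment in the background and preserves offsets, neither of which we model here.

```python
def compact(log):
    # log: list of (key, value) records, oldest first.
    latest = {}
    for key, value in log:
        latest[key] = value      # a later record for the same key overwrites the earlier one
    return list(latest.items())  # one surviving record per key

log = [("user-1", "v1"), ("user-2", "v1"), ("user-1", "v2")]
assert compact(log) == [("user-1", "v2"), ("user-2", "v1")]
```

This is why a compacted topic works as a changelog: replaying it from the beginning always ends with the latest value for every key.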

Zookeeper & Broker Discovery

This is why each broker is connected to Apache ZooKeeper, which stores all data relating to the cluster (such as brokers, topics, partitions, etc.). Such data is called metadata (or configuration data) and contains the list of available topics, the exact number of partitions of every topic created within the Kafka cluster, the location of each replica, and information about which node has been elected leader.

Speaking of the lead node (or controller): ZooKeeper is where the controller election is performed. If one of the nodes within the cluster fails, it is the controller's job to send information about the failure to the affected partitions and to select a new partition leader.
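The leader-selection step can be sketched as follows. Assumptions are labeled in the comments: this models only "clean" election (a new leader must come from the in-sync replica set), and the preference order is simply the assigned replica list, which is close to but simpler than what the controller actually does.

```python
def elect_leader(replicas, isr, alive):
    # Clean election: pick the first assigned replica that is both alive and in sync.
    for broker in replicas:
        if broker in isr and broker in alive:
            return broker
    # No eligible replica: the partition goes offline
    # (unless unclean leader election is explicitly allowed).
    return None

replicas = ["b1", "b2", "b3"]
assert elect_leader(replicas, isr={"b1", "b2"}, alive={"b2", "b3"}) == "b2"  # b1 is down
assert elect_leader(replicas, isr={"b1"}, alive={"b2", "b3"}) is None
```

The `None` case is the durability/availability trade-off in miniature: Kafka would rather make the partition unavailable than promote an out-of-sync replica and silently lose committed data.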

Why is Kafka so fast?

Kafka provides a high-throughput, highly distributed, fault-tolerant platform with low-latency delivery of messages. The following is the approach Kafka takes to provide these characteristics:

  • Use of the file system: Rather than keeping as much data as possible in memory and flushing it all to the file system in a panic when space runs out, Kafka inverts that approach. Kafka uses sequential I/O: all data is immediately appended to a persistent log on the file system, without necessarily being flushed to disk. In practice, this means it is written to the kernel page cache.
  • Use of a queue: BTrees are the most versatile data structure used by messaging systems, and their operations run in O(log N), which is usually treated as essentially constant time. That assumption does not hold for disk operations, however. Kafka instead builds on a simple log (a queue), where appending and reading are O(1) operations that do not block each other.
  • Batching of data: Kafka avoids a classic source of inefficiency: lots of tiny I/O operations. This problem occurs both between the client and the server and in the server's own persistent operations, so Kafka groups messages into larger batches.
  • Compression of batches: The next bottleneck for any data pipeline is network bandwidth, especially when data must cross a wide-area network. A user could compress each message individually without help from Kafka, but that often yields poor compression ratios, because much of the redundancy is repetition between messages of the same type (e.g., field names in JSON, user agents in web logs, or common string values). Compressing whole batches exploits that redundancy.
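The batch-compression point is easy to demonstrate. The sketch below (Kafka supports gzip among other codecs; the sample records are made up) compresses 100 similar JSON records individually and then as one batch:

```python
import gzip
import json

# Many similar JSON records: most bytes are repeated field names and common values.
records = [json.dumps({"user_agent": "Mozilla/5.0", "path": "/home", "status": 200}).encode()
           for _ in range(100)]

# Compressing each record on its own pays per-record overhead and
# cannot exploit redundancy *between* records...
individual = sum(len(gzip.compress(r)) for r in records)

# ...while compressing the whole batch at once can.
batched = len(gzip.compress(b"".join(records)))

assert batched < individual
assert batched < sum(len(r) for r in records)
```

The gap is dramatic here because the records are near-identical, which is exactly the situation the paragraph above describes: redundancy lives between messages, so the batch is the right unit to compress.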


In conclusion, in this blog we have learned about the internals of Kafka. I will cover more topics in future blogs.

For more, you can refer to the documentation:

For more technical blogs, you can refer to the Knoldus blog:

Written by 

Bhavya is a Software Intern at Knoldus Inc. He completed his graduation at IIMT College of Engineering. He is passionate about Java development and curious to learn Java technologies.