Streaming Data Pipelines using Kafka Connect

Reading Time: 3 minutes

Are the messages produced by one part of an application to Apache Kafka and consumed by another part of that same application, with no other applications interested in those messages?

Let's imagine that the applications consuming these messages may also need to push them into an external system, or pull data in from an external service. Having a framework for moving data between streaming data pipelines and external systems such as databases, key-value stores, and file systems makes this process even simpler for common use cases. This is where Kafka Connect comes in: a framework that eases ingesting data from an external database into Kafka, or moving data from Kafka into an external system.

Kafka Connect API and Kafka Connectors

The Kafka Connect API is a core component of Apache Kafka, introduced in version 0.9. It provides scalable and resilient integration between Kafka and other systems. Kafka Connect can be run as a clustered service across multiple nodes and handles all the tricky business of integration, including:

  1. Scale-out of ingest and egress across nodes for greater throughput
  2. Automatic restart and failover of tasks in the event of node failure
  3. Automatic offset management
  4. Automatic preservation of the source data schema
  5. Use of the data schema to create target objects (e.g. Hive tables when streaming to HDFS, RDBMS tables when streaming to a database)
  6. Single Message Transforms

To use Kafka Connect you simply need a plugin (connector) that integrates with the technology you're interested in. The Confluent Platform ships with several of these connectors, including JDBC, HDFS, Elasticsearch, and S3. You can also create custom connectors.

Type of Connectors

  • Source connectors import data from another system (e.g. JDBCSourceConnector would import a relational database into Kafka).
  • Sink connectors export data (e.g. S3SinkConnector would export the contents of a Kafka topic to an S3 bucket).
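
As an illustration (all names and values below are placeholders, not taken from this post), the two kinds differ mainly in the connector class and in whether they point at an input or an output system. The two sketches would normally live in two separate configuration files:

```properties
# Source connector: pulls data from an external system into Kafka
# (illustrative Confluent JDBC source properties; URL and names are placeholders).
name=jdbc-source
connector.class=io.confluent.connect.jdbc.JdbcSourceConnector
connection.url=jdbc:postgresql://localhost:5432/mydb
topic.prefix=postgres-

# Sink connector: pushes a Kafka topic's contents to an external system
# (illustrative Confluent S3 sink properties; topic and bucket are placeholders).
name=s3-sink
connector.class=io.confluent.connect.s3.S3SinkConnector
topics=orders
s3.bucket.name=my-bucket
```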

Kafka Connect Modes

  • Standalone (single process)
  • Distributed

Standalone mode

In standalone mode, all work is performed in a single process. This configuration is simpler to set up and get started with, and can be useful in situations such as development and testing, but it does not benefit from some of the features of Kafka Connect such as fault tolerance. You can start a standalone process with the following command:

> bin/ config/ [ …]

We can start our first connector setup:

$CONFLUENT_HOME/bin/connect-standalone \
  $CONFLUENT_HOME/etc/kafka/ \
  $CONFLUENT_HOME/etc/kafka/ \
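
The property file names in the command above were lost from this post; with typical Confluent Platform defaults (an assumption, not from the original), the full invocation would look something like:

```shell
# Worker config first, then one or more connector configs
# (file names below are assumed Confluent Platform defaults).
$CONFLUENT_HOME/bin/connect-standalone \
  $CONFLUENT_HOME/etc/kafka/connect-standalone.properties \
  $CONFLUENT_HOME/etc/kafka/connect-file-source.properties \
  $CONFLUENT_HOME/etc/kafka/connect-file-sink.properties
```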

First, we will inspect the content of the topic using the command:

$CONFLUENT_HOME/bin/kafka-console-consumer --bootstrap-server localhost:9092 --topic connect-test --from-beginning

Then we will see that the source connector took the data from the .txt file, converted it into JSON, and sent it to Kafka. A .txt file will also be created in the $CONFLUENT_HOME folder.
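
For example, if the source file contained a line foo, the message shown by the console consumer would look roughly like this (assuming the default JsonConverter with schemas enabled; the exact envelope depends on the configured converter):

```json
{"schema":{"type":"string","optional":false},"payload":"foo"}
```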

As the sink connector extracts the value from the payload attribute and writes it to the destination file, the data in sink.txt has the content of the original .txt file.

Distributed mode

Distributed mode handles automatic balancing of tasks, allows you to scale up (or down) dynamically, and provides fault tolerance both for the active tasks and for configuration and committed offset data. Execution is very similar to standalone mode:

> bin/ config/

We can configure distributed mode in connect-distributed.properties. Parameters are mostly the same as for standalone mode. There are only a few differences:

  • group.id defines the name of the Connect cluster group. The value must be different from any consumer group ID.
  • Connect also stores its connector configurations, offsets, and statuses in internal topics; for each of these topics, we can also define a replication factor.
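
A sketch of what connect-distributed.properties might contain (the topic names and replication factors below are conventional defaults, given only for illustration):

```properties
# Name of the Connect cluster; must not clash with any consumer group id.
group.id=connect-cluster

# Internal topics where Connect stores connector configs, offsets, and statuses
# (names and replication factors here are illustrative defaults).
config.storage.topic=connect-configs
config.storage.replication.factor=1
offset.storage.topic=connect-offsets
offset.storage.replication.factor=1
status.storage.topic=connect-status
status.storage.replication.factor=1
```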

We can start Connect in distributed mode as follows:
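
The command itself was lost from this post; for a plain Apache Kafka installation it would typically be (script and file names are assumed defaults):

```shell
# Distributed mode takes only the worker config; connectors are then
# submitted over the REST API rather than passed on the command line.
bin/connect-distributed.sh config/connect-distributed.properties
```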


Steps to create a Kafka S3 connector with the Kafka Connect API

On the cloud side, first create an S3 bucket with the right permissions. After setting permissions, I simply need to choose a name and a region for my S3 bucket.

To use the S3 connector, we need to set up AWS credentials.

Steps to add credentials

An easy way to do that is to add credentials to the ~/.aws/credentials file or export two environment variables:
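
For example, using the standard AWS SDK environment variables (the values below are placeholders):

```shell
# Standard AWS SDK environment variables; replace the placeholder values
# with your actual credentials.
export AWS_ACCESS_KEY_ID=your-access-key-id
export AWS_SECRET_ACCESS_KEY=your-secret-access-key
```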



Configure the S3 connector by inserting its properties in JSON format:
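
The configuration itself is missing from this post; the sketch below shows what submitting an S3 sink connector to the Connect REST API typically looks like. The connector name, topic, bucket, and region are placeholders, and the property names follow the Confluent S3 sink connector:

```shell
# Submit a hypothetical S3 sink connector config to a Connect worker
# listening on the default REST port 8083 (all values are placeholders).
curl -X POST http://localhost:8083/connectors \
  -H "Content-Type: application/json" \
  -d '{
    "name": "s3-sink",
    "config": {
      "connector.class": "io.confluent.connect.s3.S3SinkConnector",
      "tasks.max": "1",
      "topics": "connect-test",
      "s3.bucket.name": "my-s3-bucket",
      "s3.region": "us-east-1",
      "flush.size": "3",
      "storage.class": "io.confluent.connect.s3.storage.S3Storage",
      "format.class": "io.confluent.connect.s3.format.json.JsonFormat"
    }
  }'
```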



In this blog, we discussed streaming data pipelines using Kafka Connect.


Written by 

Chiranjeev Kumar is a software intern at Knoldus. He is passionate about Java programming. He is recognized as a good team player, a dedicated and responsible professional, and a technology enthusiast. He is a quick learner, curious to learn new technologies. His hobbies include listening to music and playing video games.
