As we know, we use connectors to copy data between Apache Kafka and other systems that we want to fetch data from or send data to, and these connectors can be downloaded from Confluent Hub. So, in this blog, we will see how we can set up Confluent Platform on our local system and how we can run Confluent services like Kafka, ZooKeeper, Schema Registry, etc.
The first step is: if we have a Windows system, we should set up a Linux environment, because the Windows operating system is not supported by Confluent at the moment.
The second step is to install Java. We can check the version of Java by running the below command in the terminal.
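A quick version check looks like this:

```shell
# Print the version of the Java runtime currently on the PATH
java -version
```

Confluent Platform documents specific supported Java versions, so it is worth confirming the printed version is one of them.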
The third step is to download the Confluent bundle. There are two ways to do this. The first is to visit the confluent.io portal, provide your email address, download the zip or tar bundle, and manually copy it to your Linux home directory.
The second method is to use the curl command, which downloads the zip or tar bundle directly into your Linux home directory.
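A sketch of the curl approach is below; the version number is only an example, so substitute the release you actually want from the Confluent download page:

```shell
# Download the Confluent Platform tarball into the current directory
# (7.5.0 is an example version, not necessarily the latest)
curl -O https://packages.confluent.io/archive/7.5/confluent-7.5.0.tar.gz

# Unpack it; this creates a confluent-7.5.0/ directory
tar -xzf confluent-7.5.0.tar.gz
```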
After extracting the bundle, the next step is to set environment variables. We basically need two: CONFLUENT_HOME, which refers to the Confluent install directory, and PATH, which must include the bin directory under CONFLUENT_HOME.
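Assuming the bundle was extracted into the home directory, the two variables can be set roughly like this (the exact path depends on the version you downloaded):

```shell
# CONFLUENT_HOME points at the extracted install directory
# (example path; adjust it to where you unpacked the bundle)
export CONFLUENT_HOME=~/confluent-7.5.0

# Put the bin directory under CONFLUENT_HOME on the PATH
export PATH="$CONFLUENT_HOME/bin:$PATH"
```

Adding these two lines to ~/.bashrc makes them persist across terminal sessions.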
Now the next step is to run the Confluent services like Kafka, ZooKeeper, etc. We can run the command `confluent local services start` to start all the services.
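With the bin directory on the PATH, starting and checking the local services looks like this:

```shell
# Start all local Confluent services (ZooKeeper, Kafka, Schema Registry,
# Kafka Connect, ksqlDB, Control Center, ...)
confluent local services start

# Verify that every service came up
confluent local services status
```

Note that `confluent local` is intended for development on a single machine, not for production deployments.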
Now go to the browser and open the Control Center URL, http://localhost:9021/. It will open Control Center, confirming that our services are running successfully.
- The first tab, Topics, gives you an overview of each topic, i.e., production and consumption specific to that topic.
- Next is the Messages tab, from which we can view or download the messages in a Kafka topic. As of now we don’t have any messages in this topic, which is why none are shown here.
- The third tab is the Schema tab. As we know, data in a Kafka topic is always in the form of key-value pairs, so here we can define what type we want for the key and for the value; this is where the key and value schemas are shown.
- The last tab has all the configuration of the topic, like the number of replicas, the number of partitions, etc.
- The next page is Connect, which is part of the Kafka Connect feature set. It manages, monitors, and configures source and sink connectors.
- Next we have ksqlDB. Here we can execute queries against topic data and browse or download messages from the query results. It also allows us to create a stream or table over the topic data by running customized queries.
- Next we have Consumers. This page lists the consumer groups present in your cluster, and for each consumer group it shows the number of messages behind (the consumer lag), the number of consumers, and which topics the group is consuming.
- Then we have Replicators, which monitors and configures replicated topics. At the moment we don’t have any replicated topics, which is why it shows no replicators found.
- Last, we have Cluster Settings. These are the general settings of the cluster, such as the cluster name and cluster ID.
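To see the Topics and Messages tabs with real data, one option is to create a topic and produce a few messages from the command line; `demo-topic` is just an example name here, and the tools below ship in the bin directory of the Confluent bundle:

```shell
# Create a one-partition test topic on the local broker
kafka-topics --bootstrap-server localhost:9092 \
  --create --topic demo-topic --partitions 1 --replication-factor 1

# Produce two messages; they should then appear under the topic's
# Messages tab in Control Center
printf 'hello\nworld\n' | kafka-console-producer \
  --bootstrap-server localhost:9092 --topic demo-topic
```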
In this blog, we have learned how to set up Confluent on our local system and use its services like Kafka, ZooKeeper, etc.
- If you want to learn more about this, please follow this
- and if you want to learn Kafka Connect concepts, follow this