Sink Connector: The MarkLogic Kafka Connector

Reading Time: 2 minutes

The MarkLogic Kafka connector is a sink connector that receives messages from Kafka and writes them to a MarkLogic database. It pulls messages from one or more Kafka topics and stores them in MarkLogic as JSON documents, with no custom code required. Under the hood, the connector uses the MarkLogic Data Movement SDK (DMSDK) to write those messages to the database efficiently.
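As a sketch of how the connector is wired up, a Kafka Connect sink configuration for it might look like the following. The exact property names and values here are illustrative and can vary between connector versions, so consult the connector's own documentation before using them:

```properties
# Illustrative Kafka Connect sink configuration for the MarkLogic connector
name=marklogic-sink
connector.class=com.marklogic.kafka.connect.sink.MarkLogicSinkConnector
tasks.max=1
topics=marklogic-topic

# MarkLogic connection details (example values)
ml.connection.host=localhost
ml.connection.port=8000
ml.connection.username=admin
ml.connection.password=admin

# Store each Kafka message as a JSON document in a collection
ml.document.format=JSON
ml.document.collections=kafka-data
ml.document.uriPrefix=/kafka/
ml.document.uriSuffix=.json
```

With a configuration like this deployed to a Kafka Connect worker, every message published to the configured topic is written to MarkLogic as a JSON document, without any application code.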


MarkLogic Server

MarkLogic Server is an enterprise NoSQL database. Its unique architecture makes applications both scalable and highly performant. MarkLogic Server is a multi-model database: it stores data as documents rather than requiring a fixed schema.

Apache Kafka

Apache Kafka is an open-source streaming platform used by thousands of companies for high-performance data pipelines, streaming analytics, data integration, and mission-critical applications.
Kafka is a distributed system consisting of servers and clients that communicate via a TCP network protocol. 

MarkLogic Data Movement SDK

The MarkLogic Data Movement SDK supports long-running write, read, delete, and transform jobs. Long-running write jobs are enabled by WriteBatcher. Long-running read, delete, and transform jobs are enabled by QueryBatcher, which performs actions on all documents matching a query, or on all URIs provided by an Iterator.
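To illustrate the WriteBatcher side of DMSDK, here is a minimal Java sketch. It assumes a MarkLogic instance reachable at localhost:8000 with admin credentials (example values only) and the MarkLogic Java Client API on the classpath, so treat it as a starting point rather than production code:

```java
import com.marklogic.client.DatabaseClient;
import com.marklogic.client.DatabaseClientFactory;
import com.marklogic.client.datamovement.DataMovementManager;
import com.marklogic.client.datamovement.WriteBatcher;
import com.marklogic.client.io.StringHandle;

public class WriteBatcherSketch {
    public static void main(String[] args) {
        // Connection details are illustrative; adjust for your environment.
        DatabaseClient client = DatabaseClientFactory.newClient(
            "localhost", 8000,
            new DatabaseClientFactory.DigestAuthContext("admin", "admin"));

        DataMovementManager manager = client.newDataMovementManager();

        // WriteBatcher accumulates documents and writes them in batches
        // across multiple threads.
        WriteBatcher batcher = manager.newWriteBatcher()
            .withBatchSize(100)
            .withThreadCount(4);
        manager.startJob(batcher);

        // Queue a JSON document; full batches are flushed automatically.
        batcher.add("/example/doc1.json",
            new StringHandle("{\"hello\":\"world\"}"));

        // Write any remaining queued documents and wait for completion.
        batcher.flushAndWait();
        manager.stopJob(batcher);
        client.release();
    }
}
```

The MarkLogic Kafka connector performs essentially this pattern for you: each batch of Kafka records is queued onto a WriteBatcher and flushed to the database.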

MarkLogic Kafka Connector Advantages

  • The MarkLogic Kafka Connector is convenient and straightforward to use. 
  • Each component may be set up and integrated without writing any code.
  • Kafka with MarkLogic’s ACID properties has extremely high reliability.
  • A system made up of Kafka, MarkLogic, and Kafka Connector is broadly scalable and reliable.
  • The MarkLogic Kafka connector can be used to prevent data loss.
  • As resources become saturated, each component can be scaled independently to meet data-flow requirements.
  • These components are compatible with AWS Cloud Computing Services.

MarkLogic Kafka Connector Use Cases

  • Near real-time ingestion requirements.
  • Streaming large amounts of data into a MarkLogic database.
  • Regulating the flow of traffic towards MarkLogic.
  • Maintaining the order of messages.
  • Integrating with an existing Kafka ecosystem.
  • Replaying messages using an offset.

In the next blog, we will work through a complete working demo of the MarkLogic Kafka connector.

Thanks for reading!

Written by 

I am a Software Consultant at Knoldus, and I am curious about learning new technologies.