Streaming Solutions

Big Data Evolution: Migrating on-premise database to Hadoop

We are now generating massive volumes of data at an accelerated rate. To meet business needs, respond to changing market dynamics, and improve decision-making, we need sophisticated analysis of this data from disparate sources. The challenge is how to capture, store, and model these massive pools of data effectively in relational databases. Big data is not a fad. We are just at the beginning Continue Reading

Using Vertica with Spark-Kafka: Write using Structured Streaming

In two previous blogs, we explored Vertica and how it can be connected to Apache Spark. The first blog in this mini-series was about reading data from Vertica using Spark and saving that data into Kafka. The next blog explained the reverse flow, i.e. reading data from Kafka and writing it to Vertica, but in batch mode: reading data from Kafka Continue Reading
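
As a flavour of the approach, here is a minimal sketch (not the post's actual code) of reading from Kafka with Structured Streaming and writing each micro-batch to Vertica over JDBC. It assumes the spark-sql-kafka connector and the Vertica JDBC driver are on the classpath; the broker address, topic, JDBC URL, table, and credentials are all hypothetical:

```scala
import org.apache.spark.sql.{DataFrame, SparkSession}

object KafkaToVertica extends App {
  val spark = SparkSession.builder()
    .appName("kafka-to-vertica")
    .master("local[*]")
    .getOrCreate()

  // Stream records from Kafka (broker address and topic are hypothetical)
  val kafkaDf = spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")
    .option("subscribe", "source-topic")
    .load()
    .selectExpr("CAST(key AS STRING) AS key", "CAST(value AS STRING) AS value")

  // Structured Streaming has no dedicated Vertica sink, so one option is
  // foreachBatch with a plain JDBC write per micro-batch
  val writeToVertica: (DataFrame, Long) => Unit = (batch, _) =>
    batch.write
      .format("jdbc")
      .option("url", "jdbc:vertica://vertica-host:5433/db") // hypothetical URL
      .option("dbtable", "public.target_table")             // hypothetical table
      .option("user", "dbadmin")
      .option("password", "secret")
      .mode("append")
      .save()

  kafkaDf.writeStream
    .foreachBatch(writeToVertica)
    .start()
    .awaitTermination()
}
```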

Monitoring Kafka with Prometheus and Grafana

Kafka monitoring is an operation that helps optimize a Kafka deployment. The process is easy and efficient if you apply one of the existing monitoring solutions instead of building your own. Let’s say we use Apache Kafka for message transfer and processing, and we want to monitor it. But before learning the steps for monitoring, let’s first understand the prerequisites. Kafka: It is Continue Reading

Flinkathon: Guide to setting up a Local Flink Cluster

In our previous blog post, Flinkathon: First Step towards Flink’s DataStream API, we created our first streaming application using Apache Flink. It was easy, clean, and concise. However, the real power of Apache Flink shows on a cluster, where data is processed in a distributed manner and can take advantage of multi-core, multi-machine systems. So, in this blog post, we will see how to set up Continue Reading
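
For a taste of what this looks like in code, here is a minimal sketch using Flink's Scala API. createLocalEnvironment spins up an embedded mini-cluster inside the current JVM; for a standalone cluster started with bin/start-cluster.sh, createRemoteEnvironment (with a hypothetical host, port, and jar path) would be the equivalent:

```scala
import org.apache.flink.streaming.api.scala._

object LocalClusterJob extends App {
  // An embedded mini-cluster in this JVM, with parallelism 2. Against a
  // standalone cluster you could instead use
  // StreamExecutionEnvironment.createRemoteEnvironment("localhost", 8081, "job.jar")
  val env = StreamExecutionEnvironment.createLocalEnvironment(parallelism = 2)

  env.fromElements("flink", "runs", "distributed")
    .map(word => (word, word.length))
    .print()

  env.execute("local-cluster-sketch")
}
```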

Determine Kafka broker health using a Kafka Streams application’s JMX metrics and set up a Grafana alert

As we all know, Kafka exposes JMX metrics, whether from a Kafka broker, a connector, or a Kafka application. A few days ago, I came across a scenario where I needed to determine Kafka broker health with the help of a Kafka Streams application’s JMX metrics. It looks a bit awkward, right? I should be using the broker’s JMX metrics for this, so why am I looking at the application’s JMX Continue Reading
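
To illustrate the idea (a sketch, not the post's actual code): inside the Streams application's JVM, the embedded Kafka clients register MBeans such as kafka.producer:type=producer-metrics, whose connection-count attribute reflects live broker connections:

```scala
import java.lang.management.ManagementFactory
import javax.management.ObjectName
import scala.jdk.CollectionConverters._

object BrokerHealthProbe {
  // Run inside the Kafka Streams JVM: its embedded producers register
  // MBeans named kafka.producer:type=producer-metrics,client-id=...
  def producerConnectionCounts(): Map[String, Double] = {
    val mbs = ManagementFactory.getPlatformMBeanServer
    val pattern = new ObjectName("kafka.producer:type=producer-metrics,client-id=*")
    mbs.queryNames(pattern, null).asScala.map { name =>
      val count = mbs.getAttribute(name, "connection-count").asInstanceOf[Double]
      name.getKeyProperty("client-id") -> count
    }.toMap
  }
}
```

A connection count of 0.0 across all clients while traffic is expected is the kind of signal that could feed a Grafana alert.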

Knolx: Alpakka - Connecting Kafka & Elasticsearch to Akka Streams

Hi all, Knoldus organized a 30-minute session on 1st March 2019 at 3:30 PM. The topic was Alpakka - Connecting Kafka and Elasticsearch to Akka Streams. Many people joined and enjoyed the session. I am going to share the slides here. Please let me know if you have any questions related to the linked slides or video. The slides of the KnolX are here: And Continue Reading

Flinkathon: First Step towards Flink’s DataStream API

In our previous blog posts, Flinkathon: Why Flink is better for Stateful Streaming applications? and Flinkathon: What makes Flink better than Kafka Streams?, we saw why Apache Flink is a better choice for streaming applications. In this blog post, we will explore how easy it is to express a streaming application using Apache Flink’s DataStream API. DataStream API: The DataStream API is used to develop regular programs Continue Reading
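
For instance, a classic streaming word count takes only a few lines in the Scala DataStream API (a sketch, not the post's exact example):

```scala
import org.apache.flink.streaming.api.scala._

object StreamingWordCount extends App {
  val env = StreamExecutionEnvironment.getExecutionEnvironment

  env.fromElements("to be or not to be")
    .flatMap(_.toLowerCase.split("\\s+"))
    .map(word => (word, 1))
    .keyBy(_._1) // partition the stream by word
    .sum(1)      // maintain a running count per word
    .print()

  env.execute("streaming-word-count")
}
```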

Flinkathon: What makes Flink better than Kafka Streams?

Initially, I would like you all to focus on a few questions before comparing the frameworks: 1. Is there any comparison or similarity between Flink and Kafka Streams? 2. What could be better in Flink over Kafka Streams? 3. Is it the problem or the system requirement that dictates using one over the other? Before talking about Flink’s advantages and use cases over Kafka Streams, let’s first understand their Continue Reading

Kafka: Consumer – Push vs Pull approach

Have you ever thought about the push vs. pull approach for a system, and which one suits or solves which problem? Another question: why did Kafka choose pull over push for its consumers? Before talking about the Kafka approach, i.e. whether the broker should push data to the consumer or the consumer should pull it from Kafka, let’s first understand both approaches, as each one has its Continue Reading
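
The pull model is visible right in the consumer API: the application calls poll() whenever it is ready for more records, so a slow consumer is never flooded the way a push model could flood it. A minimal sketch with the standard Kafka consumer from Scala (the broker address, group id, and topic are hypothetical):

```scala
import java.time.Duration
import java.util.{Collections, Properties}
import org.apache.kafka.clients.consumer.KafkaConsumer
import scala.jdk.CollectionConverters._

object PullingConsumer extends App {
  val props = new Properties()
  props.put("bootstrap.servers", "localhost:9092")
  props.put("group.id", "demo-group") // hypothetical group id
  props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer")
  props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer")

  val consumer = new KafkaConsumer[String, String](props)
  consumer.subscribe(Collections.singletonList("demo-topic")) // hypothetical topic

  // The consumer sets the pace: records move only when poll() asks for them
  while (true) {
    val records = consumer.poll(Duration.ofMillis(500))
    records.asScala.foreach(r => println(s"offset=${r.offset} value=${r.value}"))
  }
}
```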

Working with Project Reactor: Reactive Streams

The words “Reactive” and “Streams” often go hand in hand. The Streams API of Java 8 is a great tool for making your projects reactive. But that’s not the only stream you can have. In this blog, I’d like to talk about an awesome project called Project Reactor.
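
Reactor's core types, Flux (a stream of 0..N elements) and Mono (0..1), are plain Java classes and work from Scala as well. A tiny sketch, assuming reactor-core is on the classpath:

```scala
import reactor.core.publisher.Flux

object ReactorHello extends App {
  // Assembly: nothing runs yet, this only describes the pipeline
  val words = Flux.just("reactive", "streams", "with", "reactor")
    .map(_.toUpperCase)
    .filter(_.length > 4)

  // Execution starts only when a subscriber arrives
  words.subscribe(word => println(word))
}
```

The separation between assembling a pipeline and executing it on subscription is the heart of the reactive model.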

Running jmx2graphite as a java agent to push the JMX metrics into Graphite

In my previous blog, we discussed how to monitor a Kafka Streams application using Grafana and Graphite. In that solution, we used jmx2graphite as a metrics exporter: it pulls metrics from the Jolokia URL, where Jolokia exposes the JMX metrics, and pushes them to Graphite. But there is a problem with that solution: we need to deploy one jmx2graphite instance per service. So Continue Reading

Reactivate your streams with Reactive Streams!!

As you all might know by now, one of the hot topics for quite some time has been the streaming of big data. Day after day, we see tons of streaming technologies out there competing with one another. The reason is obvious: processing big volumes of data is not enough. We need real-time processing of data, especially when we need to handle continuously increasing Continue Reading

Spark Streaming vs. Structured Streaming

Fan of Apache Spark? I am too. The reason is simple: interesting APIs to work with, fast and distributed processing, no I/O overhead between stages (unlike MapReduce), fault tolerance, and much more. With all this, you can do a lot in the world of big data and fast data. From processing huge chunks of data to working on streaming data, Spark handles it all flawlessly. In this Continue Reading
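
The headline difference is the programming model: Spark Streaming's DStreams expose a stream as a sequence of RDD micro-batches, whereas Structured Streaming treats it as an unbounded DataFrame. A minimal Structured Streaming sketch, assuming a line source on a local socket (e.g. started with nc -lk 9999):

```scala
import org.apache.spark.sql.SparkSession

object StructuredWordCount extends App {
  val spark = SparkSession.builder()
    .appName("structured-word-count")
    .master("local[*]")
    .getOrCreate()
  import spark.implicits._

  // The stream is just an unbounded DataFrame of lines
  val lines = spark.readStream
    .format("socket")
    .option("host", "localhost")
    .option("port", 9999)
    .load()

  lines.as[String]
    .flatMap(_.split("\\s+"))
    .groupBy("value") // each word lands in the single "value" column
    .count()
    .writeStream
    .outputMode("complete") // emit the full, updated counts on every trigger
    .format("console")
    .start()
    .awaitTermination()
}
```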
