Hi Druids! In this blog we will download Apache Druid, set up a cluster on a single machine using the tutorial configuration, and also see how to load data from Kafka and query that data.
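As a rough sketch of the single-machine setup, the steps look like the following. The version number and download URL here are examples, and the quickstart script name varies between Druid releases, so check the release you actually download:

```shell
# Download and unpack a Druid release (version/URL are examples)
wget https://archive.apache.org/dist/druid/0.22.1/apache-druid-0.22.1-bin.tar.gz
tar -xzf apache-druid-0.22.1-bin.tar.gz
cd apache-druid-0.22.1

# Start all services on one machine with the bundled tutorial configuration
bin/start-micro-quickstart
```

Once the services are up, the Druid console is served locally and Kafka ingestion can be configured from there.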
Hello friends! In the last few days I was doing some configuration work to extract a functional coverage report using the JaCoCo agent (jacocoagent.jar). First, I should mention that I was working on a Maven project running on Apache Spark 1.5. I followed the steps mentioned below to get functional coverage. (You can comment below if you have a better way to do this 🙂) Continue Reading
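One common way to attach the JaCoCo agent to a Spark job is through the driver and executor JVM options. This is a sketch, not the exact setup from the post: the agent path, output paths, class name, and jar name below are all placeholders:

```shell
# Attach the JaCoCo agent to the Spark driver and executors
# (agent path, destfile paths, class and jar names are examples)
spark-submit \
  --conf "spark.driver.extraJavaOptions=-javaagent:/opt/jacoco/lib/jacocoagent.jar=destfile=/tmp/jacoco-driver.exec" \
  --conf "spark.executor.extraJavaOptions=-javaagent:/opt/jacoco/lib/jacocoagent.jar=destfile=/tmp/jacoco-executor.exec" \
  --class com.example.MyJob \
  target/my-job-1.0.jar
```

The `destfile` option tells the agent where to write the `.exec` coverage file, which can then be turned into a report with the `jacoco-maven-plugin`.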
Hello inquisitor! In the previous blog we looked at the predefined stream receivers of Spark. In this blog we are going to discuss custom receivers in Spark, so that we can source data from any source. If we want to use a custom receiver, we should first know that we are not going to use SparkSession as the entry point; if there are Continue Reading
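A minimal custom receiver follows Spark Streaming's `Receiver` API: extend `Receiver`, start a background thread in `onStart()`, and push records to Spark with `store()`. The sketch below reads lines from a socket; the host and port are examples, and it assumes the Spark Streaming dependency is on the classpath:

```java
import org.apache.spark.storage.StorageLevel;
import org.apache.spark.streaming.receiver.Receiver;

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

// A minimal custom receiver that reads lines from a socket (host/port are examples).
public class LineReceiver extends Receiver<String> {

  private final String host;
  private final int port;

  public LineReceiver(String host, int port) {
    super(StorageLevel.MEMORY_AND_DISK_2());
    this.host = host;
    this.port = port;
  }

  @Override
  public void onStart() {
    // Receive on a separate thread so onStart() returns immediately.
    new Thread(this::receive).start();
  }

  @Override
  public void onStop() {
    // Nothing to do: receive() checks isStopped() and exits on its own.
  }

  private void receive() {
    try (Socket socket = new Socket(host, port);
         BufferedReader reader = new BufferedReader(
             new InputStreamReader(socket.getInputStream(), StandardCharsets.UTF_8))) {
      String line;
      while (!isStopped() && (line = reader.readLine()) != null) {
        store(line); // hand the record to Spark
      }
      restart("Trying to connect again");
    } catch (Exception e) {
      restart("Error receiving data", e);
    }
  }
}
```

The entry point for this is the streaming context rather than SparkSession: something like `jssc.receiverStream(new LineReceiver("localhost", 9999))` on a `JavaStreamingContext` gives back a DStream of the received lines.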
Hello friends! In the last few days I was working on a project with Akka using Java. It was really an amazing experience. Here we will discuss how to use Akka in Java and write test cases for it. If we look at the Akka documentation, an actor is created by extending a class named UntypedActor. But here Continue Reading
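For context, a bare-bones `UntypedActor` looks like the sketch below (this is the classic Akka 2.4-era API the post refers to; `UntypedActor` was later deprecated in favour of `AbstractActor`). The actor, system, and message names are illustrative, and the Akka jars are assumed to be on the classpath:

```java
import akka.actor.ActorRef;
import akka.actor.ActorSystem;
import akka.actor.Props;
import akka.actor.UntypedActor;

// A minimal classic actor: everything arrives through onReceive(Object).
public class Greeter extends UntypedActor {

  @Override
  public void onReceive(Object message) throws Exception {
    if (message instanceof String) {
      // Reply to whoever sent the message.
      getSender().tell("Hello, " + message, getSelf());
    } else {
      // Anything we don't understand goes to the dead-letter channel.
      unhandled(message);
    }
  }

  public static void main(String[] args) {
    ActorSystem system = ActorSystem.create("demo");
    ActorRef greeter = system.actorOf(Props.create(Greeter.class), "greeter");
    greeter.tell("Akka", ActorRef.noSender());
  }
}
```

Because `onReceive` takes a plain `Object`, the `instanceof` checks are where most of the message-protocol logic ends up, which is also what the test cases exercise.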
Hello geeks! We have already discussed how to start programming with Spark in Scala. In this blog we will discuss how to use Hive with Spark 2.0. When you start to work with Hive, you first need a HiveContext (which inherits from SQLContext), along with core-site.xml, hdfs-site.xml, and hive-site.xml for Spark. If you don't configure hive-site.xml, then the context automatically creates a metastore_db in the Continue Reading
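As a minimal sketch of the HiveContext setup (the app name, master, and table are examples, and the Spark/Hive dependencies are assumed to be on the classpath):

```java
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.sql.hive.HiveContext;

// Minimal sketch: running Hive queries from Spark.
public class HiveExample {
  public static void main(String[] args) {
    SparkConf conf = new SparkConf().setAppName("hive-example").setMaster("local[*]");
    JavaSparkContext sc = new JavaSparkContext(conf);

    // HiveContext picks up hive-site.xml from the classpath; without it,
    // a local metastore_db directory is created automatically.
    HiveContext hiveContext = new HiveContext(sc.sc());

    hiveContext.sql("CREATE TABLE IF NOT EXISTS src (key INT, value STRING)");
    hiveContext.sql("SELECT COUNT(*) FROM src").show();
  }
}
```

With hive-site.xml in place, the same code talks to the configured Hive metastore instead of the local metastore_db.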
Hello associate! Hope you are doing well. Today I am going to share some of my programming experience with Apache Spark, so if you are getting started with Apache Spark, this blog may be helpful for you. Prerequisites to start with Apache Spark: Maven or SBT, and Scala. To start with Apache Spark, you first need to either download a pre-built Apache Spark or, Continue Reading
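The pre-built route boils down to a few commands like these (the version and Hadoop build are examples; pick whichever release you want from the Spark download page):

```shell
# Download and unpack a pre-built Spark distribution (version/URL are examples)
wget https://archive.apache.org/dist/spark/spark-2.0.0/spark-2.0.0-bin-hadoop2.7.tgz
tar -xzf spark-2.0.0-bin-hadoop2.7.tgz
cd spark-2.0.0-bin-hadoop2.7

# Launch the interactive Scala shell to start experimenting
bin/spark-shell
```

From the `spark-shell` prompt you get a ready-made Spark context to try the examples with.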