Author: Rahul Kumar

Functional Coverage with the JaCoCo Agent in a Maven Project (using Apache Spark)

Reading Time: 2 minutes Hello friends! Recently I was doing some configuration work to extract a functional coverage report using the JaCoCo agent (jacocoagent.jar). First, I would mention that I was working on a Maven project, and it was running on Apache Spark 1.5. I followed the steps mentioned below to get functional coverage. (You can comment below if you have a better way to do this 🙂) Continue Reading
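The general idea in this teaser can be sketched as attaching the JaCoCo agent to the Spark JVMs at launch time. This is a hypothetical invocation, not the post's exact setup: the agent path, `destfile` locations, main class, and jar name are all illustrative assumptions.

```shell
# Attach the JaCoCo agent to both the driver and executor JVMs so that
# coverage data is written to .exec files while the Spark job runs.
# Paths and class names below are placeholders -- adjust to your project.
spark-submit \
  --class com.example.MainJob \
  --conf "spark.driver.extraJavaOptions=-javaagent:/opt/jacoco/jacocoagent.jar=destfile=/tmp/jacoco-driver.exec" \
  --conf "spark.executor.extraJavaOptions=-javaagent:/opt/jacoco/jacocoagent.jar=destfile=/tmp/jacoco-executor.exec" \
  target/my-spark-job.jar
```

The resulting `.exec` files can then be turned into an HTML report with the `jacoco:report` Maven goal or the JaCoCo CLI.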

Schedule a Job in Play with Akka

Reading Time: < 1 minute Greetings to all! In this blog we will see how we can schedule a job to run after some time period in Play using Scala. I assume that the reader of this blog has a little knowledge of: Akka, Play, and Scala. So here we start with an actor in which we will write the job to be scheduled, as follows:
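A minimal sketch of the approach described here, using a classic Akka actor and the scheduler's `scheduleOnce`; the actor name, the message, and the 10-second delay are illustrative assumptions, not details from the post:

```scala
import akka.actor.{Actor, ActorSystem, Props}
import scala.concurrent.duration._

// A simple actor whose only job is to run the scheduled task
// when it receives the "runJob" message.
class JobActor extends Actor {
  def receive = {
    case "runJob" => println("Running the scheduled job")
  }
}

object ScheduleDemo extends App {
  val system   = ActorSystem("scheduler-demo")
  val jobActor = system.actorOf(Props[JobActor], "job-actor")

  import system.dispatcher // execution context used by the scheduler

  // Deliver "runJob" to the actor once, 10 seconds from now.
  system.scheduler.scheduleOnce(10.seconds, jobActor, "runJob")
}
```

In a Play application the same call would typically live in a module or on application start-up, using Play's injected `ActorSystem` rather than creating one by hand.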

Streaming with Apache Spark Custom Receiver

Reading Time: 2 minutes Hello, inquisitor. In the previous blog we looked at Spark's predefined stream receivers. In this blog we are going to discuss Spark's custom receiver, so that we can source data from anywhere. If we want to use a custom receiver, we should first know that we are not going to use SparkSession as the entry point; if there are Continue Reading
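A rough sketch of what a custom receiver looks like, assuming the Spark Streaming (DStream) API the post refers to; the counter source and one-second interval are stand-ins for whatever external source you would actually read from:

```scala
import org.apache.spark.storage.StorageLevel
import org.apache.spark.streaming.receiver.Receiver

// A minimal custom receiver that emits a counter value every second.
// Replace the loop body with reads from any external source.
class CounterReceiver extends Receiver[String](StorageLevel.MEMORY_ONLY) {

  def onStart(): Unit = {
    // Receive data on a separate thread so onStart() returns quickly.
    new Thread("counter-receiver") {
      override def run(): Unit = {
        var i = 0
        while (!isStopped()) {
          store(s"record-$i") // hand each record over to Spark
          i += 1
          Thread.sleep(1000)
        }
      }
    }.start()
  }

  def onStop(): Unit = () // the thread above checks isStopped() and exits
}
```

With a `StreamingContext` named `ssc`, the receiver is attached as a DStream source via `ssc.receiverStream(new CounterReceiver)` — which is why the entry point here is the streaming context rather than `SparkSession`.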

Streaming with Apache Spark 2.0

Reading Time: 2 minutes Hello geeks, we discussed Apache Spark 2.0 with Hive in an earlier blog. Now I am going to describe how we can use Spark to stream data. First, we need to understand the new Spark Streaming architecture.
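In the spirit of the new architecture the post mentions, a minimal Structured Streaming sketch (introduced in Spark 2.0) could look like this; the socket source, host, and port are illustrative assumptions:

```scala
import org.apache.spark.sql.SparkSession

object StreamingDemo extends App {
  val spark = SparkSession.builder()
    .appName("streaming-demo")
    .master("local[2]")
    .getOrCreate()

  // Read lines from a socket source (e.g. feed it with `nc -lk 9999`).
  val lines = spark.readStream
    .format("socket")
    .option("host", "localhost")
    .option("port", 9999)
    .load()

  // Continuously print each micro-batch to the console.
  val query = lines.writeStream
    .format("console")
    .start()

  query.awaitTermination()
}
```

The key architectural shift is that a stream is treated as an unbounded DataFrame, so the same DataFrame operations used for batch jobs apply to streaming data.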

Akka with Java

Reading Time: 3 minutes Hello friends, over the last few days I was working on a project with Akka using Java. This was a really amazing experience with Akka. Here we will discuss how to use Akka in Java and write test cases for the same. If we look at the Akka documentation, it extends a class named UntypedActor to create an actor. But here Continue Reading
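As a sketch of the `UntypedActor` style this post refers to (classic, pre-typed Akka in Java); the actor name, message, and system name are illustrative assumptions:

```java
import akka.actor.ActorRef;
import akka.actor.ActorSystem;
import akka.actor.Props;
import akka.actor.UntypedActor;

// Classic Akka style in Java: extend UntypedActor and override onReceive.
public class Greeter extends UntypedActor {
    @Override
    public void onReceive(Object message) {
        if (message instanceof String) {
            System.out.println("Received: " + message);
        } else {
            unhandled(message); // let Akka handle unexpected messages
        }
    }

    public static void main(String[] args) {
        ActorSystem system = ActorSystem.create("demo");
        ActorRef greeter = system.actorOf(Props.create(Greeter.class), "greeter");
        greeter.tell("hello", ActorRef.noSender());
    }
}
```

Tests for such actors are commonly written with Akka's TestKit, which lets a test probe stand in as the sender and assert on replies.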

Apache Spark 2.0 with Hive

Reading Time: < 1 minute Hello geeks, we have discussed how to start programming with Spark in Scala. In this blog we will discuss how we can use Hive with Spark 2.0. To start working with Hive, we first need a HiveContext (which inherits from SQLContext), along with core-site.xml, hdfs-site.xml, and hive-site.xml for Spark. If you don't configure hive-site.xml, then the context automatically creates a metastore_db in the Continue Reading
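A minimal sketch of Hive-enabled Spark in the 2.0 style, where `SparkSession` with `enableHiveSupport()` takes over the role of `HiveContext`; the app name, master, and table are illustrative assumptions:

```scala
import org.apache.spark.sql.SparkSession

object HiveDemo extends App {
  // In Spark 2.0, a SparkSession with Hive support replaces HiveContext.
  // Put core-site.xml, hdfs-site.xml and hive-site.xml on the classpath
  // (e.g. in conf/); without hive-site.xml, a local metastore_db
  // directory is created automatically.
  val spark = SparkSession.builder()
    .appName("hive-demo")
    .master("local[*]")
    .enableHiveSupport()
    .getOrCreate()

  spark.sql("CREATE TABLE IF NOT EXISTS src (key INT, value STRING)")
  spark.sql("SHOW TABLES").show()
}
```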

Knolx – A Step to Programming with Apache Spark

Reading Time: 3 minutes Hello associate! Hope you are doing well. Today I am going to share some of my programming experience with Apache Spark. So if you are getting started with Apache Spark, this blog may be helpful for you. Prerequisites to start with Apache Spark: MVN/SBT and Scala. To start with Apache Spark, you first need to either download a pre-built Apache Spark or, Continue Reading
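For readers getting started, a classic first Spark program is a word count; this sketch assumes a local input file named `input.txt`, which is a placeholder:

```scala
import org.apache.spark.sql.SparkSession

// A minimal first Spark program: count the words in a text file.
object WordCount extends App {
  val spark = SparkSession.builder()
    .appName("word-count")
    .master("local[*]")
    .getOrCreate()

  val counts = spark.sparkContext
    .textFile("input.txt")       // placeholder input path
    .flatMap(_.split("\\s+"))    // split each line into words
    .map(word => (word, 1))
    .reduceByKey(_ + _)          // sum the counts per word

  counts.take(10).foreach(println)
  spark.stop()
}
```

With SBT, this runs via `sbt run` after adding the `spark-sql` dependency; with Maven, package it and launch with `spark-submit`.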