Play with Spark: Building Spark MLLib in a Play Spark Application


In our last post of the Play with Spark! series, we saw how to integrate Spark SQL into a Play Scala application. In this blog we will see how to add the Spark MLLib feature to a Play Scala application.

Spark MLLib is a new component under active development. It was first released with Spark 0.8.0. It contains common machine learning algorithms and utilities, including classification, regression, clustering, collaborative filtering and dimensionality reduction, as well as some underlying optimization primitives. For a detailed list of the available algorithms, click here.

To add the Spark MLLib feature to a Play Scala application, follow these steps:

1). Add the following dependencies to the build.sbt file:

libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core"  % "1.0.1",
  "org.apache.spark" %% "spark-mllib" % "1.0.1"
)

The dependency "org.apache.spark" %% "spark-mllib" % "1.0.1" is specific to Spark MLLib.

As you can see, we have upgraded to Spark 1.0.1 (the latest release of Apache Spark at the time of writing).

2). Create a file app/utils/SparkMLLibUtility.scala & add the following code to it:

package utils

import org.apache.spark.SparkContext
import org.apache.spark.SparkConf

import org.apache.spark.mllib.regression.LabeledPoint
import org.apache.spark.mllib.linalg.Vectors
import org.apache.spark.mllib.classification.NaiveBayes

object SparkMLLibUtility {

  def SparkMLLibExample: Unit = {

    val conf = new SparkConf(false) // skip loading external settings
      .setMaster("local[4]") // run locally with enough threads
      .setAppName("firstSparkApp")
      .set("spark.logConf", "true")
      .set("spark.driver.host", "localhost")
    val sc = new SparkContext(conf)

    val data = sc.textFile("public/data/sample_naive_bayes_data.txt") // Sample dataset

    // Parse each line into a LabeledPoint: the label comes before the comma,
    // the space-separated feature values after it.
    val parsedData = data.map { line =>
      val parts = line.split(',')
      LabeledPoint(parts(0).toDouble, Vectors.dense(parts(1).split(' ').map(_.toDouble)))
    }

    // Split data into training (60%) and test (40%).
    val splits = parsedData.randomSplit(Array(0.6, 0.4), seed = 11L)
    val training = splits(0)
    val test = splits(1)

    // Train a Naive Bayes model on the training set.
    val model = NaiveBayes.train(training, lambda = 1.0)

    // Predict labels for the test set and compute the accuracy.
    val prediction = model.predict(test.map(_.features))
    val predictionAndLabel = prediction.zip(test.map(_.label))
    val accuracy = 1.0 * predictionAndLabel.filter(x => x._1 == x._2).count() / test.count()
    println("Accuracy = " + accuracy * 100 + "%")
  }
}

In the above code we have used the Naive Bayes algorithm as an example.
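For reference, each line of the sample dataset holds the label, then a comma, then the space-separated feature values, which is exactly what the parser above expects. A hypothetical excerpt (the values here are illustrative):

0,1 0 0
0,2 0 0
1,0 1 0
2,0 0 2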

3). In the above code you can notice that we have parsed the data into Spark's Vectors object.

val parsedData = data.map { line =>
  val parts = line.split(',')
  LabeledPoint(parts(0).toDouble, Vectors.dense(parts(1).split(' ').map(_.toDouble)))
}

The reason for using Spark's Vectors object instead of Scala's Vector class is that Spark's Vectors object provides factory methods for both dense & sparse vectors. This allows us to parse both dense & sparse data and analyze it according to its properties.
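As a minimal sketch (the vector values here are made up), the same three-element vector can be built either way; the sparse form stores only the non-zero entries:

import org.apache.spark.mllib.linalg.Vectors

val dense = Vectors.dense(1.0, 0.0, 3.0)
// Sparse form: size 3, with non-zero values at indices 0 and 2.
val sparse = Vectors.sparse(3, Array(0, 2), Array(1.0, 3.0))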

4). Next, we have split the data into 2 parts: 60% for training & 40% for testing. The seed makes the split reproducible across runs.

// Split data into training (60%) and test (40%).
val splits = parsedData.randomSplit(Array(0.6, 0.4), seed = 11L)
val training = splits(0)
val test = splits(1)

5). Then we trained our model using the Naive Bayes algorithm & the training data. Here lambda is the additive (Laplace) smoothing parameter.

val model = NaiveBayes.train(training, lambda = 1.0)

6). At last, we used our model to predict the labels/classes of the test data.

val prediction = model.predict(test.map(_.features))
val predictionAndLabel = prediction.zip(test.map(_.label))
val accuracy = 1.0 * predictionAndLabel.filter(x => x._1 == x._2).count() / test.count()
println("Accuracy = " + accuracy * 100 + "%")

Then, to find out how good our model is, we calculated the accuracy of the predicted labels.
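The trained model can also classify a single new observation directly. A minimal sketch (the feature values are made up and must match the dimensionality of the training data):

val newPoint = Vectors.dense(1.0, 0.0, 0.0)
val predictedLabel = model.predict(newPoint) // returns the most likely class label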

So we see how easy it is to use any algorithm available in Spark MLLib to perform predictive analytics on data. For more information on Spark MLLib, click here.
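To actually trigger the example from the Play application, you can call the utility from a controller. A minimal sketch (the controller name & route are our own assumptions, not part of the demo application):

package controllers

import play.api.mvc._
import utils.SparkMLLibUtility

object SparkMLLibController extends Controller {

  // Hypothetical action: run the Naive Bayes example and report completion.
  def runExample = Action {
    SparkMLLibUtility.SparkMLLibExample
    Ok("Naive Bayes example executed. Check the application logs for the accuracy.")
  }
}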

To download a Demo Application, click here.


6 Responses to Play with Spark: Building Spark MLLib in a Play Spark Application

  1. Have you been able to get it working with Play 2.3.2?
    I'm getting a lot of strange errors, probably because of the Akka version mismatch between Spark and Play.

    • Leon Radley (@LeonRadley) Play & Spark use different versions of Akka. That is why we get a lot of strange errors, so we have to use a common version of Akka.
      Using Akka 2.2.3 can solve this problem.
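      A minimal sketch of one way to pin the Akka version in build.sbt (assuming sbt's dependencyOverrides setting; whether 2.2.3 works depends on your exact Play & Spark releases):

      // Force a single Akka version so Play & Spark agree.
      dependencyOverrides ++= Set(
        "com.typesafe.akka" %% "akka-actor" % "2.2.3"
      )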
