Spark Cassandra Connector On Spark-Shell


Using Spark-Cassandra-Connector on Spark Shell

Hi all! In this blog we will see how to run Spark code against Cassandra from the spark-shell. This is very handy for testing and learning, when it is quicker to execute code on the spark-shell than to set up a project in an IDE.

Here we will use Spark version 1.6.2.

You can download this version from here,

and, of course, its matching Spark Cassandra Connector:

Cassandra Connector: spark-cassandra-connector_2.10-1.6.2.jar

You can download the connector (jar file) from here.

So let's begin:

Step 1) Create a test table in your Cassandra (I am using Cassandra version 3.0.10).

CREATE TABLE test_smack.movies_by_actor (
    actor text,
    release_year int,
    movie_id uuid,
    genres set<text>,
    rating float,
    title text,
    PRIMARY KEY (actor, release_year, movie_id)
) WITH CLUSTERING ORDER BY (release_year DESC, movie_id ASC);

Insert some test data:

INSERT INTO movies_by_actor (actor, release_year, movie_id, genres, rating, title) VALUES ('Johnny Depp', 2010, now(), {'Drama', 'Thriller'}, 7.5, 'The Tourist');

INSERT INTO movies_by_actor (actor, release_year, movie_id, genres, rating, title) VALUES ('Johnny Depp', 2011, now(), {'Animated', 'Comedy'}, 8.5, 'Rango');

INSERT INTO movies_by_actor (actor, release_year, movie_id, genres, rating, title) VALUES ('Johnny Depp', 2012, now(), {'Crime', 'Dark Comedy'}, 6.5, 'Dark Shadows');

INSERT INTO movies_by_actor (actor, release_year, movie_id, genres, rating, title) VALUES ('Johnny Depp', 2013, now(), {'Adventurous', 'Thriller'}, 9.5, 'Transcendence');

INSERT INTO movies_by_actor (actor, release_year, movie_id, genres, rating, title) VALUES ('Johnny Depp', 2013, now(), {'Adventurous', 'Thriller'}, 6.5, 'The Lone Ranger');

INSERT INTO movies_by_actor (actor, release_year, movie_id, genres, title) VALUES ('Johnny Depp', 2014, now(), {'thriller'}, 'Black Mass');
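
You can quickly verify the inserts from cqlsh; since actor is the partition key, filtering on it is allowed:

SELECT release_year, title, rating FROM movies_by_actor WHERE actor = 'Johnny Depp';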


Step 2) Go to the directory where you have kept your Spark binaries (e.g. /Desktop/spark-1.6.2-bin-hadoop2.6/bin) and start the spark-shell, including the jar file we downloaded above.

$ sudo ./spark-shell --jars /PATH_TO_YOUR_CASSANDRA_CONNECTOR/spark-cassandra-connector_2.10-1.6.2.jar
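
As a side note, the Cassandra host can also be supplied at launch time with --conf (assuming Cassandra is reachable on localhost), which lets you skip stopping and recreating the SparkContext in the next step:

$ sudo ./spark-shell --jars /PATH_TO_YOUR_CASSANDRA_CONNECTOR/spark-cassandra-connector_2.10-1.6.2.jar --conf spark.cassandra.connection.host=localhost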


Step 3) When you start Spark using spark-shell, Spark by default creates a SparkContext named 'sc'. Now we need to do the following to connect our Spark cluster to Cassandra:

    sc.stop
    import com.datastax.spark.connector._
    import org.apache.spark.SparkContext
    import org.apache.spark.SparkContext._
    import org.apache.spark.SparkConf
    val conf = new SparkConf(true).set("spark.cassandra.connection.host", "localhost")
    // "localhost" here is the address of your Cassandra node, not of Spark
    val sc = new SparkContext(conf)
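
Once the new context is up, a quick way to confirm connectivity is to count the rows of the table from Step 1 (cassandraTable comes from the connector import above; this assumes the test_smack keyspace we created earlier):

    sc.cassandraTable("test_smack", "movies_by_actor").count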


Step 4) It's all done; now you can query your DB and play with the results. For example, here we calculate the number of 'Johnny Depp' movies for each year:

sc.cassandraTable("test_smack", "movies_by_actor").select("release_year").as((year: Int) => (year, 1)).reduceByKey(_ + _).collect.foreach(println)


Output:

(2010,1)
(2012,1)
(2013,2)
(2011,1)
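
The connector can also write RDDs back to Cassandra via saveToCassandra. Below is a minimal sketch that inserts one more row into the same table (the row data is made up for illustration); SomeColumns maps the tuple fields to the table columns in order:

    import java.util.UUID

    // Hypothetical row for illustration only
    val newMovie = sc.parallelize(Seq(
      ("Johnny Depp", 2015, UUID.randomUUID(), Set("Crime", "Drama"), 6.5f, "Black Mass")
    ))
    newMovie.saveToCassandra("test_smack", "movies_by_actor",
      SomeColumns("actor", "release_year", "movie_id", "genres", "rating", "title"))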

