Apache Spark 2.0 with Hive


Hello geeks, in an earlier post we discussed how to start programming with Spark in Scala.

In this blog we will discuss how to use Hive with Spark 2.0.

When you start working with Hive, you first need a HiveContext (which inherits from SQLContext), along with core-site.xml, hdfs-site.xml, and hive-site.xml on Spark's classpath. If you don't configure hive-site.xml, the context automatically creates a metastore_db in the current directory and a warehouse directory at the location indicated by HiveConf (which defaults to /user/hive/warehouse).
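In Spark 1.x this entry point looks like the following; a minimal sketch, assuming the Spark libraries are on the classpath (the `local` master and the app name `hive-demo` are just placeholders):

```scala
// Spark 1.x style: HiveContext wraps a SparkContext (deprecated in 2.0)
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.hive.HiveContext

val conf = new SparkConf().setMaster("local").setAppName("hive-demo")
val sc   = new SparkContext(conf)
val hiveContext = new HiveContext(sc)

// Queries against Hive tables go through the context, e.g.
// hiveContext.sql("SHOW TABLES")
```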

hive-site.xml

<configuration>
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://localhost/metastore_db</value>
    <description>metadata is stored in a MySQL server</description>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
    <description>MySQL JDBC driver class</description>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>hiveuser</value>
    <description>user name for connecting to the MySQL server</description>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>hivepassword</value>
    <description>password for connecting to the MySQL server</description>
  </property>
</configuration>

In Spark 2.0, HiveContext and SQLContext have been deprecated, though Spark retains backward compatibility. In their place, Spark 2.0 introduces a single common entry point: SparkSession.

We can create a SparkSession object as follows:

val sparkSession = SparkSession.builder
  .master("local")
  .appName("demo")
  .getOrCreate()

From the SparkSession object we can then obtain the sqlContext, the sparkContext, and other components.
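For example, a minimal sketch (the `local` master and app name `demo` are placeholders):

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder
  .master("local")
  .appName("demo")
  .getOrCreate()

// The older entry points are still reachable from the session,
// kept for backward compatibility
val sc = spark.sparkContext          // org.apache.spark.SparkContext
val sqlContext = spark.sqlContext    // org.apache.spark.sql.SQLContext

println(sc.appName)
```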

If you want to work with Hive, you need to enable Hive support when building the session:

val sparkSession = SparkSession.builder
  .master("local")
  .appName("demo")
  .enableHiveSupport()
  .getOrCreate()

Now we are ready to execute our query:

sparkSession.sqlContext.sql("INSERT INTO TABLE students VALUES ('Rahul','Kumar'), ('abc','xyz')")
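Putting it together, here is an end-to-end sketch; the `students` table and its two STRING columns are assumptions for illustration, and the INSERT matches the statement above:

```scala
import org.apache.spark.sql.SparkSession

val sparkSession = SparkSession.builder
  .master("local")
  .appName("demo")
  .enableHiveSupport()
  .getOrCreate()

// Create the table if it doesn't exist yet, then insert two rows
sparkSession.sql("CREATE TABLE IF NOT EXISTS students (first_name STRING, last_name STRING)")
sparkSession.sql("INSERT INTO TABLE students VALUES ('Rahul','Kumar'), ('abc','xyz')")

// Read the rows back from the Hive table
sparkSession.sql("SELECT first_name, last_name FROM students").show()
```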

 

Complete Demo Code on Github

 

Thanks


About Rahul Kumar

Software Consultant At Knoldus

