Basic Example for Spark Structured Streaming & Kafka Integration

Reading Time: 2 minutes

The Spark Streaming integration for Kafka 0.10 is similar in design to the 0.8 Direct Stream approach. It provides simple parallelism, a 1:1 correspondence between Kafka partitions and Spark partitions, and access to offsets and metadata. However, because the newer integration uses the new Kafka consumer API instead of the simple API, there are notable differences in usage. This version of the integration is marked as experimental, so the API is potentially subject to change. In this blog, I am going to implement a basic example of Spark Structured Streaming and Kafka integration.

Here, I am using

  • Apache Spark 2.2.0
  • Apache Kafka 0.11.0.1
  • Scala 2.11.8

Create the build.sbt

Let’s create an sbt project and add the following dependencies in build.sbt.

libraryDependencies ++= Seq(
  "org.apache.spark" % "spark-sql_2.11" % "2.2.0",
  "org.apache.spark" % "spark-sql-kafka-0-10_2.11" % "2.2.0",
  "org.apache.kafka" % "kafka-clients" % "0.11.0.1"
)

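Since these artifacts carry the _2.11 suffix, the project’s Scala version should match. The surrounding settings in build.sbt might look like this (the project name and version are placeholders):

name := "spark-kafka-integration-example" // placeholder project name
version := "0.1.0-SNAPSHOT"               // placeholder version
scalaVersion := "2.11.8"                  // must match the _2.11 suffix of the artifacts above
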
Create the SparkSession

Now, we have to import the necessary classes and create a local SparkSession, the entry point to all functionality in Spark.

import org.apache.spark.sql.SparkSession

val spark = SparkSession
  .builder
  .appName("Spark-Kafka-Integration")
  .master("local")
  .getOrCreate()

Define the Schema

We have to define the schema for the data that we are going to read from the CSV file.

import org.apache.spark.sql.types._

val mySchema = StructType(Array(
  StructField("id", IntegerType),
  StructField("name", StringType),
  StructField("year", IntegerType),
  StructField("rating", DoubleType),
  StructField("duration", IntegerType)
))

A sample of my CSV file is available here and the dataset description is given here.
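
In case the sample is not handy, a couple of purely illustrative rows in the same format could look like the following (hypothetical values, not taken from the actual dataset). Note that the file has no header row, since the header option is not set and the schema is supplied explicitly:

1,Movie One,2011,7.5,120
2,Movie Two,2014,6.8,95
3,Movie Three,2017,8.1,105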

Create the Streaming DataFrame

Now, we have to create a streaming DataFrame whose schema is defined in a variable called “mySchema”. If you drop any CSV file into that directory, it will automatically be picked up by the streaming DataFrame.

val streamingDataFrame = spark.readStream.schema(mySchema).csv("path of your directory like home/Desktop/dir/")

Publish the stream to Kafka

streamingDataFrame.selectExpr("CAST(id AS STRING) AS key", "to_json(struct(*)) AS value")
  .writeStream
  .format("kafka")
  .option("topic", "topicName")
  .option("kafka.bootstrap.servers", "localhost:9092")
  .option("checkpointLocation", "path to your local dir")
  .start()

Create the topic called ‘topicName’ in Kafka and publish the DataFrame to it. Here, 9092 is the port on which the Kafka broker is running on the local machine. We use the checkpointLocation option to store the offsets of the stream.
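
If the topic does not exist yet, one way to create it is with the AdminClient that ships with kafka-clients 0.11 (the kafka-topics.sh script works equally well). The snippet below is a minimal sketch that assumes a single local broker, so it uses one partition and a replication factor of 1:

import java.util.{Collections, Properties}
import org.apache.kafka.clients.admin.{AdminClient, AdminClientConfig, NewTopic}

val adminProps = new Properties()
adminProps.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092")
val admin = AdminClient.create(adminProps)
// topic name, number of partitions, replication factor (single-broker assumption)
admin.createTopics(Collections.singleton(new NewTopic("topicName", 1, 1.toShort)))
admin.close()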

Subscribe to the stream from Kafka

import spark.implicits._
val df = spark
  .readStream
  .format("kafka")
  .option("kafka.bootstrap.servers", "localhost:9092")
  .option("subscribe", "topicName")
  .load()

At this point, we just subscribe to our stream from Kafka with the same topic name that we gave above.
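
By default, the Kafka source in Structured Streaming starts from the latest offsets, so only records published after the query starts will show up. If you want to re-read everything already in the topic, you can add the startingOffsets option; a small sketch:

val dfFromBeginning = spark
  .readStream
  .format("kafka")
  .option("kafka.bootstrap.servers", "localhost:9092")
  .option("subscribe", "topicName")
  .option("startingOffsets", "earliest") // default is "latest" for streaming queries
  .load()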

Convert the stream according to my schema along with the timestamp

import java.sql.Timestamp
import org.apache.spark.sql.functions.from_json

val df1 = df.selectExpr("CAST(value AS STRING)", "CAST(timestamp AS TIMESTAMP)").as[(String, Timestamp)]
  .select(from_json($"value", mySchema).as("data"), $"timestamp")
  .select("data.*", "timestamp")

Here, the value coming from the Kafka stream is a JSON string, so we parse it with the schema described in ‘mySchema’ and build a DataFrame with exactly the columns we need. We also keep the timestamp column alongside it.

Here, we just print our data to the console.

df1.writeStream
    .format("console")
    .option("truncate","false")
    .start()
    .awaitTermination()
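
Note that the earlier write to Kafka also returned a StreamingQuery from start(). If you run both the Kafka sink and the console sink in a single application, one option (a sketch, not from the original post) is to start each query without blocking and then wait on the StreamingQueryManager instead:

val consoleQuery = df1.writeStream
  .format("console")
  .option("truncate", "false")
  .start()

// blocks until any of the active streaming queries in this SparkSession terminates
spark.streams.awaitAnyTermination()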

For more details, you can refer to this.


Written by Ayush

Ayush is a Software Consultant with more than a year of experience. He specialises in Hadoop and has good knowledge of many programming languages such as C, Java and Scala. HQL, Pig Latin, HDFS, Flume and HBase add to his forte. He is familiar with technologies like Scala, Spark, Kafka, Cassandra, DynamoDB, Akka and many more. His hobbies include playing football and biking.

11 thoughts on “Basic Example for Spark Structured Streaming & Kafka Integration”

  1. I use the example code above. Environment: Spark 2.2.0, Kafka 0.11.0.0.
    But when spark-submit runs, Utils.AppInfoParser reports: Kafka version : 0.10.0-kafka-2.1.0.
    Why does Spark use another Kafka version, 0.10.0?

  2. specs:
    spark version: 2.3.0
    scala version: 2.11.8
    kafka version: kafka_2.11-1.1.0

    Some key imports:
    import org.apache.spark.sql._
    import org.apache.spark.sql.types.{StructType, StructField, StringType, IntegerType,DoubleType,TimestampType};
    import spark.implicits._

    I attempted in spark-shell on my mac:
    spark-shell --packages org.apache.spark:spark-sql-kafka-0-10_2.11:2.3.0

    Producer:
    val spark = SparkSession.builder.appName("Spark-Kafka-Integration").master("local").getOrCreate()

    val mySchema = StructType(Array(StructField("id", IntegerType), StructField("name", StringType), StructField("year", IntegerType), StructField("rating", DoubleType), StructField("duration", IntegerType)))

    val streamingDataFrame = spark.readStream.schema(mySchema).csv("/Users/rmian/Documents/training/spark/SparkStream/csv")

    streamingDataFrame.selectExpr("CAST(id AS STRING) AS key", "to_json(struct(*)) AS value").writeStream.format("kafka").option("topic", "sparkTopic1").option("kafka.bootstrap.servers", "localhost:9092").option("checkpointLocation", "/Users/rmian/Documents/training/spark/SparkStream/tmp").start()

    Consumer:
    val df = spark.readStream.format("kafka").option("kafka.bootstrap.servers", "localhost:9092").option("subscribe", "sparkTopic1").load()

    val df1 = df.selectExpr("CAST(value AS STRING)", "CAST(timestamp AS STRING)").as[(String, String)].select(from_json($"value", mySchema).as("data"), $"timestamp").select("data.*", "timestamp")

    df1.writeStream.format("console").option("truncate","false").start().awaitTermination()

    Output:
    scala> df1.writeStream.format("console").option("truncate","false").start().awaitTermination()
    2018-04-01 16:27:26 WARN NetworkClient:600 - Error while fetching metadata with correlation id 1 : {sparkTopic1=LEADER_NOT_AVAILABLE}
    -------------------------------------------
    Batch: 0
    -------------------------------------------
    +---+----+----+------+--------+---------+
    |id |name|year|rating|duration|timestamp|
    +---+----+----+------+--------+---------+
    +---+----+----+------+--------+---------+

  3. I have been trying your examples for a while and keep getting
    java.lang.IllegalArgumentException: Option 'basePath' must be a directory.
    I have tried all possible combinations: directory, empty directory, canonical path, etc.
    I am working with Spark on my local machine. Any help would be appreciated.
