Basic Example for Spark Structured Streaming & Kafka Integration

The Spark Streaming integration for Kafka 0.10 is similar in design to the 0.8 Direct Stream approach. It provides simple parallelism, 1:1 correspondence between Kafka partitions and Spark partitions, and access to offsets and metadata. However, because the newer integration uses the new Kafka consumer API instead of the simple API, there are notable differences in usage. This version of the integration is marked as experimental, so the API is potentially subject to change.

In this blog, I am going to implement a basic example of Spark Structured Streaming & Kafka integration.

Here, I am using:

  • Apache Spark 2.2.0
  • Apache Kafka
  • Scala 2.11.8

Create the build.sbt

Let's create an sbt project and add the following dependencies in build.sbt.

libraryDependencies ++= Seq(
  "org.apache.spark" % "spark-sql_2.11" % "2.2.0",
  "org.apache.spark" % "spark-sql-kafka-0-10_2.11" % "2.2.0",
  "org.apache.kafka" % "kafka-clients" % "")
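For completeness, a minimal build.sbt under these versions might look like the sketch below. The project name is a placeholder, and kafka-clients is normally pulled in transitively by spark-sql-kafka-0-10, so pinning it explicitly is optional.

```scala
// build.sbt — a minimal sketch; the project name is a placeholder
name := "spark-kafka-example"
version := "0.1"
scalaVersion := "2.11.8"

libraryDependencies ++= Seq(
  "org.apache.spark" % "spark-sql_2.11" % "2.2.0",
  "org.apache.spark" % "spark-sql-kafka-0-10_2.11" % "2.2.0"
  // kafka-clients comes in transitively via spark-sql-kafka-0-10
)
```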

Create the SparkSession

Now, we have to import the necessary classes and create a local SparkSession, the starting point of all functionality in Spark.

import org.apache.spark.sql.SparkSession

val spark = SparkSession
  .builder
  .appName("Spark-Kafka-Integration")
  .master("local")
  .getOrCreate()

Define the Schema

We have to define the schema for the data that we are going to read from the CSV file.

import org.apache.spark.sql.types._

val mySchema = StructType(Array(
  StructField("id", IntegerType),
  StructField("name", StringType),
  StructField("year", IntegerType),
  StructField("rating", DoubleType),
  StructField("duration", IntegerType)
))

A sample of my CSV file is here, and the dataset description is given here.
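Since the sample file itself is not embedded here, a CSV matching mySchema would have rows of the form below (the values are purely illustrative):

```
1,ExampleMovieA,1999,4.5,7265
2,ExampleMovieB,2005,3.2,5421
```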

Create the Streaming Dataframe

Now, we create a streaming DataFrame whose schema is defined in the variable mySchema. Any CSV file dropped into the directory will automatically be picked up by the streaming DataFrame.

val streamingDataFrame = spark.readStream.schema(mySchema).csv("path of your directory like home/Desktop/dir/")

Publish the stream to Kafka

streamingDataFrame.selectExpr("CAST(id AS STRING) AS key", "to_json(struct(*)) AS value")
  .writeStream
  .format("kafka")
  .option("topic", "topicName")
  .option("kafka.bootstrap.servers", "localhost:9092")
  .option("checkpointLocation", "path to your local dir")
  .start()

Create a topic called 'topicName' in Kafka, and send the DataFrame to that topic. Here, 9092 is the port on which Kafka is running on the local system. The checkpointLocation is used to store the offsets of the stream.
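For reference, the topic can be created with the kafka-topics tool that ships with Kafka. The ZooKeeper address, partition count, and replication factor below are assumptions for a single-node local setup:

```shell
# create the topic on a local single-node Kafka (0.10.x-era syntax with --zookeeper)
bin/kafka-topics.sh --create \
  --zookeeper localhost:2181 \
  --replication-factor 1 \
  --partitions 1 \
  --topic topicName
```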

Subscribe to the stream from Kafka

import spark.implicits._

val df = spark
  .readStream
  .format("kafka")
  .option("kafka.bootstrap.servers", "localhost:9092")
  .option("subscribe", "topicName")
  .load()

At this point, we simply subscribe to the stream from Kafka with the same topic name that we used above.

Convert the stream according to mySchema, along with the timestamp

import java.sql.Timestamp
import org.apache.spark.sql.functions.from_json

val df1 = df.selectExpr("CAST(value AS STRING)", "CAST(timestamp AS TIMESTAMP)").as[(String, Timestamp)]
  .select(from_json($"value", mySchema).as("data"), $"timestamp")
  .select("data.*", "timestamp")

Here, we take the value coming from Kafka as a JSON string and, using from_json, build a DataFrame according to the schema described in mySchema. We also keep the timestamp column.

Print the dataframe on console

Here, we just print our data to the console.
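A minimal console sink for the df1 defined above can be sketched as (this mirrors the working session quoted in the comments below):

```scala
// stream df1 to the console; truncate=false shows full column values
val query = df1.writeStream
  .format("console")
  .option("truncate", "false")
  .start()

query.awaitTermination()  // block until the streaming query stops
```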


For more details, you can refer to the Structured Streaming + Kafka Integration Guide in the official Spark documentation.



10 Responses to Basic Example for Spark Structured Streaming & Kafka Integration

  1. Lakshmi Narayana Viswanadha says:

    Very useful article.

  2. mkalmkal says:

    I used the example code above. Environment: spark 2.2.0, kafka
    But when spark-submit runs, Utils.AppInfoParser reports: Kafka version : 0.10.0-kafka-2.1.0.
    Why does Spark use another Kafka version, 0.10.0?

  3. Rizwan Mian says:

    spark version: 2.3.0
    scala version: 2.11.8
    kafka version: kafka_2.11-1.1.0

    Some key imports:
    import org.apache.spark.sql._
    import org.apache.spark.sql.types.{StructType, StructField, StringType, IntegerType, DoubleType, TimestampType}
    import spark.implicits._

    I attempted it in spark-shell on my Mac:
    spark-shell --packages org.apache.spark:spark-sql-kafka-0-10_2.11:2.3.0

    val spark = SparkSession.builder.appName("Spark-Kafka-Integration").master("local").getOrCreate()

    val mySchema = StructType(Array(StructField("id", IntegerType), StructField("name", StringType), StructField("year", IntegerType), StructField("rating", DoubleType), StructField("duration", IntegerType)))

    val streamingDataFrame = spark.readStream.schema(mySchema).csv("/Users/rmian/Documents/training/spark/SparkStream/csv")

    streamingDataFrame.selectExpr("CAST(id AS STRING) AS key", "to_json(struct(*)) AS value").writeStream.format("kafka").option("topic", "sparkTopic1").option("kafka.bootstrap.servers", "localhost:9092").option("checkpointLocation", "/Users/rmian/Documents/training/spark/SparkStream/tmp").start()

    val df = spark.readStream.format("kafka").option("kafka.bootstrap.servers", "localhost:9092").option("subscribe", "sparkTopic1").load()

    val df1 = df.selectExpr("CAST(value AS STRING)", "CAST(timestamp AS STRING)").as[(String, String)].select(from_json($"value", mySchema).as("data"), $"timestamp").select("data.*", "timestamp")

    scala> df1.writeStream.format("console").option("truncate", "false").start().awaitTermination()
    2018-04-01 16:27:26 WARN NetworkClient:600 - Error while fetching metadata with correlation id 1 : {sparkTopic1=LEADER_NOT_AVAILABLE}
    Batch: 0
    |id |name|year|rating|duration|timestamp|