UDF overloading in Spark

UDFs (User Defined Functions) are registered with the Hive context so that custom functions can be used in Spark SQL queries. For example, if you want to prepend a string to another string or column, you can create the following UDF:

def addSymbol(input: String, symbol: String): String = symbol + input

Now, to register the above function with hiveContext, we register the UDF as follows:

hiveContext.udf.register("addSymbol", addSymbol _)
Now you can use the above UDF in your Spark SQL query, as shown below:

hiveContext.sql("select addSymbol('50000','$')").show

Now suppose you want to overload the above UDF with another signature: if the user calls addSymbol with a single argument, we prepend a default string. The first idea that comes to mind is to create another addSymbol function with a single argument and register it with hiveContext as above. Try it, then come back; you will have your answer.

Does it work?

The answer is no. You will see no error when you register another UDF with the same name, but you can no longer use the first registered signature. Spark's hiveContext registers only one UDF per name, keeping the last one registered, so calling the first signature now throws an exception.
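To see the overwrite in action, here is a minimal sketch (assuming a Spark 1.x HiveContext; the lambdas are illustrative):

```scala
// Register two plain Scala functions under the same name.
// The second registration silently replaces the first.
hiveContext.udf.register("addSymbol", (input: String, symbol: String) => symbol + input)
hiveContext.udf.register("addSymbol", (input: String) => "$" + input)

// Works: matches the last-registered (single-argument) signature.
hiveContext.sql("select addSymbol('50000')").show

// Fails at analysis time: the two-argument signature is gone.
hiveContext.sql("select addSymbol('50000','$')").show
```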

There is a solution to this problem: create a Hive UDF by writing a class that extends Hive's UDF class, as below.

import org.apache.hadoop.hive.ql.exec.UDF

class AddSymbol extends UDF {

  def evaluate(input: String, symbol: String): String = symbol + input

  def evaluate(input: String): String = "$" + input
}

In the above class you can overload the evaluate method to support different signatures.

Next, create a Hive temporary function from the above class as follows:

hiveContext.sql("CREATE TEMPORARY FUNCTION addSymbol AS 'AddSymbol'")

Now addSymbol is available as a Hive temporary function, and as shown below we can call both signatures of the UDF:

hiveContext.sql("select addSymbol('5000','&')").show
hiveContext.sql("select addSymbol('5000')").show
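The Hive temporary function can also be called from the DataFrame API via callUDF (available since Spark 1.5). A sketch, assuming a DataFrame df with a string column price (both names are illustrative):

```scala
import org.apache.spark.sql.functions.{callUDF, col, lit}

// Both calls resolve against the Hive UDF's overloaded evaluate methods.
df.withColumn("withSymbol", callUDF("addSymbol", col("price"), lit("$"))).show
df.withColumn("withDefault", callUDF("addSymbol", col("price"))).show
```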


About sandeep

Software Developer at Lunatech Labs B.V. || Big data and functional programming enthusiast || Certified Apache Spark Developer || open source contributor

3 Responses to UDF overloading in Spark

  1. Harry says:

    Nice one dude… if you are using sqlContext, you may need to do it in a bit different way.
    val nameOfUDF = udf[retType, argType1, …](x => ….)
    df.withColumn("newcol", nameOfUDF(arg1, …))

    refer : https://lets-do-something-big.blogspot.in/2016/06/custom-udf-in-apache-spark.html
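Harry's suggestion above can be fleshed out as follows (a sketch, assuming Spark 1.x with a DataFrame df that has a string column price; the UDF and column names are illustrative):

```scala
import org.apache.spark.sql.functions.{udf, col}

// Wrap a plain Scala function as a UDF for the DataFrame API.
// Note: this registers nothing by name, so it cannot be overloaded either;
// each signature needs its own val.
val addSymbolUDF = udf((input: String) => "$" + input)

df.withColumn("newcol", addSymbolUDF(col("price"))).show
```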
