An Invitation From Scala String Interpolation


“Every generation needs a new revolution.” – Thomas Jefferson

This blog narrates the tale of an awesome change introduced in Scala 2.10 that made the life of a Scala programmer easier: “String Interpolation”, a mechanism that lets us embed variable references (our simple vars and vals) or result-yielding expressions (like match-case, if-else, try-catch etc.) directly into a processed or unprocessed string literal.

OVERVIEW

In this blog we will learn about string interpolation; the ‘s’, ‘f’ and ‘raw’ interpolators present in the ‘StringContext’ class; and finally how to create custom interpolators.
One major purpose of the blog is to lay a foundation for ‘Quasiquotes’, which are implemented using string interpolation and will be covered in the next blog (if I do not get too lazy on weekends).

What does String Interpolation mean?

According to Merriam-Webster, interpolation means “to put (words) into a piece of writing or a conversation”.
From that we can conclude that string interpolation is a mechanism that enables us to weave words (any value) into a processed or unprocessed string literal. By processed we mean processing of meta-characters such as escape sequences (\n, \t, \r etc.); in other words, the string literal is first processed and then embedded with the variable references. Consider an example where we want to print the name, age and salary of an employee on the console:

val name = "LIHAS"
var age = 24
var salary = 12345.6789
println(name + " is " + age + " years old and earns ₹" + salary)

Just type the lines yourself and you will understand how troublesome it is.
To simplify this we can use a string interpolator, which waves off the following problems:

a). Keeping a check on opening and closing double quotes
b). Repeatedly concatenating the strings to form the final message.

val name = "LIHAS"
var age = 24
var salary = 12345.6789
println(s"$name is $age years old and earns ₹ $salary")

In the above example we have used the ‘s interpolator’ and the code is really elegant. We will now discuss the three string interpolation methods provided by the ‘StringContext’ class, how they work internally, and how they achieve this elegance.
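
Before moving on, here is a quick, minimal taste of the other two interpolators (using the same variables as above): ‘f’ gives printf-style, type-checked formatting, while ‘raw’ skips the processing of escape sequences.

val name = "LIHAS"
val age = 24
val salary = 12345.6789

// f interpolator: format specifiers are checked against the argument types
println(f"$name%s earns ₹$salary%.2f at age $age%d")

// raw interpolator: \n stays as two characters instead of becoming a newline
println(raw"first line\nstill the same line")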


Customizing String Interpolation – An Example


OVERVIEW

This blog is a continuation of ‘An Invitation From Scala String Interpolation’. Here we will explore how to define a custom string interpolator.

OBJECTIVE

In this blog we design a custom interpolator that works exactly like the ‘s interpolator’, with the extra ability to write the “post interpolation content” into a file and to return a status message indicating whether the file write was successful or not.

STEPS TO SUCCESS

Step 1 “Starting Up” ->

Create a new sbt project in IntelliJ.

Step 2 “Including required dependencies and plugins” ->

a). libraryDependencies += "org.scalatest" %% "scalatest" % "2.2.4" % "test"

This includes the ScalaTest jar in the project, which facilitates code testing (add it to build.sbt).

The following plugins were also used in the project (add them to plugins.sbt):

a). addSbtPlugin("org.scoverage" % "sbt-scoverage" % "1.3.5")

Used to see the code coverage of the unit tests.

b). addSbtPlugin("org.scalastyle" %% "scalastyle-sbt-plugin" % "0.8.0")

This plugin checks whether any Scalastyle warnings are present.

Step 3 “Creating Desired Directory Structure” ->

The project follows the standard sbt directory structure (the original post shows a screenshot of it here).

In ‘main’, the ‘scala’ sub-folder contains the implementation of our custom string interpolator, the ‘test’ sub-folder contains the test cases for that implementation, and the ‘resources’ folder contains the file ‘output’ into which the output is written.

Step 4 “Defining custom string interpolator” ->
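
The original post continues with the implementation after this point. As a minimal sketch (not the author’s exact code; the interpolator name and output path are assumptions), the interpolator can be added to StringContext through an implicit value class that reuses the s interpolator and then writes the result to a file:

import java.io.{File, PrintWriter}
import scala.util.{Failure, Success, Try}

object WriteInterpolator {

  implicit class WriteHelper(val sc: StringContext) extends AnyVal {
    // Behaves like the s interpolator, then writes the result to a file
    // and returns a status message describing the outcome.
    def writeFile(args: Any*): String = {
      val interpolated = sc.s(args: _*)
      Try {
        val writer = new PrintWriter(new File("src/main/resources/output"))
        try writer.write(interpolated) finally writer.close()
      } match {
        case Success(_)  => s"Write successful: $interpolated"
        case Failure(ex) => s"Write failed: ${ex.getMessage}"
      }
    }
  }
}

With import WriteInterpolator._ in scope, writeFile"$name is $age years old" interpolates the string, writes it to the resources file and returns the status message, which is what the accompanying tests would assert on.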



Sharing RDD’s states across Spark applications with Apache Ignite


Apache Ignite offers an abstraction over native Spark RDDs such that the state of an RDD can be shared across Spark jobs, workers and applications, which is not possible with native Spark RDDs. In this blog, we will walk through the steps for sharing RDDs between two Spark applications.

Preparing Ingredients

To test Apache Ignite with an Apache Spark application we need at least one master process and a worker node. Download the Apache Spark pre-built binary and Apache Ignite and place them at the same location on all nodes. Let us call these directories SPARK_HOME and IGNITE_HOME respectively.

I am assuming you are familiar with the basics of setting up a Spark cluster. If not, you can go through the Spark documentation.

Start Master Node

Switch to SPARK_HOME on the master node and start the master process:
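
The exact command is shown as a screenshot in the original post; a typical sketch for a standalone cluster (assuming the scripts shipped with Spark) is:

./sbin/start-master.sh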
As soon as you run the command, the shell prints the log file location: “starting org.apache.spark.deploy.master.Master, logging to … [logging_dir]”. You can get the master URL, in the form spark://master_host:master_port, from that log file.

Start Workers

Switch to SPARK_HOME on the worker node and start a worker, registering it with the master:
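
Again, the command is shown as a screenshot; a typical sketch (assuming the master URL obtained above is passed as the argument) is:

./sbin/start-slave.sh spark://master_host:master_port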
Notice that the master URL is provided while starting the worker. Once the worker is registered with the master, the master log confirms the registration.

Start Ignite

On each worker, switch to the directory IGNITE_HOME and start an Ignite node:
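
The command is shown as a screenshot; a typical sketch (assuming the default configuration, since the applications below build a plain IgniteConfiguration) is:

./bin/ignite.sh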
This will start an Ignite node on the worker.

Creating Sample Spark Application

Now we will package and submit two Spark applications, namely RDDProducer and RDDConsumer, on the master. The application RDDProducer saves a pair RDD into the Ignite node. Here is a glimpse of the code of these two applications:

import org.apache.ignite.spark.{IgniteContext, IgniteRDD}
import org.apache.ignite.configuration._
import org.apache.spark.{SparkConf, SparkContext}

object RDDProducer extends App {
  val conf = new SparkConf().setAppName("SparkIgnite")
  val sc = new SparkContext(conf)
  // Entry point of the Spark-Ignite integration, built from a default Ignite configuration
  val ic = new IgniteContext[Int, Int](sc, () => new IgniteConfiguration())
  // Live view of the Ignite cache named "partitioned"
  val sharedRDD: IgniteRDD[Int, Int] = ic.fromCache("partitioned")
  // Save a pair RDD into the Ignite cache
  sharedRDD.savePairs(sc.parallelize(1 to 100000, 10).map(i => (i, i)))
}

object RDDConsumer extends App {
  val conf = new SparkConf().setAppName("SparkIgnite")
  val sc = new SparkContext(conf)
  val ic = new IgniteContext[Int, Int](sc, () => new IgniteConfiguration())
  val sharedRDD: IgniteRDD[Int, Int] = ic.fromCache("partitioned")
  // Read back the pairs cached by RDDProducer and count those with value < 10
  val lessThanTen = sharedRDD.filter(_._2 < 10)
  println("The count is:::::::::::: " + lessThanTen.count())
}

Sharing RDD from Spark Application

Let us go through the applications one by one. IgniteContext is the main entry point for Spark-Ignite integration. Here the application RDDProducer creates an IgniteContext[Int, Int] by supplying the Spark context and a closure that instantiates a default IgniteConfiguration. Once the IgniteContext has been created, an IgniteRDD is obtained by invoking fromCache("partitioned") on it (“partitioned” is the name of the Ignite cache). The IgniteRDD is a live view of the Ignite cache holding the RDD, and it supports all the methods that an ordinary RDD supports.

The following line saves the Spark RDD into the Ignite cache:

sharedRDD.savePairs(sc.parallelize(1 to 100000, 10).map(i => (i, i)))

Retrieving RDD from another Spark Application

The application RDDConsumer has all the same configuration and steps as RDDProducer, except that it never saves an RDD to the Ignite cache; that has already been done by the previous application. It simply retrieves the cached RDD from the Ignite cache with

  val sharedRDD = ic.fromCache("partitioned")

and applies a filter transformation keeping the pairs whose values are less than ten, counts them and prints the count.

Deploying the Applications

I am assuming you have packaged the applications into a jar, ready to be submitted to the cluster. The instructions for packaging a Spark application into a single jar can be found here. The application source can be found on GitHub. Switch to SPARK_HOME and run the following commands to submit these applications to the cluster:

./bin/spark-submit --class "com.knoldus.RDDProducer"  --master spark://192.168.2.181:7077 "/home/knoldus/Projects/Spark Lab/spark-ignite/target/scala-2.11/spark_ignite-assembly-1.0.jar"
./bin/spark-submit --class "com.knoldus.RDDConsumer"  --master spark://192.168.2.181:7077 "/home/knoldus/Projects/Spark Lab/spark-ignite/target/scala-2.11/spark_ignite-assembly-1.0.jar"

We will deploy these applications one by one by changing the --class argument. The first application, RDDProducer, will cache the pair RDD into the Ignite cache; when we deploy the second application, its output shows the count of retrieved pairs (the original post includes a screenshot of this output).
It is obvious from the result that we were able to retrieve the RDD back in another application from the Ignite cache.
For the code example, check out: GitHub

References:

Apache Ignite Documentation




Intl-tel-input


International Telephone Input

It is a jQuery plugin for entering and validating international telephone numbers. It adds a flag dropdown to any input, detects the user’s country, displays a relevant placeholder and provides formatting/validation methods.

It is also widely known as intl-tel-input.

Below are some of its advanced features:

  • Provides a very friendly user interface for entering a phone number. All countries are shown in a drop-down list with their flags and an example phone number.
  • Provides up-to-date phone number patterns for countries around the world. The data is taken from Google's libphonenumber library, so it is reliable and regularly updated.
  • Has a few APIs to validate numbers and to integrate with other tools.

How to use:

1) Download the latest release for intl-tel-input. You can download it from https://github.com/jackocnr/intl-tel-input/releases/tag/v9.0.0 .

2) Include the stylesheet:

<link rel="stylesheet" href="path_to_intlTelInput.css">

3) Override the path to flags.png in your CSS :

.iti-flag {background-image: url(“path_to_flags.png”);}

4) Add the plugin script and initialise it on your input element:

<script src="path_to_jquery-2.2.0.min.js"></script>
<script src="path_to_intlTelInput.js"></script>
<script> $("#phone").intlTelInput();</script>

 

Full width input
If you want your input to be full-width, you need to set the container to be the same i.e.

.intl-tel-input {width: 100%}

A sample working example using intl-tel-input (phoneDemo.html):

<head>
  <link rel="stylesheet" href="intl-tel-input-9.0.0 (2)/build/css/intlTelInput.css">
  <style>
    .iti-flag {background-image: url("/home/knodus/Downloads/intl-tel-input-9.0.0 (2)/build/img/flags.png");}
  </style>
</head>
<body>
  phone number
  <input type="tel" id="phone">
  <script src="jquery-2.2.0.min.js"></script>
  <script src="intl-tel-input-9.0.0 (2)/build/js/intlTelInput.js"></script>
  <script>
    $("#phone").intlTelInput();
  </script>
</body>

Some Useful Options:

1) autoPlaceholder
Type: Boolean
Default: true

Sets the input’s placeholder to an example number for the selected country. If there is already a placeholder attribute set on the input then that will take precedence. Requires the utilsScript option.

For Example:

    var telInput = $("#phone");
    telInput.intlTelInput({
      utilsScript: "path_to_utils.js", // file from Google's libphonenumber library
      autoPlaceholder: true
    });

2) utilsScript
Type: String
Default: ""
Example: "build/js/utils.js"

Enables formatting/validation etc. by specifying the path to the included utils.js script, which is fetched only when the page has finished loading (on window.load) to prevent blocking.

3) isValidNumber
Validates the current number present in the textbox. Expects an internationally formatted number unless ‘national mode’ is enabled. If validation fails, you can use getValidationError to find out which error occurred. Requires the utilsScript option.

var isValid = $("#phone").intlTelInput("isValidNumber");

Returns: true/false

Utilities Script

It uses a custom build of Google’s libphonenumber which enables the following features:

  • Formatting upon initialisation
  • Validation with isValidNumber, getNumberType and getValidationError methods

Specifying utilsScript in code:

    var telInput = $("#phone");
    telInput.intlTelInput({
      utilsScript: "path_to_utils.js"
    });

NOTE: The utils.js file is from Google’s libphonenumber library.


Applying Validations on the phone number field

Given below is a demo example for applying validation on the intl-tel-input field:

<head>
<link rel="stylesheet" href="intl-tel-input-9.0.0 (2)/build/css/intlTelInput.css">
<style>
.iti-flag {background-image: url("intl-tel-input-9.0.0 (2)/build/img/flags.png");}
</style>
</head>
<body>
<form id="userPhoneForm" role="form" method="post" >
phone number
<input type="tel" id="phone" name="phone">
<span class="input-group-btn" style="color:red;">
<button class="addPhoneSubmit" type="submit" onclick="validate()">Add Phone Number </button>
<div id ="userPhoneDiv" ></div>
</span>
</form>
</body>
<script src="jquery-2.2.0.min.js"></script>
<script src="jquery.validate.min.js"></script>
<script src="intl-tel-input-9.0.0 (2)/build/js/intlTelInput.js"></script>
<script src="libphonenumber/utils.js"></script>
<script type="text/javascript">
  $.validator.addMethod("phoneNumValidation", function(value) {
    return $("#phone").intlTelInput("isValidNumber");
  }, 'Please enter a valid number');

  var validate = function() {
    $("#userPhoneForm").validate({
      rules: {
        phone: {
          required: true,
          phoneNumValidation: true
        }
      },
      messages: {
        phone: {
          required: "Phone number is required field."
        }
      },
      errorPlacement: function(error, element) {
        error.insertAfter($("#userPhoneDiv"));
      }
    });
  };

  $(document).ready(function() {
    console.log("in ready");
    $("#phone").intlTelInput({utilsScript: "libphonenumber/utils.js"});
  });
</script>

If you still face any problems while using intl-tel-input, you can get the above working example, including all the required files, from the GitHub repo: https://github.com/knoldus/intl-tel-input-example

References:

1) https://github.com/jackocnr/intl-tel-input

2) https://jqueryvalidation.org/


Java Executor Vs Scala ExecutionContext


Java supports low-level concurrency and some richer APIs that are just wrappers over low-level constructs like wait, notify, synchronized etc. On top of Java's concurrent packages, Scala offers high-level concurrency frameworks that express “which goal to achieve, rather than how to achieve it”. These programming paradigms include “asynchronous programming using Futures”, “reactive programming using event streams”, “actor-based programming” and more.

Note: Thread creation is much more expensive than allocating a single object, acquiring a monitor lock or updating an entry in a collection.

For a high-performance multi-threaded application we should reuse a single thread for handling many requests; a set of such reusable threads is usually called a thread pool.


In Java, Executor is an interface that encapsulates the decision of how to run concurrently executable work tasks behind an abstraction. In other words, this interface provides a way of decoupling task submission from the mechanics of how each task will be run, including the details of thread use, scheduling, etc.

interface Executor {
    public void execute(Runnable command);
}

Java Executor :

  1. The Executor decides on which thread, and when, to call the run method of a Runnable object.
  2. An Executor object can start a new thread specifically for an invocation of execute, or even execute the Runnable object directly on the caller thread.
  3. Task scheduling depends on the implementation of the Executor.
  4. ExecutorService is a sub-interface of Executor that manages termination and provides methods that can produce a Future for tracking the progress of one or more asynchronous tasks.
  5. In Java, some basic Executor implementations are ThreadPoolExecutor (JDK 5) and ForkJoinPool (JDK 7), or developers can provide a custom implementation of Executor.

The scala.concurrent package defines the ExecutionContext trait, which offers functionality similar to that of an Executor but is more specific to Scala. Scala's asynchronous APIs take an ExecutionContext object as an implicit parameter. ExecutionContext has two abstract methods: execute (the same as the Java Executor method) and reportFailure (which takes a Throwable object and is called whenever a task throws an exception).

trait ExecutionContext {
    def execute(runnable: Runnable): Unit
    def reportFailure(cause: Throwable): Unit
}

Scala ExecutionContext:

  1. ExecutionContext also has a companion object with methods for creating an ExecutionContext object from a Java Executor or ExecutorService (acting as a bridge between Java and Scala), as sketched below.
  2. The ExecutionContext companion object contains the default execution context, called global, which internally uses a ForkJoinPool instance.
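
A minimal sketch of both points (the pool size and the printed message are only illustrative):

import java.util.concurrent.Executors
import scala.concurrent.{ExecutionContext, Future}

object ExecutionContextBridge extends App {
  // Bridge: wrap a Java ExecutorService into a Scala ExecutionContext
  val javaPool = Executors.newFixedThreadPool(4)
  implicit val ec: ExecutionContext = ExecutionContext.fromExecutorService(javaPool)

  // The Future runs on the wrapped Java pool; importing
  // ExecutionContext.Implicits.global instead would use the default
  // ForkJoinPool-backed context.
  Future {
    println(s"Running on ${Thread.currentThread().getName}")
  }

  // Allow the already-submitted task to finish, then release the pool
  javaPool.shutdown()
}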

The Executor and ExecutionContext abstractions are attractive for concurrent programming, but they are not without drawbacks. They can improve throughput by reusing the same set of threads for different tasks, but they are unable to execute new tasks when those threads become unavailable because all of them are busy running other tasks.

Note: java.util.concurrent.Executors is a utility class that is used to create thread pools according to your requirements.

References:

  1. Learning Concurrent Programming in Scala by Aleksandar Prokopec.

  2. Scala API docs.

  3. Java API docs.



Knolx – A Step to Programming with Apache Spark


Hello associate! Hope you are doing well. Today I am going to share some of my programming experience with Apache Spark.
So if you are getting started with Apache Spark, this blog may be helpful for you.

Prerequisites to start with Apache Spark:

  • MVN / SBT
  • Scala

To start with Apache Spark you first need to either

download the pre-built Apache Spark, or

download the source code and build it on your local machine.

If you downloaded the pre-built Spark, then you only need to extract the tar file at a location where you have permission to read and write.

Otherwise you need to extract the source code and run the following commands in the SPARK_HOME directory to build Spark:

  • Building with Maven and Scala 2.11
./dev/change-scala-version.sh 2.11
mvn -Pyarn -Phadoop-2.4 -Dscala-2.11 -DskipTests clean package
  • Building with SBT
build/sbt -Pyarn -Phadoop-2.3 assembly

Now, to start Spark, go to SPARK_HOME/bin and execute:

./spark-shell

You will see the spark-shell prompt (shown as a screenshot in the original post).

Apache Spark provides the following two objects by default in spark-shell:

  1. sc : SparkContext
  2. spark : SparkSession

You can also create your own SparkContext (if you are creating a project outside spark-shell):

import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf().setAppName("Demo").setMaster("local[2]")
val sc = new SparkContext(conf)

Now you can load data into either of two types of dataset:

  1. RDD
  2. DataFrame

Now you know that:

  • A data frame is a table, or two-dimensional array-like structure, in which each column contains measurements of one variable, and each row contains one case.
  • A DataFrame has additional metadata due to its tabular format, which allows Spark to run certain optimizations on the finalized query.
  • An RDD, on the other hand, is merely a Resilient Distributed Dataset, more of a black box of data that cannot be optimized as well, because the operations that can be performed against it are not as constrained.
  • However, you can go from a DataFrame to an RDD via its rdd method, and you can go from an RDD to a DataFrame (if the RDD is in a tabular format) via the toDF method, as the sketch below shows.
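
A minimal sketch of that round trip (assuming a Spark shell where sc is available; the column names are only illustrative):

import org.apache.spark.sql.SQLContext

val sqlContext = new SQLContext(sc)
import sqlContext.implicits._

// RDD of tuples -> DataFrame with named columns
val peopleRdd = sc.parallelize(Seq(("John", 30), ("Jane", 25)))
val peopleDf = peopleRdd.toDF("firstName", "age")

// DataFrame -> RDD[Row]
val rowsBack = peopleDf.rdd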

Creating an RDD and loading data into it:

val data = Array(1, 2, 3, 4, 5)
val distData = sc.parallelize(data)
// distData: org.apache.spark.rdd.RDD[Int]

Or you can load data from a file:

val distFile = sc.textFile("data.txt")
// distFile: RDD[String]

Here is a complete WordCount example to understand RDDs:

val textFile = sc.textFile("words.txt")
val counts = textFile.flatMap(line => line.split(" "))
  .map(word => (word, 1))
  .reduceByKey(_ + _)
counts.saveAsTextFile("count.txt")

Similarly, you can create a DataFrame object:

val sqlContext = new SQLContext(sc)
 val df = sqlContext.read.json("emp.json")

Now you can query with the DataFrame object.

Example of DataFrame :

val sqlContext = new SQLContext(sc)
val df = sqlContext.read.json("emp.json")
df.printSchema()
df.show()
df.select("firstName").show()
df.select(df("firstName"), df("age") + 1).show()
df.filter(df("age") > 25).show()
df.groupBy("age").count().show()

println("\n\n\nUsing Collect Method")
df.collect.toList.map(aRow => println(aRow))

Here are the slides for the same.

Here is the YouTube video.

Reference :

http://spark.apache.org/

Stay tuned for Spark with Hive

Thanks


Shapeless- Generic programming for Scala!


Knoldus organized a half-an-hour session on 15 July 2016 at 4:00 PM. The topic was “Introduction to Shapeless- Generic programming for Scala!”. Broadly speaking, Shapeless is about programming with types: doing things at compile time that would more commonly be done at runtime, in order to ensure type safety. The long list of features provided by Shapeless is explained in the enclosed presentation.

Here is the video for the same.




Effective Programming In Scala – Part 3 : Powering Up your code implicitly in Scala


Hi Folks,
In this series we talk about concepts that give a better shape to the code we write in Scala, and about methods that help us perform tasks in a better way. Let's have a look at what we have done in the series so far:

Effective Programming in Scala – Part 1 : Standardizing code in better way
Here we covered better ways to code so that no code styling or formatting errors are left behind, and showed how to use Scala properties and collections according to their behaviour.
Effective Programming In Scala – Part 2 : Monads as a way for abstract computations in Scala
Here we provided shorthand solutions to long and lengthy code that performs some task in a predefined sequence, and showed better use of for-comprehensions so that functionality depending upon Monads can be expressed easily.
Now in this blog we continue with concepts that can be used to tell Scala which value types to use, and to create monkey patches the Scala way.

Type Annotations versus Ascription

In Scala we know that if we do not define the type of a variable, the compiler infers it for us:

scala> val name = "John"
name: String = John

However, we can also control the type of a value, making it a Byte, an Int, a String and so on:

scala> val name: String = "John"
name: String = John

scala> val age: Int = 25
age: Int = 25

scala> val salary: Double = 25000.0
salary: Double = 25000.0

We can also write the type after the value if we want it in a desired form:

scala> val bonus = 5000.75 : Double
bonus: Double = 5000.75

scala> val name = "John" : String
name: String = John

As we can see, the compiler provides the definition of the type as we need; but when we want to provide our own definition or override the type of a value, we can use type annotations as well.

Ascription

Scala ascription takes the same approach as type annotation, but with a slight difference, and a new Scala developer can easily confuse the two. Ascription can be taken as a process of up-casting a type, whereas an annotation is simply a way of stating the result we want from an expression after its execution.

The following examples show ascription clearly.

We have bonus of type Double and ascribe it to salary as Object, which compiles because it is an up-cast. Ascribing name (a String) as a Double, however, must not compile because the types do not match.

scala> val bonus: Double = 3000.75
bonus: Double = 3000.75

scala> val salary = bonus: Object
salary: Object = 3000.75

scala> val name: String = "John"
name: String = John

scala> val salary = name: Double
<console>:8: error: type mismatch;
found   : String
required: Double
val salary = name: Double

Now have a look at the example below, where ascription is applied in another way:

val numberList = (1 to 5).toList

numberList.foldLeft(Nil: List[Int]) {
  (table, currentNumber) => table :+ (currentNumber * 2)
}

numberList.headOption.fold(Left("List is empty"): Either[String, Int]) {
  number => Right(number * 2)
}

In the above code, (Nil: List[Int]) and (Left("List is empty"): Either[String, Int]) are further examples of ascription.
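
The post continues with the implicit techniques promised in the title. As a rough sketch of “monkey patching the Scala way” (the class and method names here are purely illustrative, not the author’s code), an implicit class can add new methods to an existing type:

object StringPowerUps {

  // Implicitly extends String with an extra method, without modifying String itself
  implicit class RichName(val underlying: String) extends AnyVal {
    def greet: String = s"Hello, $underlying!"
  }
}

object PowerUpDemo extends App {
  import StringPowerUps._

  // The compiler rewrites "John".greet into new RichName("John").greet
  println("John".greet)
}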



Getting Started Neo4j with Scala : An Introduction


Earlier we used relational databases for storing data: we stored data in predefined tables and then defined foreign keys for references between tables or rows, and we still do so today. When we talk about a graph database, on the other hand, we store data in nodes. A graph database gives us the flexibility to arrange data in an easy way. When we move from a relational database management system to a graph database, here are some of the transformations:

  • A table is represented by a label on nodes
  • A row in an entity table is a node
  • Columns on those tables become node properties
  • Remove technical primary keys, keep business primary keys
  • Add unique constraints for business primary keys, add indexes for frequent lookup attributes
  • Replace foreign keys with relationships to the other table, then remove the foreign keys
  • Remove data with default values; there is no need to store those
  • Data in tables that is denormalized and duplicated might have to be pulled out into separate nodes to get a cleaner model
  • Indexed column names might indicate an array property
  • Join tables are transformed into relationships, and columns on those tables become relationship properties

When transforming from a relational database, it is very important to know these terms and the graph model.
We used SQL statements there for interacting with the database; here we use Cypher statements for the same purpose.
SQL statement:

SELECT c.customer_id , c.customer_name FROM customer AS c WHERE c.customer_city = 'Delhi';

Cypher Statement :

Match (c: customer)
WHERE c.customer_city = 'Delhi'
RETURN c.customer_id , c.customer_name ;

Same result, but boring 😉 so we can also write the Cypher like this:

Match (c: customer{customer_city : 'Delhi'})
RETURN c.customer_id , c.customer_name ;

Now let us see how we can use Scala with Neo4j. To talk to Neo4j we can use the neo4j-java-driver for creating a Driver and a Session (add the dependency to build.sbt):

libraryDependencies += "org.neo4j.driver" % "neo4j-java-driver" % "1.0.4"

Create Driver and Session :

val driver = GraphDatabase.driver("bolt://localhost/7687", AuthTokens.basic("anurag", "@nurag06"))
val session = driver.session

In Neo4j we use the Bolt protocol. It is based on PackStream serialization and supports protocol versioning, authentication and TLS via certificates.
Now we can create a case class to hold the values. Here is our case class:
case class User(name: String, last_name: String, age: Int, city: String)
Now for CRUD operation :

Create a Node :

val script = s"CREATE (user:Users {name:'${user.name}',last_name:'${user.last_name}',age:${user.age},city:'${user.city}'})"
val result = session.run(script)

Retrieve all Node :

val script = "MATCH (user:Users) RETURN user.name AS name, user.last_name AS last_name, user.age AS age, user.city AS city"
val result = session.run(script)
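
The run call gives back the matched records. A minimal sketch of mapping them onto the User case class (a hypothetical helper, assuming the column aliases used in the query above and the 1.0.x driver API):

import org.neo4j.driver.v1.StatementResult
import scala.collection.JavaConverters._

// Hypothetical helper: convert the records returned above into User instances
def toUsers(result: StatementResult): List[User] =
  result.list().asScala.toList.map { record =>
    User(
      record.get("name").asString(),
      record.get("last_name").asString(),
      record.get("age").asInt(),
      record.get("city").asString()
    )
  }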

Update a Node :

val script =s"MATCH (user:Users) where user.name ='$name' SET user.name = '$newName' RETURN user.name AS name, user.last_name AS last_name, user.age AS age, user.city AS city"
val result = session.run(script)

Delete a Node :

val script =s"MATCH (user:Users) where user.name ='$name' Delete user"
val result = session.run(script)

Now we can create relations between the nodes. We implement a method which takes a user's name, the list of user names with whom we want to create the relation, and the relation type (e.g. Friend, Family). We turn the list into a comma-separated ( "\", \"" ) string and pass that into the script.

val nameOfFriends = "\"" + userList.mkString("\", \"") + "\""
val script = s"MATCH (user:Users {name: '${user_name}'}) FOREACH (name in [${nameOfFriends}] | CREATE (user)-[:$relation_name]->(:Users {name:name}))"
session.run(script)

Here we send the user name ‘Anurag’, a list of friends (“Sandy”, “Manish”, “Shivansh”), and the relation between them, ‘Friends’. (The original post shows a screenshot of the resulting graph here.)

Now we create two more nodes as friends of ‘Sandy’ (again shown as a graph screenshot in the original post).

Now we want to know the Friends of Friends. Here is the Cypher:

val script = s"MATCH (user:Users)-[:$relation_name]-(friend:Users)-[:$relation_name]-(foaf:Users) WHERE user.name = '$user_name' AND NOT (user)-[:$relation_name]-(foaf) RETURN foaf"
session.run(script)

The result of this Cypher query is the friends-of-friends of the given user (shown as a screenshot in the original post).

To delete all records of a relation, we can use this Cypher:

val script = s"MATCH (n)-[relation:$relation_name]->(r) DELETE relation"
session.run(script)

So this is the basic idea of how we can use Neo4j with Scala.

I hope it will help you get started with the graph database Neo4j. :)

You can get the above working example from the GitHub repo; check out: GitHub

Thanks.

Reference:

  1. Neo4j: SQL to Cypher



Improve Memory Usage and Performance of Application Using Yourkit Profiler


In this blog we walk through how to improve the performance of an application using the YourKit profiler; it is helpful for both testers and developers.

YourKit is a big achievement in the evolution of profiling tools. It is an intelligent tool for profiling Java, .NET and other JVM-supported-language based applications.

It is often important to check memory usage and the memory used per process on servers, so that resources do not fall short and users are able to access the server. This applies to any application.

Memory-related problems in an application

If memory-related issues are present in our application, they slow down its performance. Typical memory-related problems are:

  • The application uses more memory than it should
  • Out of memory: the JVM cannot allocate an object because it has run out of memory
  • Memory leaks
  • The application creates a lot of temporary objects

These problems lead to serious overall system performance degradation.

YourKit is very helpful for recognizing these types of problems and improving the performance of the application. To launch the profiler, run:

bin/yjp.sh

MEMORY USAGE 

In the Memory tab, the Memory and GC telemetry section shows the memory-related graphs, in which:

  • Heap Memory shows Java heap statistics. You can see individual pools or all pools together. Java uses the heap as the storage for Java objects.
  • Non-Heap Memory shows the non-heap memory statistics. Java uses the non-heap memory to store loaded classes and other metadata. You can see individual pools or all pools together.
  • Classes shows how the number of loaded classes changed over time, and the total number of unloaded classes.
  • Garbage Collection and GC Pauses show the garbage collection statistics.
  • The Object Allocation Recording graph shows the number of objects created per second.

(The original post includes screenshots of the Memory tab telemetry and allocation views here.)

Here you can see the current memory allocations: the number of objects that have been created and the size reserved.

CPU Usage

The CPU tab shows CPU consumption statistics. It is available when you are connected to the profiled application.


Performance Chart

In this tab you can see all the performance statistics, e.g. CPU and memory, in one place.


Using this we can improve the performance of our application and make it effective.

I hope this blog shows you what you can do with the YourKit Java profiler.



