Dribbling with Filter.js: client-side JS filtering of JSON objects


Dribbling Filter.js

Play Framework with client-side JS filtering of JSON objects and rendering of HTML snippets via jQuery.

A big chunk of data to display? Interactive filtering? Most importantly, it has to be really fast. Isn't it like dribbling against the Netherlands? A big ground, lots of hooting, and above all you have to be fast and win.
 
UI programming is an exciting ground to play on, which is why I chose the reactive Play Framework. The problem was how to dribble on the client side: we needed an API that provides fast filtering of data. The answer is Filter.js.

In this post we integrate Filter.js with the Play Framework for reactive data streaming using Ajax, filtering the data on the client side with Filter.js magic.

Filter.js is a client-side filter for JSON objects that shows/hides HTML elements. Multiple filter criteria can be specified and used in conjunction with each other.

Mustache.js is a logic-less template syntax. It can be used for HTML, config files, source code – anything. It works by expanding tags in a template using values provided in a hash or object.
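Filter.js only needs JSON in the browser, so on the Play side an integration like this boils down to an action that serves the data as JSON to the Ajax call. A minimal Play 2.3 (Scala) sketch follows; the Product case class, its fields and the Products controller are hypothetical illustrations, not code from the linked repository:

import play.api.mvc._
import play.api.libs.json._

// Hypothetical model used only for this illustration
case class Product(name: String, category: String, price: Double)

object Products extends Controller {

  // JSON writer derived from the case class
  implicit val productWrites: Writes[Product] = Json.writes[Product]

  private val products = Seq(
    Product("Keyboard", "Hardware", 25.0),
    Product("IDE licence", "Software", 199.0)
  )

  // Serves the JSON that Filter.js filters and Mustache.js renders on the client
  def list = Action {
    Ok(Json.toJson(products))
  }
}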


Instructions:


  • The GitHub code for the project is at: playing-json-object-filter-js
  • Clone the project onto your local system
  • To run Play Framework 2.3.3, you need JDK 6 or later
  • Install Typesafe Activator if you do not have it already. You can get it from here: download
  • Execute activator clean compile to build the product
  • Execute activator run to execute the product
  • playing-json-object-filter-js should now be accessible at localhost:9000



This is the start; from next week onwards we will be working on this application to make it grow. We will look at how we can make it more functional, and then we will add more modules to it together. If you have any changes, feel free to send in pull requests and we will do the merges :) Stay tuned.

Posted in AJAX, Bootstrap, JavaScript, jQuery, Play Framework, Reactive, Scala, Web

Knolx Session: Introduction to Selenium


In this presentation, I have briefly explained the Selenium IDE.

 

Posted in Scala, Web

How to set up and use Zookeeper in Scala using Apache Curator


In order to use Zookeeper to manage your project's configuration across the cluster, we will first set up a zookeeper ensemble on our local machine (this setup is for testing on a single machine) by following these steps:

1) Download a stable zookeeper release

2) Unpack it in three places and rename the directories to:

/home/user/Desktop/zookeeper1,
/home/user/Desktop/zookeeper2, and
/home/user/Desktop/zookeeper3

3) In order to use zookeeper, we will need to set up configuration files for all three servers.

Make a new file zoo.cfg,
/home/user/Desktop/zookeeper1/conf/zoo.cfg

and add the following details:

tickTime=2000
initLimit=10
syncLimit=5
dataDir=/home/user/Desktop/zookeeperData1
clientPort=2181
server.1=localhost:2888:3888
server.2=localhost:2889:3889
server.3=localhost:2890:3890

Similarly,
/home/user/Desktop/zookeeper2/conf/zoo.cfg, as:

tickTime=2000
initLimit=10
syncLimit=5
dataDir=/home/user/Desktop/zookeeperData2
clientPort=2182
server.1=localhost:2888:3888
server.2=localhost:2889:3889
server.3=localhost:2890:3890

And,
/home/user/Desktop/zookeeper3/conf/zoo.cfg, as:

tickTime=2000
initLimit=10
syncLimit=5
dataDir=/home/user/Desktop/zookeeperData3
clientPort=2183
server.1=localhost:2888:3888
server.2=localhost:2889:3889
server.3=localhost:2890:3890

4) Now we will have to define each server's id by creating a new myid file in each data directory:
/home/user/Desktop/zookeeperData1/myid
which should contain: 1
/home/user/Desktop/zookeeperData2/myid
which should contain: 2
/home/user/Desktop/zookeeperData3/myid
which should contain: 3
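For example, the first one can be created from a terminal with:

mkdir -p /home/user/Desktop/zookeeperData1 && echo 1 > /home/user/Desktop/zookeeperData1/myid

and similarly with 2 and 3 for the other two data directories.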

5) Next, we will start the zookeeper ensemble by starting each server in 3 different terminals:

cd /home/user/Desktop/zookeeper1
bin/zkServer.sh start

cd /home/user/Desktop/zookeeper2
bin/zkServer.sh start

cd /home/user/Desktop/zookeeper3
bin/zkServer.sh start

6) Now we will add some data to one of the ZNodes of the zookeeper ensemble by following these steps:

a) bin/zkCli.sh
b) create /test_node “Some data”

7) Then we will write the following code to set up a watcher on a zookeeper node and read the stored data from the zookeeper server, using Apache Curator as the library to interact with it.

Add the following dependency in your build.sbt file:

libraryDependencies ++= Seq(
"org.apache.curator" % "curator-framework" % "2.6.0",
"org.apache.curator" % "curator-recipes" % "2.6.0"
)

and use this to interact with the zookeeper server:


import org.slf4j.LoggerFactory
import org.apache.curator.framework.CuratorFrameworkFactory
import org.apache.curator.retry.ExponentialBackoffRetry
import org.apache.curator.framework.recipes.cache.{ NodeCache, NodeCacheListener }

object ZookeeperClient {

  private val logger = LoggerFactory.getLogger(this.getClass.getName)

  def main(args: Array[String]): Unit = {
    val retryPolicy = new ExponentialBackoffRetry(1000, 3)
    val curatorZookeeperClient = CuratorFrameworkFactory.newClient("localhost:2181,localhost:2182,localhost:2183", retryPolicy)
    curatorZookeeperClient.start()
    curatorZookeeperClient.getZookeeperClient.blockUntilConnectedOrTimedOut()

    val znodePath = "/test_node"
    val originalData = new String(curatorZookeeperClient.getData.forPath(znodePath)) // This should be "Some data"
    logger.info("Data read from " + znodePath + ": " + originalData)

    /* Zookeeper NodeCache service to get properties from the ZNode */
    val nodeCache = new NodeCache(curatorZookeeperClient, znodePath)
    nodeCache.getListenable.addListener(new NodeCacheListener {
      override def nodeChanged(): Unit = {
        try {
          val currentData = nodeCache.getCurrentData
          val newData = new String(currentData.getData) // The new data after it is changed in the Zookeeper ensemble
          logger.info("New data in " + znodePath + ": " + newData)
        } catch {
          case ex: Exception => logger.error("Exception while fetching properties from zookeeper ZNode, reason " + ex.getCause)
        }
      }
    })
    nodeCache.start()
  }
}
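To see the watcher in action, change the data from the zkCli.sh shell (for example: set /test_node "Some new data"); the nodeChanged callback should then fire and log the new value.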

Posted in Java, Scala

Scala in Business | Knoldus Newsletter – August 2014


We are back again with the August 2014 newsletter. Here it is: Scala in Business | Knoldus Newsletter – August 2014

In this newsletter, you will find how industries are adopting the Typesafe Reactive Platform to scale their applications and reap the benefits, how popular the Scala and Akka repositories have been this month, and how Spark and the Typesafe Reactive Platform are together powering big data applications.

So, if you haven't subscribed to the newsletter yet, hurry up and click on Subscribe Monthly Scala News Letter.


Posted in Akka, Java, News, Non-Blocking, Play Framework, Reactive, Scala, Spark, Web

Knolx Session: Introduction to WebRTC


In this presentation, I briefly explained WebRTC and the TURN server (relay server).

Posted in AJAX, JavaScript, jQuery, Node.js, Web

SCALA: Introduction to Scala


Scala is an object-oriented and functional programming language created by Martin Odersky and first released in 2003.

Scala is known for development productivity, application scalability and overall reliability.

Scala and Java share a common runtime platform, and most Java features are available in Scala.

Scala is compiled into Java bytecode which is executed by the Java Virtual Machine (JVM). The scala command is similar to the java command, in that it executes your compiled Scala code.

Scala Program:

There are two ways of writing Scala programs, which are as follows:

  1. Interactive Mode Programming
  2. Script Mode Programming

 

Interactive Mode Programming:

Open a command prompt and type "scala"; this returns the following:


C:\> scala
Welcome to Scala version 2.9.0.1
scala>

Type the below text to the right of the Scala prompt and press the Enter key:


scala> println("Welcome to scala programming!");

This will produce the following result:


Welcome to scala programming!
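You can evaluate any Scala expression the same way; the REPL prints the result together with its inferred type. For example:

scala> val sum = 1 + 2
sum: Int = 3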

Script Mode Programming:

Create a Scala file named "ScalaProgram.scala":


object ScalaProgram {
  /* This is my first scala program.
   * This will print 'This is my first scala program' as the output.
   */
  def main(args: Array[String]) {
    println("This is my first scala program")
  }
}


Following are the steps to compile and run the program:

  1. Open notepad and add the code as above.
  2. Save the file as: ScalaProgram.scala.
  3. Open a command prompt window and go to the directory where you saved the program file. Assume it is C:\>
  4. Type ‘scalac ScalaProgram.scala’ and press enter to compile your code. If there are no errors in your code the command prompt will take you to the next line.
  5. The above command will generate a few class files in the current directory. One of them will be named ScalaProgram.class. This is bytecode which will run on the Java Virtual Machine (JVM).
  6. Now, type ‘scala ScalaProgram’ to run your program.
  7. You will be able to see ‘This is my first scala program’ printed on the window.

C:\> scalac ScalaProgram.scala

C:\> scala ScalaProgram

This is my first scala program

Posted in Scala

Meetup: Reactive Programming using Scala and Akka


In this meetup, which was a part of our ongoing Knolx sessions, I talked about reactive programming using Scala and Akka.

Reactive programming is all about developing responsive applications built on top of an event-driven, resilient and scalable architecture.

Below are the knolx slides.

I have also shown some examples giving a brief introduction to Scala and Akka. Please find the GitHub repository here.
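For a flavour of the event-driven style discussed in the session, here is a minimal, self-contained Akka actor sketch of my own (assuming the Akka 2.3 API; it is not taken from the linked repository):

import akka.actor.{ Actor, ActorSystem, Props }

// A tiny actor that reacts to String messages
class Greeter extends Actor {
  def receive = {
    case name: String => println(s"Hello, $name")
  }
}

object ReactiveHello extends App {
  val system = ActorSystem("demo")
  val greeter = system.actorOf(Props[Greeter], "greeter")

  greeter ! "Knoldus" // the message is processed asynchronously by the actor

  Thread.sleep(500)   // give the actor a moment to process before shutting down
  system.shutdown()   // Akka 2.3-style shutdown
}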

Posted in Scala

SBT console to debug application


In this blog post, we will see how to debug an application via the sbt console. Suppose we want to perform some initialization before debugging the application, for example setting up a database connection, importing packages, etc. The sbt configuration provides a nice way to make the debugging process easier.

There are a few steps to debug a Liftweb application via the sbt console. First we have to initialize the database connection, or initialize the application, before debugging. For that we have to run the boot function of the Boot class.

Welcome to Scala version 2.10.3 (Java HotSpot(TM) 64-Bit Server VM, Java 1.7.0_51).
Type in expressions to have them evaluated.
Type :help for more information.

scala> import bootstrap.liftweb.Boot
import bootstrap.liftweb.Boot

scala> new Boot().boot
INFO - MongoDB inited: localhost/127.0.0.1:27017/typesafe
scala> import code.model._
import code.model._

scala> import com.foursquare.rogue.LiftRogue._
import com.foursquare.rogue.LiftRogue._

scala> val score = Score.createRecord.examtype("homework").score(83.5)
score: code.model.Score = class code.model.Score={examtype=homework, score=83.5}

scala> val student = StudentInfo("Devid",List(score))
student: code.model.StudentInfo = StudentInfo(Devid,List(class code.model.Score={examtype=homework, score=83.5}))

scala> Student.createBy(student)
res3: code.model.Student = class code.model.Student={name=Devid, age=0, _id=53f04f2688e0d70d73c0fb50, scores=List(class code.model.Score={examtype=homework, score=83.5}), address=}

scala> Student.where(_.name eqs "Devid").fetch
res4: List[code.model.Student] = List(class code.model.Student={name=Devid, age=0, _id=53f04f2688e0d70d73c0fb50, scores=List(class code.model.Score={examtype=homework, score=83.5}), address=})
scala> 

Now, we don't want to import the packages or initialize the database connection manually every time, and an sbt configuration setting lets us define the initial commands evaluated when entering the Scala REPL. Just define initialCommands in build.sbt:

initialCommands in console := """
    import bootstrap.liftweb._
    import code.model._
    import org.bson.types.ObjectId
    import net.liftweb.common._
    import com.foursquare.rogue.LiftRogue._
    new Boot().boot
     """

Now run the REPL again:

abdhesh@abdhesh-Vostro-3560:~/Documents/projects/knoldus/Rogue_Query$ sbt console
[info] Loading project definition from /home/abdhesh/Documents/projects/knoldus/Rogue_Query/project
[info] Set current project to Rogue_Query (in build file:/home/abdhesh/Documents/projects/knoldus/Rogue_Query/)
[info] Starting scala interpreter...
[info] 
INFO - MongoDB inited: localhost/127.0.0.1:27017/typesafe
import bootstrap.liftweb._
import code.model._
import org.bson.types.ObjectId
import net.liftweb.common._
import com.foursquare.rogue.LiftRogue._
Welcome to Scala version 2.10.3 (Java HotSpot(TM) 64-Bit Server VM, Java 1.7.0_51).
Type in expressions to have them evaluated.
Type :help for more information.

scala> val score = Score.createRecord.examtype("homework").score(83.5)
score: code.model.Score = class code.model.Score={examtype=homework, score=83.5}

scala> val student = StudentInfo("Devid",List(score))
student: code.model.StudentInfo = StudentInfo(Devid,List(class code.model.Score={examtype=homework, score=83.5}))

scala> Student.createBy(student)
res3: code.model.Student = class code.model.Student={name=Devid, age=0, _id=53f04f2688e0d70d73c0fb50, scores=List(class code.model.Score={examtype=homework, score=83.5}), address=}

scala> Student.where(_.name eqs "Devid").fetch
res4: List[code.model.Student] = List(class code.model.Student={name=Devid, age=0, _id=53f04f2688e0d70d73c0fb50, scores=List(class code.model.Score={examtype=homework, score=83.5}), address=})

Now there is no need to run the initialization process manually once you have defined initialCommands in build.sbt.

Posted in Scala

How to flatten nested tuples in Scala


In a project I have been working on, I encountered a situation where I had to flatten a nested tuple but couldn't come up with a way to do so. Out of curiosity I started googling about it and came to the following conclusion.

As an example, I had a structure similar to the one mentioned below, though not identical:


val structureToOperateOn = List(List("a1","a2","a3"), List("b1","b2","b3") , List("c1","c2","c3"), List(10,1,11))

and suppose I wanted to turn structureToOperateOn into something like this:


"a1", "b1", "c1", 10
"a2", "b2", "c2", 1
"a3", "b3", "c3", 11

So the first thing that came to my mind was to use foldLeft:


val operatedStructure = (structureToOperateOn.tail.foldLeft(structureToOperateOn.head)((a,b) => a zip b)).asInstanceOf[List[(((String,String),String),Int)]]

which resulted in something like this:


List(((("a1","b1"),"c1"),10), ((("a2","b2"),"c2"),1), ((("a3","b3"),"c3"),11))

Next, I thought of flattening the tuples and came across Shapeless. Although I think Scala should have something built in to flatten tuples, the best way to do it as of now is to use the Shapeless library. Anyway, this is how flattening tuples using Shapeless works:


import shapeless._
import shapeless.ops.tuple.FlatMapper
import syntax.std.tuple._

object NestedTuple {

  trait LowPriorityFlatten extends Poly1 {
    implicit def default[T] = at[T](Tuple1(_))
  }

  object flatten extends LowPriorityFlatten {
    implicit def caseTuple[P <: Product](implicit fm: FlatMapper[P, flatten.type]) =
      at[P](_.flatMap(flatten))
  }

  val structureToOperateOn = List(List("a1","a2","a3"), List("b1","b2","b3"), List("c1","c2","c3"), List(10,1,11))
  val operatedStructure = (structureToOperateOn.tail.foldLeft(structureToOperateOn.head)((a,b) => a zip b)).asInstanceOf[List[(((String,String),String),Int)]]

  val flattenedTuples = operatedStructure map (tuple => flatten(tuple))   // This should be List((a1,b1,c1,10), (a2,b2,c2,1), (a3,b3,c3,11))
}

After messing around with nested tuples, I finally decided it would be better to use an alternative way to get the required result instead of adding a new library to the project. Regardless, Shapeless could be very helpful in scenarios where you get stuck and ultimately need to flatten a tuple.

This was what I used as an alternative:


val operatedStructure = structureToOperateOn.transpose

which resulted in:


List(List("a1", "b1", "c1", 10), List("a2", "b2", "c2", 1), List("a3", "b3", "c3", 11))

So to conclude, you can use Shapeless in order to flatten complex nested tuples if need be.


Posted in Scala

Liftweb: Implement cache


In this blog post, I will explain how to implement a cache on the server.
The Liftweb framework provides a nice way to implement a cache for storing data (objects) on the server so that all users can access that data. Lift uses an LRU cache wrapping org.apache.commons.collections.map.LRUMap.

Create an object for handling cache operations like creating, getting, updating and deleting data in the in-memory cache.
LRUinMemoryCache.scala

import net.liftweb.util.{ LRU, Props }
import net.liftweb.common._

/**
 * LRU Cache wrapping org.apache.commons.collections.map.LRUMap
 */

object LRUinMemoryCache extends LRUinMemoryCache

class LRUinMemoryCache extends LRUCache[String] with Loggable {

  def size: Int = 10

  def loadFactor: Box[Float] = Empty

  /**
   * Here we set the data in the in-memory cache
   */
  def init: Unit = {
    set("inMemoryData", "here you can put whatever you want")
    logger.info("cache created")
  }
}

//size - the maximum number of Elements allowed in the LRU map
trait LRUCache[V] extends Loggable {

  def size: Int

  def loadFactor: Box[Float]

  private val cache: LRU[String, V] = new LRU(size, loadFactor)

  def get(key: String): Box[V] =
    cache.synchronized {
      cache.get(key)
    }

  def set(key: String, data: V): V = cache.synchronized {
    cache(key) = data
    data
  }

def update(key: String, data: V): V = cache.synchronized {
    cache.update(key, data)
    data
  }

  def has(key: String): Boolean = cache.synchronized {
    cache.contains(key)
  }

  def delete(key: String) = cache.synchronized(cache.remove(key))

}

Create and store the data in the in-memory cache at deployment time:
Boot.scala

package bootstrap.liftweb

import code.lib.LRUinMemoryCache

/**
 * A class that's instantiated early and run.  It allows the application
 * to modify lift's environment
 */
class Boot {
  def boot {
    //Init the in-memory cache
    LRUinMemoryCache.init
  }
}

Now we can access the in-memory cache:


//Get data from Cache:
LRUinMemoryCache.get("inMemoryData")

//Update in-memory cache
LRUinMemoryCache.update("inMemoryData","Updated data has been set")

//Remove data from in-memory cache
LRUinMemoryCache.delete("inMemoryData")

Posted in Scala