Tutorial #3: Getting started with HTTP Programming in Play Framework


We have already discussed the Play development environment in Tutorial #1 and the use of WebJars, jQuery, Bootstrap & Bootswatch with Play in Tutorial #2.

So, in this blog we will discuss HTTP programming in Play Framework, which will drive us through the rest of the tutorial series. We will be running this blog as a series, looking at various aspects of Play along the way.

In this tutorial we will cover the following topics of HTTP programming in Play Framework:

  • Actions, Controllers and Results
  • HTTP routing
  • Manipulating results
  • Session and Flash scopes
  • Body parsers
  • Actions composition
  • Content negotiation
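As a taste of the first topic, here is a minimal sketch of an Action inside a Controller producing a Result (Play 2.x Scala API; the controller name and route below are illustrative, not from this post):

```scala
package controllers

import play.api.mvc._

// A Controller groups Actions; an Action processes a Request
// and produces a Result (here, a 200 OK with a text body).
object HelloController extends Controller {

  def hello(name: String) = Action { implicit request =>
    Ok(s"Hello, $name!")
  }
}
```

With a matching route such as `GET /hello/:name controllers.HelloController.hello(name)`, a request to `/hello/Play` would return a plain-text 200 response.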

Now that we know about HTTP programming in Play, we will follow the same HTTP programming in the upcoming tutorials. We will look at how we can make it more usable and readable, and then add the next tutorials related to Play Framework. If you have any questions, feel free to comment :) Stay tuned.

Posted in Akka, Future, Play Framework, Reactive, Scala, Tutorial, Web | 3 Comments

Knolx Session: Role of FSM in Akka


Here I am going to explain what an FSM is, why we should use FSM, and the features of FSM in Akka, using a live example.
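As a teaser for the session, here is a hedged sketch of the Akka FSM DSL (Akka 2.x; the states, state data, and messages below are made up for illustration, not taken from the talk):

```scala
import akka.actor.FSM

// Hypothetical two-state machine: a switch that counts how often it turns on.
sealed trait State
case object Idle extends State
case object Active extends State

class Switch extends FSM[State, Int] {
  startWith(Idle, 0)                       // initial state and state data

  when(Idle) {
    case Event("on", count) => goto(Active) using (count + 1)
  }

  when(Active) {
    case Event("off", count) => goto(Idle) using count
  }

  initialize()                             // perform the transition into the initial state
}
```

The `when` blocks declare per-state behaviour, and `goto(...) using (...)` moves to the next state while updating the state data.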

Posted in Scala | Leave a comment

Digging Macroid – The first sod (Scala on Android)


Android, an operating system that runs on millions of devices, is based on the Linux kernel and uses Dalvik/ART as its process virtual machine, with trace-based just-in-time compilation to run Dalvik "dex-code", which is usually translated from Java bytecode. Most of the applications that run on Android are written in Java, an imperative programming language with powerful features. However, the rise of the functional programming paradigm in recent years makes us think about taking alternative approaches to building new software. When we talk about functional programming nowadays, Scala always takes a special place in our thoughts because of its strong functional features, although it supports both functional and object-oriented styles. Scala source code is intended to be compiled to Java bytecode, so that the resulting executable code runs on a Java Virtual Machine. Hence we can definitely have some second thoughts about building Android apps with Scala, and there are libraries already built for it, e.g. vanilla Android, Scaloid, etc. Recently I came across a new Scala-on-Android library called Macroid. At the recent Scala Days 2015 conference, the official Android app was built with this library, and after watching Nikolay Stanchenko's presentation video I had the urge to go and play with it.

So Macroid is another library for making Android apps with Scala. Its pros include concise code, fantastic concurrency support and advanced DSLs; its main con, according to the Macroid docs, is build time. It definitely has strong features, but we have to consider that learning to build an Android app with Macroid will not be an easy task ;) , as most of the learning sources are based on Java. One has to know how to build the app in Java first, or at least know what sort of components or activities need to be used. I have tried my hands on Macroid and will try to explain it. The ideal IDE for it is IntelliJ IDEA or Android Studio (which is based on it).

Android UIs are mainly based on XML files: all the Views and ViewGroups are declared in XML. However, using Macroid we can do this while avoiding XML to an extent, coding directly in Scala with a cleaner approach.

It uses bricks, which define the parts of the UI like layouts, buttons, text, etc., denoted as w[Button], l[LinearLayout] and so on. It also has tweaks, with which we change the style and behaviour of widgets or layouts. For example:

l[LinearLayout](
  w[TextView] <~ text("Hey I think i work")
)

Here, l[LinearLayout] defines the layout, and within the layout we have a text box: a widget for the text view is created with w[TextView], and the text is set using the text tweak.

In this way even a complex view composes well, with a cleaner and more concise approach. For example:

getUi(
  l[DrawerLayout](
    l[LinearLayout](
      w[TextView] <~ text("Hey I think i work"),
      l[FrameLayout]() <~ wire(fragmentContent) <~ id(Id.mainFragment) <~ fragmentContentStyle
    ) <~ contentStyle,
    l[FrameLayout]() <~ wire(fragmentMenu) <~ id(Id.menuFragment) <~ drawerLayoutStyle
  ) <~ wire(drawerLayout) <~ drawerStyle
)

Here we use a different layout for each part of the view, and each layer has a specific style applied to it. (For the drawer layout, click here.) The DrawerLayout is the main layout, under which we have two more layouts, a LinearLayout and a FrameLayout. The LinearLayout contains some text and an inner FrameLayout to which fragmentContentStyle is applied; contentStyle is applied to the LinearLayout itself. The other FrameLayout gets drawerLayoutStyle, and drawerStyle is applied to the main DrawerLayout.
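The styles referenced here (contentStyle, drawerStyle, etc.) are themselves just tweaks. A hedged sketch of how such a style might be defined, assuming Macroid's standard Tweak combinators (the name, padding value, and context type are hypothetical, not taken from the app above):

```scala
import macroid._
import macroid.FullDsl._
import android.widget.LinearLayout

// A hypothetical composite style: tweaks compose with `+`,
// so a "style" is simply a reusable, combined Tweak.
def contentStyle(implicit ctx: ContextWrapper): Tweak[LinearLayout] =
  vertical +              // orient children vertically
  padding(all = 8 dp)     // illustrative padding value
```

Because styles are ordinary values, they can be shared between layouts and applied with `<~` exactly like the built-in tweaks.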


The example given here is just a portion of the complete app, and we can see that doing it this way keeps the code quite concise without playing with the XML files. In order to run the complete app we have to deal with a few more pieces, like Activities, Contexts, etc. In the next blog I will try to cover the remaining parts needed to build a complete app the Scala way, and will explore its other advantages.

Posted in Mobile, Scala | 2 Comments

Scala in Business | Knoldus Newsletter – April 2015


Hello Folks

We are back again with the April 2015 newsletter: Scala in Business | Knoldus Newsletter – April 2015.

In this newsletter, you will get business-related news for Scala: how organisations are adopting Scala for their business, how Scala-related technologies are increasing application performance, and how Scala is growing popular in the industry.

So, if you haven't subscribed to the newsletter yet, then hurry up and click on Subscribe Monthly Scala News Letter.


Posted in Java, Agile, Cloud, Scala, Web, LiftWeb, Akka, Spark, Amazon EC2, MongoDB, JavaScript, Play Framework, Slick, Mockito | 1 Comment

AWS Services: AWS SDK on the Scala with Play Framework


playing-aws-scala

The following blog and attached code present a simple example of using Amazon Web Services the Scala way with Play Framework, using AWScala. In this blog I have implemented only the Amazon Simple Storage Service (Amazon S3) functionality.

AWScala: AWS SDK on the Scala REPL

AWScala enables Scala developers to easily work with Amazon Web Services in the Scala way.

Though AWScala objects basically extend the AWS SDK for Java APIs, you can use them with less stress in the Scala REPL or the sbt console.


AWScala Supported Services


  • AWS Identity and Access Management (IAM)
  • AWS Security Token Service (STS)
  • Amazon Elastic Compute Cloud (Amazon EC2)
  • Amazon Simple Storage Service (Amazon S3)
  • Amazon Simple Queue Service (Amazon SQS)
  • Amazon Redshift
  • Amazon DynamoDB
  • Amazon SimpleDB

Amazon Simple Storage Service (Amazon S3)


package utils

import awscala._, s3._
import java.io.File

object S3Utility extends S3Utility

trait S3Utility {

  implicit val s3 = S3()

  /**
   * Get all the available buckets
   *
   * @return
   */
  def getBuckets(): Seq[Bucket] = s3.buckets

  /**
   * Get the bucket by given name
   *
   * @param name The Bucket name
   * @return
   */
  def getBucketByName(name: String): Option[Bucket] = s3.bucket(name)

  /**
   * Create new bucket for given name
   *
   * @param name The Bucket name
   * @return
   */
  def createBucket(name: String): Bucket = s3.createBucket(name)

  /**
   * Create an object into given bucket by name
   *
   * @param bucket The Bucket
   * @param name The Object name
   * @param file The Object
   * @return
   */
  def createObject(bucket: Bucket, name: String, file: File): PutObjectResult = bucket.put(name, file)

  /**
   * Get the Object by given name from given bucket
   *
   * @param bucket The Bucket
   * @param name The Object name
   * @return
   */
  def getObject(bucket: Bucket, name: String): Option[S3Object] = bucket.getObject(name)

}
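A quick sketch of how this utility might be used from the sbt console (the bucket and file names here are hypothetical, and valid AWS credentials must be configured as shown later in this post):

```scala
import java.io.File
import utils.S3Utility

// Look the bucket up first; create it only if it does not exist yet.
val bucket = S3Utility.getBucketByName("my-test-bucket")
  .getOrElse(S3Utility.createBucket("my-test-bucket"))

// Upload a local file under the key "hello.txt", then read it back.
val result = S3Utility.createObject(bucket, "hello.txt", new File("/tmp/hello.txt"))
val stored = S3Utility.getObject(bucket, "hello.txt")   // Option[S3Object]
```

This is the same pattern the upload service below uses in production mode.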
  // In the controller: the upload action delegates to an UploadService
  val uploadService: UploadService

  def upload = Action(parse.multipartFormData) { implicit request =>
    val result = uploadService.uploadFile(request)
    Redirect(routes.Application.index).flashing("message" -> result)
  }
  /**
   * Get file from the request and move it in your location
   *
   * @param request
   * @return
   */
  def uploadFile(request: Request[MultipartFormData[TemporaryFile]]): String = {
    log.error("Called uploadFile function" + request)
    request.body.file("file").map { file =>
      import java.io.File
      val filename = file.filename
      val contentType = file.contentType
      log.error(s"File name : $filename, content type : $contentType")
      val uniqueFile = new File(s"/tmp/${UUID.randomUUID}_$filename")
      file.ref.moveTo(uniqueFile, true)
      if (Play.isProd) {
        try {
          val bucket = s3Utility.getBucketByName("test").getOrElse(s3Utility.createBucket("test"))
          val result = s3Utility.createObject(bucket, filename, uniqueFile)
          s"File uploaded on S3 with Key : ${result.key}"
        } catch {
          case t: Throwable => log.error(t.getMessage, t); t.getMessage
        }
      } else {
        "File uploaded"
      }
    }.getOrElse {
      "Missing file"
    }
  }

Test Code for Controller and Service


ApplicationSpec.scala

"should be valid" in new WithApplication {
  val request = mock[Request[MultipartFormData[TemporaryFile]]]
  mockedUploadService.uploadFile(request) returns "File Uploaded"
  val result: Future[Result] = TestController.upload().apply(request)
  status(result) must equalTo(SEE_OTHER)
}

UploadServiceSpec.scala

"UploadService" should {
    "uploadFile returns (File uploaded)" in new WithApplication {
      val files = Seq[FilePart[TemporaryFile]](FilePart("file", "UploadServiceSpec.scala", None, TemporaryFile("file", "spec")))
      val multipartBody = MultipartFormData(Map[String, Seq[String]](), files, Seq[BadPart](), Seq[MissingFilePart]())
      val fakeRequest = FakeRequest[MultipartFormData[Files.TemporaryFile]]("POST", "/", FakeHeaders(), multipartBody)
      val success = UploadService.uploadFile(fakeRequest)
      success must beEqualTo("File uploaded")
    }

    "uploadFile returns (Missing file)" in new WithApplication {
      val files = Seq[FilePart[TemporaryFile]]()
      val multipartBody = MultipartFormData(Map[String, Seq[String]](), files, Seq[BadPart](), Seq[MissingFilePart]())
      val fakeRequest = FakeRequest[MultipartFormData[Files.TemporaryFile]]("POST", "/", FakeHeaders(), multipartBody)
      val success = UploadService.uploadFile(fakeRequest)
      success must beEqualTo("Missing file")
    }
}

AWS credentials! Make sure these are set in your environment or configuration:


export AWS_ACCESS_KEY_ID=<ACCESS_KEY>
export AWS_SECRET_KEY=<SECRET_KEY>

Build and Run the application


  • To run the Play Framework, you need JDK 6 or later
  • Install Typesafe Activator if you do not have it already. You can get it from here
  • Execute ./activator clean compile to build the product
  • Execute ./activator run to execute the product
  • playing-aws-scala should now be accessible at localhost:9000

Test the application with code coverage


  • Execute $ ./activator clean coverage test to test
  • Execute $ ./activator coverageReport to generate coverage report


This is the start of the AWS Services implementation; from next week onwards we will be working on this application to make it grow. We will look at how we can make it more functional, and then add more AWS modules to it together. If you have any changes, feel free to send in pull requests and we will do the merges :) Stay tuned.

Posted in Akka, Amazon, Amazon EC2, AWS, AWS Services, Bootstrap, Bootswatch, Future, MultipartFormData, Play Framework, S3 | 3 Comments

Play Framework has lost its relevance. Or has it?


The last few weeks, rather months, have been very interesting for Play Framework, at least at Knoldus. We ended up working on and custom-developing large sophisticated products and very niche reactive products. Over the last few months there has been a healthy debate (the part where people started pulling each other's hair is omitted on purpose) on whether we should be using the Play Framework at all. The usual suspects are the client-side HTML generation hippies, who would not buy the Play philosophy at all, and the Play Framework fanboys, who would not code in anything but Play. I carried this debate to the Scala Days conference in SFO and also discussed it with Typesafe's Rich Dougherty (@richdougherty) to get an idea of his thoughts. We decided to do a rundown of all the pros and cons of one approach versus the other, and this is the list that we came up with.

Client Side Rendering

Here, client code drives everything: reacting to user input, querying the server for data using APIs which in most cases return JSON, and presenting the resulting data back to the user. Backbone, Angular, Ember, etc. are good illustrations of this strategy.

Advantages of client side rendering

  1. High re-usability – The back-end code only needs to emit JSON. There can be any number of clients for the same service consuming this JSON, on the web as well as on mobile.
  2. Front end can be tested in isolation from the back end.
  3. The front end is totally independent of the back end and the back end can be swapped out to a different language. We did this when we converted the back-end of a product from Node.js to Scala. Read our ServiceSource case study.
  4. Saving power on the server – Well, I would debate this one, but it came up as an advantage: we push the processing to the client machine rather than the server.

Server Side Rendering
Here, the server decides what the client should render. The pieces of HTML returned by the server may contain complex client-side behaviour, which may include client-side HTML generation. This behaviour, however, is limited and might not suit all desktop-like client applications and front ends.

The advantages are as follows

  1. Code is type-safe, cleaner and easier to debug.
  2. There are set standards for writing server-side code. OK, with the advent of JS frameworks there are very well defined standards in the JS world as well, but how many of us follow them?
  3. It is faster to generate and send back HTML from the server. Twitter moved from server-side to client-only rendering, and then moved back to server-side rendering, realizing a load-time gain of 400% across different browsers.
  4. Server-side rendering takes roughly constant time under load, whereas client-side rendering shows a linear increase with load.
  5. Content is visible to search engines for better indexing.
  6. There is less data going down the pipe.

In general, the mantra to remember is that it is bad practice to send JSON if all we do on the front end is incorporate that JSON into the page's DOM structure. Likewise, it is bad practice to send HTML to the client when the client needs to parse that HTML and do some calculations with it.

So what is the best strategy?

In my opinion, it is a hybrid approach. That means instead of delivering JSON data and baking it into a template on the client side, you render the template on the server side. Once the page is loaded, the smaller interactions come into play, so AJAX calls append HTML fragments to the DOM instead of processing JSON data.
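A hedged sketch of what this hybrid looks like in a Play controller (Play 2.x Scala; the view names and the jQuery selector below are hypothetical):

```scala
package controllers

import play.api.mvc._

object Dashboard extends Controller {

  // First load: render the full page server-side with Play's type-safe templates.
  def index = Action { implicit request =>
    Ok(views.html.dashboard())
  }

  // Subsequent AJAX calls: return a rendered HTML fragment, not JSON.
  def recentActivity = Action { implicit request =>
    Ok(views.html.fragments.recentActivity())
  }
}
```

On the client, the JS then simply does something like `$("#activity").load("/dashboard/activity")` rather than templating JSON itself.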

Play has a robust templating engine which allows us to define HTML generation in a type-safe manner and render it fast. Hot reload of code, templates, config changes, etc., lets you iterate much faster. The stateless design of the framework helps with performance and also enables you to write non-messy code. Play also bakes in best practices like non-blocking I/O and is built on Akka, thereby getting resilience and concurrency support. Finally, LinkedIn, Guardian, Klout and other high-scalability sites use Play, which endorses its value.

So does that mean Backbone and Angular have lost the battle? No, but they need to co-exist with Play. I would propose always rendering the first view with Play templating and then making JS-based calls to the server to generate JSON for rendering portions of the page based on user actions. Thoughts and brickbats welcome ;)

Posted in JavaScript, Play Framework, Scala | 12 Comments

Conditional logging with Logback in Scala


Hello Folks

In my project, I had a scenario where I wanted conditional logging. I was using the Logback framework, and I wanted to set different logging levels for staging and production.

I could have manually changed the logging level in logback.xml for staging and production, but that is not good practice. Instead, I found that the conditions can be implemented in logback.xml itself, which is a better and more efficient solution. (Note that conditional processing in logback requires the Janino library on the classpath.)

First, we will see how to implement conditions in logback.xml. There are two forms:

if-then form


<if condition="some conditional expression">
  <then>
    ...
  </then>
</if>

if-then-else form


<if condition="some conditional expression">
  <then>
    ...
  </then>
  <else>
    ...
  </else>
</if>

Now that we know how to write conditions, let us see which conditional expressions we can use in logback.xml.

There are three options:

1. Using property() or p() :

Only context properties or system properties are accessible. For a key passed as an argument, the property() method, or its shorter equivalent p(), returns the String value of the property.


property("someKey").contains("someValue")

or

p("someKey").contains("someValue")

2. Using isDefined() :

It is used to check whether a property is defined or not.


isDefined("someKey")

3. Using isNull() :

It is used to check whether a property is null or not.


isNull("someKey")
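These expressions are ordinary boolean checks over property strings. A plain-Scala analogue of the three checks, using a hypothetical system property named runMode (the key and value are illustrative):

```scala
// Set a system property, as logback's conditions would see it.
sys.props("runMode") = "production"

// p("runMode").contains("prod")
val matchesProd = sys.props.get("runMode").exists(_.contains("prod"))

// isDefined("runMode")
val defined = sys.props.contains("runMode")

// isNull("missingKey"): true when the property is absent
val missing = !sys.props.contains("missingKey")

println(s"$matchesProd $defined $missing")   // prints "true true true"
```

Note that "production".contains("prod") is true, which is why a single prod check can cover values like "prod" and "production".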

A full example setting different logging levels for staging and production:

<?xml version="1.0" encoding="UTF-8"?>
<configuration debug="true">
   <appender name="CON" class="ch.qos.logback.core.ConsoleAppender">
     <encoder>
       <pattern>%d %-5level %logger{35} - %msg %n</pattern>
     </encoder>
   </appender>
   <if condition='p("runMode").contains("prod")'>
     <then>
       <root level="warn">
         <appender-ref ref="CON" />
       </root>
     </then>
     <else>
       <root level="info">
         <appender-ref ref="CON" />
       </root>
     </else>
   </if>
</configuration> 

Set the property as: export runMode=prod, and pass it to the JVM as a system property (e.g. -DrunMode=prod), since conditional expressions can only see context or system properties.

Cheers !!!

Posted in Agile, Akka, Best Practices, Cloud, Java | 1 Comment

Knolx : Starting with Ractive.Js


The slides cover the basics to get started with Ractive.js. All the practice and documentation links are in the slides, as well as the references.

Posted in Scala | 1 Comment

How to run an application on Standalone cluster in Spark?


In this blog, we will run an application on a standalone cluster in Spark.


Steps:
1. Launch the cluster.
2. Create a package of the application.
3. Run the command to launch it.

Step-1:

To run an application on a standalone cluster, we need a cluster running on our standalone machine. For that, refer to this blog:

Click Here
Your master and slaves should all be alive. In my case, I have 3 slave instances with 1024 MB of memory each.

Step-2:

First of all we will create a package of the application; the package is a jar file of the application.
To create the package we follow these commands:

$ cd <path-of application>    //It will take us to the directory of the application.
$ sbt package                 //It will create a package of the application.

The package will be created at "target/scala-2.11/<application-name>.jar" and is ready to use.

Step-3:

To run an application we use the "spark-submit" command, which runs the "bin/spark-submit" script. It takes some options:

--class: The entry point for your application (e.g. org.apache.spark.examples.SparkPi)
--master: The master URL for the cluster (e.g. spark://knoldus-vostro-3546:7077)
--deploy-mode: Whether to deploy your driver on the worker nodes (cluster) or locally as an external client (client) (default: client)
--conf: Arbitrary Spark configuration property in key=value format. For values that contain spaces, wrap "key=value" in quotes.
application-jar: Path to a bundled jar including your application and all dependencies. The URL must be globally visible inside your cluster, for instance an hdfs:// path or a file:// path that is present on all nodes.
application-arguments: Arguments passed to the main method of your main class, if any.

To run this command we have to go to the Spark folder. Remember, this is the location of your Spark directory (e.g. my Spark is located at "/home/knoldus/Softwares/spark-1.3.0/").


$ cd <path to spark folder>        //e.g. cd /home/knoldus/Softwares/spark-1.3.0/
$ ./bin/spark-submit \
  --master spark://knoldus-Vostro-3546:7077 \
  --class sparkSql.SparkSQLjson \
  /home/knoldus/Desktop/spark-1-3-0_2.11-1.0.jar    //spark-submit with some options

Here --master spark://IP:PORT is the master URL (you can take it from your Spark master UI page, http://localhost:8080/), and --class names the class you want to run (e.g. the main class).

At last you have to give the path of your application package (jar file).

Note: sparkSql.SparkSQLjson is specific to my jar file; give it according to your own package. If your class file is at sparkSql/SparkSQLjson.class inside the jar, you write it as sparkSql.SparkSQLjson.

You can take a look at your Spark UI (http://localhost:8080) to check Running/Completed Applications.

This is how you can run an application on a cluster.

Posted in Scala | 1 Comment

Setup an Apache Spark cluster on your single standalone machine


If we want to make a cluster on a standalone machine, we need to set up some configuration.

We will be using the launch scripts that are provided by Spark, but first of all there are a couple of configurations we need to set.

First of all, set up the Spark environment: open the following file, or create it from the template file spark-env.sh.template if it is not available:

conf/spark-env.sh

and add some configuration for the workers, like:


export SPARK_WORKER_MEMORY=1g
export SPARK_EXECUTOR_MEMORY=512m
export SPARK_WORKER_INSTANCES=2
export SPARK_WORKER_CORES=2
export SPARK_WORKER_DIR=/home/knoldus/work/sparkdata

Here SPARK_WORKER_MEMORY specifies the amount of memory you want to allocate to a worker node; if this value is not given, the default is the total memory available minus 1 GB. Since we are running everything on our local machine, we wouldn't want the slaves to use up all our memory.

SPARK_WORKER_INSTANCES specifies the number of worker instances; here it is 2, since we will create only 2 slave nodes.

SPARK_WORKER_DIR is the location where the applications run, including both logs and scratch space.

SPARK_WORKER_CORES specifies the maximum number of cores each worker may use.

With the above configuration we make a cluster of 2 workers, each with 1 GB of worker memory and using at most 2 cores.

After setting up the environment, you should add the host names of the slaves to the following conf file:

conf/slaves

When using the launch scripts, this file identifies the host names of the machines the slave nodes will run on. Here we have a standalone machine, so we set localhost in slaves.
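For a single standalone machine the file is just one line (a sketch of the config file, not a shell command):

```
# conf/slaves: one worker host per line; everything runs locally here
localhost
```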

Now start the master with the following command:

sbin/start-master.sh

The master runs at spark://<system-name>:7077, e.g. spark://knoldus-dell:7077, and you can monitor the master at localhost:8080.


Now start the workers for the master with the following command:

sbin/start-slaves.sh
Now your standalone cluster is ready; use it with the Spark shell. Open the Spark shell with the following flag:

spark-shell --master spark://knoldus-Vostro-3560:7077

You can also add some Spark configuration like driver memory, number of cores, etc.

Now run the following commands in the Spark shell:

val file = sc.textFile("README.md")
file.count()
file.take(3)

Now you can see which worker worked on the task and which worker completed it in the master UI (localhost:8080).

Posted in Scala | 3 Comments