Continuous Integration : Integrating BuildKite with Your Scala Project


In a very broad sense, BuildKite (formerly called BuildBox) is a continuous integration server that allows you to keep working on your code while a CI box reports any issues. The problem with most web-based CI servers is that they have to do a lot of magic under the hood to support your custom build system in their environment.

What does this mean? If your product uses the Riak database and mine uses PostgreSQL, then a web-based CI system would have to provide default installations of both so that both our products are supported. Remember, that is just the two databases we talked about. Now bring in more databases, more external integrations and, to cap it all, different versions of each of these for different products. Suddenly a hosted CI is not really the space you want to be in :)

Contrary to this, BuildKite pushes these problems back to the development team. This is not necessarily as bad as it sounds: it means that you now have control over your build environment. You know exactly which instances and which versions you are working with.

Conceptually, it is quite similar to any other CI environment you may have worked with.
You have a project on which you want to run CI. This could be a GitHub, Bitbucket or any other repository project. You need to specify a file that gets executed when the CI is triggered; in our case the file is build.sh. It is IMPORTANT to note that this should be an executable script and BuildKite should have the rights to execute it.

Now comes the interesting part: the agents. An agent is responsible for running the CI job on your CI server. The agent polls BuildKite looking for work. When a new job is ready to run, the agent runs the bootstrap.sh script with all the environment variables required for the job.
This script is responsible for creating the build directory, cloning the repo, running the build script and uploading artifacts.

As part of setting up the environment, the agent does the following for any repository it works with:

$ mkdir -p "/home/vikas/.buildbox/builds/vikas-dev-agent/knoldus/akka-roller"
$ cd "/home/vikas/.buildbox/builds/vikas-dev-agent/knoldus/akka-roller"
$ git clean -fdq
$ git fetch -q
$ git reset --hard origin/master
HEAD is now at 87763ca Update README.md
$ git checkout -qf "87763ca0bd0bdf3e905d6bd545693a14350779fd"

Once it has downloaded the latest code onto your CI box, it runs the script that you have specified. In our case, that ends up being src/main/resources/build.sh.
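What goes into build.sh is entirely up to the team. For reference, here is a minimal sketch for an sbt-based project like ours; the sbt invocation is an assumption, so adapt it to your own build. Note the chmod at the end, since the agent needs execute rights on the script:

```shell
#!/bin/bash
set -eu

# Create the build script that the BuildKite agent will run.
# The sbt commands inside are illustrative; adapt them to your project.
mkdir -p src/main/resources
cat > src/main/resources/build.sh <<'EOF'
#!/bin/bash
set -euo pipefail

# Compile and run the tests; any non-zero exit code fails the CI build.
sbt clean compile test
EOF

# BuildKite must have rights to execute the script.
chmod +x src/main/resources/build.sh
```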

For example, refer to the diagram below. We have two projects and two CI servers. On one CI server we have the agent running for Project 1 and on the other, the agent running for Project 2. When BuildKite gets the trigger for a build, it invokes the running agent on either of the CI servers as necessary.
[Diagram: BuildKite invoking the agents for Project 1 and Project 2 on their respective CI servers]

What I like

  1. Controlled environment – we can set up our own environment
  2. Ease of use
  3. Bells and whistles like build-passing badges, etc.

What I don't like

  1. It should be free for public projects on GitHub

Our public project, which has been integrated with BuildKite, is available here.

Posted in Architecture, Devops | 2 Comments

KnolX Session : Working With Slick 2.1.0


In this session, I am going to discuss how we can work effectively with Slick 2.1.0.
Working code

Enjoy With Slick !!!!

Posted in Scala | 2 Comments

SCALA : Handling visibility of constructor fields


Constructor fields are the parameters of a class. In Scala, we can control the visibility of these parameters: it is determined by declaring them with “val”, with “var”, or with neither.

For example: class Knoldus(val name:String)

Here, “val” defines the visibility of the constructor field.

With val field

scala> class Knoldus(val name:String)
defined class Knoldus
scala> val knolObj=new Knoldus("Malti Yadav")
knolObj: Knoldus = Knoldus@59fc982f
scala> knolObj.name
res2: String = Malti Yadav
scala> knolObj.name="XYZ"
<console>:9: error: reassignment to val
knolObj.name="XYZ"
^

As we know, “val” is immutable, so only an accessor (getter) method is generated, not a mutator (setter). In the case of a “val” field, we cannot reassign the value (scala> knolObj.name=”XYZ” fails).

With var field

scala> class Knoldus(var name:String)
defined class Knoldus
scala> val knolObj=new Knoldus("Malti Yadav")
knolObj: Knoldus = Knoldus@57391cbb
scala> knolObj.name
res0: String = Malti Yadav
scala> knolObj.name="XYZ"
knolObj.name: String = XYZ
scala> knolObj.name
res1: String = XYZ

In this case, we could access the value and also reassign (mutate) it, because “var” generates a mutator method as well.

Without val or var field

scala> class Knoldus(name:String)
defined class Knoldus
scala> val knolObj=new Knoldus("Malti Yadav")
knolObj: Knoldus = Knoldus@52efbabf
scala> knolObj.name
<console>:10: error: value name is not a member of Knoldus
knolObj.name
^

In this case, the visibility of the parameter is very restricted, which is why we can neither access nor set the value from outside the class.
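Note that such a parameter is still in scope inside the class body itself; only outside access is lost. A small sketch (the greet method is purely illustrative):

```scala
// Without `val`/`var`, `name` is not a public member of the class,
// but code inside the class body can still use it.
class Knoldus(name: String) {
  def greet: String = s"Hello, $name"
}

val knol = new Knoldus("Malti Yadav")
// knol.name  // does not compile: value name is not a member of Knoldus
val greeting = knol.greet // "Hello, Malti Yadav"
```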

Note: In a “case class”, constructor fields are val by default, so the case class rules differ from the rules above.

scala> case class Knoldus(name:String)
defined class Knoldus
scala> val knol=Knoldus("Malti Yadav")
knol: Knoldus = Knoldus(Malti Yadav)
scala> knol.name
res1: String = Malti Yadav
Posted in Scala | Leave a comment

Scala in Business | Knoldus Newsletter – November 2014


Hello Folks

We are back again with the November 2014 newsletter. Here it is: Scala in Business | Knoldus Newsletter – November 2014

In this newsletter, you will get to know about the growing popularity of Spark in big data. Spark is getting more and more attention all over the world for fast data processing, and people are achieving much faster processing speeds with it. You will also find some Scala best practices and much more.

So, if you haven’t subscribed to the newsletter yet, hurry up and click on Subscribe Monthly Scala News Letter


Posted in Agile, Akka, Amazon EC2, Cassandra, Clojure, Cloud, Future, Java, LiftWeb, MongoDB, News, Node.js, Non-Blocking, NoSql, Play Framework, Reactive, Scala, Spark, Tutorial, Web | Leave a comment

Building Reactive applications with Akka


Knoldus organized a meetup on Wednesday, 29 Oct 2014 at 5:00 PM. Mr. Nilanjan Raychaudhuri from Typesafe presented this session on “Building Reactive applications with Akka”.
He did the session remotely from Germany and it was very well received by the audience at the meetup.

Nilanjan is a consultant/trainer and a member of the Play framework team at Typesafe. He has more than 14 years of experience managing and developing software solutions in Java, Ruby, Groovy and Scala. He has been zealous about programming in Scala ever since he was introduced to this beautiful language. He enjoys sharing his experience via talks at various conferences, and he is also the author of the book “Scala in Action”.

We would like to thank Nilanjan for his time and his valuable insights.

Check out this presentation to find out how Akka helps you build Reactive applications.

Posted in Akka, Reactive, Scala, Tutorial | Tagged | Leave a comment

Easiest Way To Map Optional Nested Case Class with Slick in Scala


A few days ago, I had a scenario in which I was supposed to map an optional nested case class in Slick using Scala.

case class Employee(emdId:String,name: String, record: Option[Record])
case class Record(subject: String, mark: Int)

I was trying to do this mapping in the way explained below.

class EmployeeSlickMapping(tag: Tag) extends Table[Employee](tag, "Employee") {
  def emdId = column[String]("emdId")
  def name = column[String]("name")
  def subject = column[String]("subject", O.Nullable)
  def mark = column[Int]("mark", O.Nullable)
  def record = (subject, mark) <> (Record.tupled, Record.unapply)
  def * = (emdId, name, record) <> (Employee.tupled, Employee.unapply)
}

But I was getting the compilation error below.

Multiple markers at this line
    - No matching Shape found. Slick does not know how to map the given types. Possible causes: T in Table[T] does not match your * projection. Or you use an unsupported
     type in a Query (e.g. scala List). Required level: scala.slick.lifted.FlatShapeLevel Source type: (scala.slick.lifted.Column[String], scala.slick.lifted.Column[String],
     scala.slick.lifted.MappedProjection[platform3.models.mail.Record,(String, Int)]) Unpacked type: (String, String, Option[platform3.models.mail.Record]) Packed type: Any
    - not enough arguments for method <>: (implicit evidence$2: scala.reflect.ClassTag[platform3.models.mail.Employee], implicit shape: scala.slick.lifted.Shape[_ <:
     scala.slick.lifted.FlatShapeLevel, (scala.slick.lifted.Column[String], scala.slick.lifted.Column[String], scala.slick.lifted.MappedProjection[platform3.models.mail.Record,
     (String, Int)]), (String, String, Option[platform3.models.mail.Record]), _])scala.slick.lifted.MappedProjection[platform3.models.mail.Employee,(String, String,
     Option[platform3.models.mail.Record])]. Unspecified value parameter shape.
    - No matching Shape found. Slick does not know how to map the given types. Possible causes: T in Table[T] does not match your * projection. Or you use an unsupported
     type in a Query (e.g. scala List). Required level: scala.slick.lifted.FlatShapeLevel Source type: (scala.slick.lifted.Column[String], scala.slick.lifted.Column[String],
     scala.slick.lifted.MappedProjection[platform3.models.mail.Record,(String, Int)]) Unpacked type: (String, String, Option[platform3.models.mail.Record]) Packed type: Any
    - Implicit conversions found: => anyToToShapedValue()
    - not enough arguments for method <>: (implicit evidence$2: scala.reflect.ClassTag[platform3.models.mail.Employee], implicit shape: scala.slick.lifted.Shape[_ <:
     scala.slick.lifted.FlatShapeLevel, (scala.slick.lifted.Column[String], scala.slick.lifted.Column[String], scala.slick.lifted.MappedProjection[platform3.models.mail.Record,
     (String, Int)]), (String, String, Option[platform3.models.mail.Record]), _])scala.slick.lifted.MappedProjection[platform3.models.mail.Employee,(String, String,
     Option[platform3.models.mail.Record])]. Unspecified value parameter shape.

After beating my head against it for two days, I found a solution by adding a custom mapping.

class EmployeeSlickMapping(tag: Tag) extends Table[Employee](tag, "Employee") {
  def emdId = column[String]("emdId")
  def name = column[String]("name")
  def subject = column[String]("subject")
  def mark = column[Int]("mark")
  // Lift the two nullable columns into Option[Record]: a Record exists
  // only when both columns hold values.
  def record = (subject.?, mark.?).<>[Option[Record], (Option[String], Option[Int])](
    {
      case (Some(subject), Some(mark)) => Some(Record(subject, mark))
      case _                           => None
    },
    rec => Some((rec.map(_.subject), rec.map(_.mark))))
  def * = (emdId, name, record) <> (Employee.tupled, Employee.unapply)
}

It worked, and now I am able to compile and run my code.
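The two conversion functions inside the custom mapping can be tried out as plain Scala, independently of Slick (the function names here are mine, for illustration; Slick's <> additionally wants the model-to-columns direction wrapped in an Option):

```scala
case class Record(subject: String, mark: Int)

// Columns -> model: only when both nullable columns are present
// do we build a Record; otherwise the whole nested value is None.
def fromColumns(cols: (Option[String], Option[Int])): Option[Record] = cols match {
  case (Some(subject), Some(mark)) => Some(Record(subject, mark))
  case _                           => None
}

// Model -> columns: split the optional Record back into two column values.
def toColumns(rec: Option[Record]): (Option[String], Option[Int]) =
  (rec.map(_.subject), rec.map(_.mark))

val full    = fromColumns((Some("Maths"), Some(95))) // Some(Record(Maths,95))
val partial = fromColumns((None, Some(95)))          // None
```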

Posted in Java, Scala | Tagged , | 2 Comments

Introduction to Play Framework


In this presentation, I have discussed some important features of the Play Framework.

Posted in Scala | Leave a comment

Scala in Business | Knoldus Newsletter – October 2014


Hello Folks

This time I got a bit late due to the festival week in India, but I have some interesting stuff for you.

We are back again with the October 2014 newsletter. Here it is: Scala in Business | Knoldus Newsletter – October 2014

In this newsletter you will get to know how organizations are benefiting from the Typesafe Reactive Platform, how Akka helps build scalable and fault-tolerant applications, and how Spark processes data faster than Hadoop.

So, if you haven’t subscribed to the newsletter yet, hurry up and click on Subscribe Monthly Scala News Letter


Posted in Akka, Cassandra, Cloud, Future, Java, JavaScript, LiftWeb, MongoDB, News, Node.js, Non-Blocking, NoSql | 1 Comment

KnolX Session : Design Principles for Mobiles


In this presentation, I have explained the principles and elements we need to take care of when designing a website, mostly for the mobile view.

Posted in Scala | Leave a comment

SBT-dependency tree


In this blog, I am going to describe how to view an sbt dependency tree. Last week I had a problem related to a different cross-version of a dependency. I knew the cause of the problem, but I spent a day finding out which dependency had brought in that cross-version. After some study and browsing, I came across an sbt plugin as a potential solution.
In a project there is a chance that multiple dependencies pull in the same library but in different versions; I too was a victim of such a dependency version conflict. A good way to investigate is to draw the sbt dependency tree, and the sbt plugin sbt-dependency-graph is available for exactly that.

Following are the steps to install and use sbt-dependency-graph:
a) Add the plugin to project/plugins.sbt

addSbtPlugin("net.virtual-void" % "sbt-dependency-graph" % "0.7.4")

b) Add the sbt setting in build.sbt

net.virtualvoid.sbt.graph.Plugin.graphSettings

If the project is multi-module, then add it to Parent.scala:

object Parent extends Build {
  lazy val parent = Project(id = "parent", base = file("."))
    .settings(net.virtualvoid.sbt.graph.Plugin.graphSettings: _*)
}

Now run the sbt command:

$ sbt dependency-tree
+-org.jsoup:jsoup:0.2.2
[info]   | +-commons-lang:commons-lang:2.4
[info]   |
[info]   +-org.scala-lang:scala-library:2.11.0 (evicted by: 2.11.2)
[info]   +-org.scala-lang:scala-library:2.11.1 (evicted by: 2.11.2)
[info]   +-org.scalaz:scalaz-core_2.11:7.0.6 [S]
[info]     +-org.scala-lang.modules:scala-parser-combinators_2.11:1.0.1 [S]
[info]     | +-org.scala-lang:scala-library:2.11.0 (evicted by: 2.11.2)
[info]     | +-org.scala-lang:scala-library:2.11.1 (evicted by: 2.11.2)
[info]     |
[info]     +-org.scala-lang.modules:scala-xml_2.11:1.0.1 (evicted by: 1.0.2)
[info]     +-org.scala-lang.modules:scala-xml_2.11:1.0.2 [S]
[info]     | +-org.scala-lang:scala-library:2.11.0 (evicted by: 2.11.2)
[info]     | +-org.scala-lang:scala-library:2.11.1 (evicted by: 2.11.2)
[info]     |
[info]     +-org.scala-lang:scala-library:2.11.0 (evicted by: 2.11.2)

or 

$ sbt dependency-graph

The plugin provides many more options; for more information, see the sbt-dependency-graph documentation.
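Once the tree has revealed the conflict, one common fix is to pin the contested library explicitly in build.sbt. A sketch, assuming sbt 0.13 syntax (the version number is illustrative):

```scala
// Force a single version of scala-library so that the evicted
// 2.11.0/2.11.1 copies reported above are overridden everywhere.
dependencyOverrides += "org.scala-lang" % "scala-library" % "2.11.2"
```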

Enjoy with sbt projects !!!

Posted in Scala | Leave a comment