Display long dynamic text in adjacent columns, like a newspaper layout

Folks, if we have quite a long text and want to show it in three adjacent columns, one way to implement it is via three <div>’s or <p>’s in HTML plus some CSS properties.

For example, suppose we have a long text such as the following:

And we want to show it in three separate adjacent columns for better readability. That can be done as follows:



Furthermore, for the adjacent view, we have to write the following CSS code:

    float: left;
    width: 400px;
    padding: 0 20px;

To apply a vertical rule between these columns so that they have a better look and feel, we have to write another CSS class as follows:

    border-right: 1px solid #cccccc;

and add it to the first two divs only.

Even with this CSS and HTML, however, we cannot manage the following scenario: if the complete text to be displayed in the <p> tag is rendered dynamically (for example, through a particular selector), it becomes complicated to work out where the text should break across the three divs/columns.

Unfortunately, this is impossible to do with CSS and HTML alone without forcing column breaks at fixed positions, restricting the markup allowed in the text, or using scripting.

CSS3 introduced new properties to make such layouts. As you may have noticed, newspapers, magazines, etc. use a multi-column layout because people have trouble reading text when lines are too long: if it takes too long for the eyes to move from the end of one line to the beginning of the next, readers lose track of which line they were on.

This limitation is solved by the following new CSS3 properties, which extend the traditional block layout mode.

These properties solve the issue discussed above: the code becomes concise, clear, and easy to implement, instead of relying on a complicated structure driven by JavaScript. All the extra static divs are removed, and a single CSS class controls how the content is displayed:

Multi-column Properties

  • column-width
  • column-count
  • column-gap
  • column-rule
  • column-span

For the above example, we just have to write the following:



.main {
    column-count: 3;
    column-gap: 40px;
    column-width: 300px;
}

h2 {
    column-span: all;
}

Its output will be:


To apply a vertical rule between these columns, we just have to add one more CSS property to the main class, namely column-rule, rather than adding an extra class for it.
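A minimal sketch of that change, reusing the values from the example above (the rule color is an assumption here, matching the #cccccc used earlier):

```css
.main {
    column-count: 3;
    column-gap: 40px;
    column-width: 300px;
    column-rule: 1px solid #cccccc; /* vertical rule between the columns */
}
```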

Let’s explain the multi-column properties:

  1. The column-width property specifies a suggested, optimal width for the columns.
  2. The column-count property specifies the number of columns an element should be divided into.
  3. The column-gap property specifies the gap between the columns (i.e. the spacing between them).
  4. The column-rule property is a shorthand for specifying the color, width, and style of the vertical rule drawn between the columns.
  5. The column-span property specifies how many columns an element should span across.


Posted in CSS, HTML, JavaScript, Mobile Development, multiple column property, Scala, web application, Web Designing | Leave a comment

Create a self-signed SSL Certificate using OpenSSL.

In this blog I’ll be giving a little bit of insight on SSL certificates and then how to create a self-signed certificate using OpenSSL.

Let’s start with: what is an SSL certificate?

SSL stands for Secure Sockets Layer. SSL is a global standard technology that creates encrypted communication between a web browser and a web server. It helps decrease the risk of losing your personal information (e.g. passwords, emails, credit card numbers, etc.).

To create this secure connection, an SSL certificate is used, which is installed on the web server. So, an SSL certificate is a bit of code on your web server that provides security for your online communications. An SSL certificate also contains identification information (i.e. your organisational information).

SSL certificates mainly serve two functions:

  • Authenticating the identity of the server (so that users know they are not sending their information to the wrong server).
  • Encrypting the data that is being transmitted.


Securing your application with an SSL certificate is extremely important. In most situations we require a trusted certificate (generated by a CA, a Certification Authority), but there are many cases where you can use a self-signed certificate.

So, the next question is: when to use a self-signed certificate?

A self-signed certificate is a certificate that is signed by its own creator rather than by a trusted authority. Self-signed certificates are less trustworthy, since any attacker can create a self-signed certificate and launch a man-in-the-middle attack.

Self-signed certificates can be used at places like:

  • Intranet
  • Personal sites with few visitors
  • During the development or testing phase of your application.

Never use a self-signed certificate on applications that transfer valuable information like credit card numbers, bank account numbers, etc.

When using a self-signed certificate, visitors will see the following warning in their browser until they permanently store the certificate in their certificate store.


So, by now you have some insight into SSL certificates. Now let’s see how to create one using OpenSSL.

Creating a self-signed certificate using OpenSSL

OpenSSL is a command-line toolkit for the TLS (Transport Layer Security) and SSL (Secure Sockets Layer) protocols.

Now let’s create the certificate:

  • Open terminal(linux).
  • Run the following commands:
  1. openssl genrsa -des3 -out server.key 2048
  2. openssl req -new -key server.key -out server.csr
  3. openssl x509 -req -days 365 -in server.csr -signkey server.key -out server.crt

The first command generates a 2048-bit (recommended) RSA private key. After running the command, it will ask for a passphrase. If you want to create a key without a passphrase, you can remove -des3 from the command.
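If you later want to strip the passphrase from an existing key, OpenSSL can rewrite it. A sketch, with the passphrase and file names as placeholder assumptions:

```shell
# Generate a passphrase-protected key (as in the first command above),
# then write an unencrypted copy of it.
openssl genrsa -des3 -passout pass:changeit -out server.key 2048
openssl rsa -in server.key -passin pass:changeit -out server-nopass.key
```

This is handy for servers that must restart unattended, at the cost of keeping the key unencrypted on disk.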

The second command generates a CSR (Certificate Signing Request). Normally a CA would use the .csr file to issue a certificate, but in our case we use this .csr file to create the self-signed certificate. Once you run the command, it will prompt you to enter your country, company name, etc.


If you want to configure your certificate for localhost, you can give ‘localhost’ in the Common Name field instead of a domain name.

The third command creates the self-signed x509 certificate suitable for use on a web server.
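The three steps can also be collapsed into one non-interactive command, and the result inspected afterwards. A sketch, with the subject fields as placeholder assumptions:

```shell
# One-shot variant of the three commands above: -nodes skips the
# passphrase, -subj skips the interactive prompts.
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout server.key -out server.crt -days 365 \
  -subj "/C=IN/O=Example/CN=localhost"

# Inspect the result: subject and issuer match, because the
# certificate is signed by its own key.
openssl x509 -in server.crt -noout -subject -issuer -dates
```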

So this is how you can create a self-signed certificate. In my next blog, I will explain keystore generation in PKCS12 format.

Till then, enjoy !🙂





Posted in Scala, Security | 2 Comments

Deep Dive Into Elasticsearch

In this presentation, we discuss how Elasticsearch handles various operations like insert, update, and delete. We also cover what an inverted index is and how segment merging works.

You can also watch the video on youtube:

Posted in Scala | Leave a comment

Introduction to AWS IAM

AWS IAM is a web service that gives you secure, controlled access to AWS services for your users. IAM policies specify which services/actions are allowed or denied. You attach policies to groups, users, or roles, which are then subject to the permissions you define. In other words, IAM policies define what your users can do with your AWS services.

IAM stands for Identity and Access Management, i.e. it controls which user has access to which services.

Policies can be granted either programmatically through the AWS API or via the AWS Management Console. IAM gives you the following features:

– Shared access to your AWS account.
– Granular permissions.
– Secure access to your AWS resources.
– Identity information.
– Integration with many AWS services.
– Free to use.

Ways to access IAM:

– The AWS Management Console, or the AWS API programmatically (e.g. via the AWS CLI).

When to create an IAM user:

– You create an AWS account and you are the only person who works in that account.
– Create an IAM user for each individual who needs access to your AWS resources, assign appropriate permissions to each user, and give each their own credentials.
– When you want to use the AWS CLI to work with AWS: the CLI needs credentials to make calls to AWS, so create an IAM user and give that user permission to run the CLI.

Use case:

Allow each IAM user access to his/her own objects in a bucket

In the above diagram each user has access to his/her own objects in the bucket.
Instead of attaching policies to each user, policies can be attached at the group level, and then we can add users to that group. The following policy allows a set of Amazon S3 permissions in the bucketName/${aws:username} folder. When the policy is evaluated, the policy variable ${aws:username} is replaced by the requesting user's name.

For example:
If Vikas sends a request to put an object, the operation is allowed only if Vikas is uploading to the bucketName/Vikas folder.

{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": [
      "s3:PutObject",
      "s3:GetObject",
      "s3:DeleteObject"
    ],
    "Resource": "arn:aws:s3:::examplebucket/${aws:username}/*"
  }]
}

Note: when writing a policy you must specify the Version element.

Version :

The Version element specifies the version of the policy language. It must be specified before the Statement element. The current version is 2012-10-17.

Statement :

The Statement element is the main element of the policy. This element is required. The Statement element contains an array of individual statements. Each individual statement is a JSON block enclosed in braces { }.

Effect :

The Effect element is required and specifies whether the statement will result in an allow or an explicit deny. Valid values for Effect are Allow and Deny.

Action :

The Action element describes the specific action or actions that will be allowed or denied. Each AWS service has its own set of actions that describe the tasks you can perform with that service.

Resource :

The Resource element specifies the object or objects that the statement covers. Statements must include either a Resource or a NotResource element. You specify a resource using an ARN.

That’s all for now.

If you have any questions or suggestions, submit a comment below. Stay tuned for the next blog on cloud😉




Posted in Amazon, AWS, AWS Services, Cloud, Devops, S3, Scala | Leave a comment

Upgrading to Selenium 3 with Gecko Driver

In this blog I will be discussing the latest version of Selenium, i.e. Selenium 3. To use Selenium 3, we need the Gecko driver to run test cases in the Mozilla Firefox browser.

So, the first question that arises in our mind is “What is Gecko?”

Gecko is the name of the layout engine developed by the Mozilla Project. Gecko’s function is to read web content, such as HTML, CSS, XUL, and JavaScript, and render it on the user’s screen or print it.

The WebDriver protocol is implemented by Firefox using an executable called geckodriver. It starts a server on your system, and all your tests communicate with this server to run. It acts as a proxy between the local and remote ends and translates calls into the Marionette automation protocol. To use Marionette and Firefox with Selenium 3, all you need to do is:

  1. Install geckodriver on your system.
  2. Add the path of geckodriver in your code.
  3. Use Firefox as the browser in your code.

Currently I am using Firefox 49.0 & Ubuntu 16.04 LTS

Here is the sample code:

import org.openqa.selenium.firefox.FirefoxDriver;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.remote.DesiredCapabilities;

/**
 * Created by swati on 18/10/16.
 */
public class FirstTest {

    public static void main(String[] args) {

        // Point Selenium at the geckodriver executable (adjust the path for your system).
        System.setProperty("webdriver.gecko.driver", "/path/to/geckodriver");

        DesiredCapabilities capabilities = DesiredCapabilities.firefox();
        WebDriver driver = new FirefoxDriver(capabilities);
        String url = "http://www.amazon.in/";
        try {
            driver.get(url);
            System.out.println("Successfully opened the website " + url);

            String title = driver.getTitle();
            int titleLength = title.length();
            System.out.println("Title of the page is : " + title);
            System.out.println("Length of the title is : " + titleLength);

            String actualUrl = driver.getCurrentUrl();
            if (actualUrl.equals(url)) {
                System.out.println("Verification Successful - The correct Url is opened.");
            } else {
                System.out.println("Verification Failed - An incorrect Url is opened.");
                System.out.println("Actual URL is : " + actualUrl);
                System.out.println("Expected URL is : " + url);
            }

            String pageSource = driver.getPageSource();
            int pageSourceLength = pageSource.length();
            System.out.println("Total length of the Page Source is : " + pageSourceLength);
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            driver.quit();
        }
    }
}


You can find a demo project using selenium 3 with gecko driver on my repo : Selenium3withGeckoDriver



Posted in Java, JavaScript, sbt, Test, testing, tests, Web, web application | Leave a comment

Cassandra Data Modeling – Primary , Clustering , Partition , Compound Keys

In this post we are going to discuss the different keys available in Cassandra. The primary key concept in Cassandra differs from that of relational databases, so it is worth spending time to understand it.

Let's take an example and create a table with student_id as the primary key column.
1) Primary key
create table person (student_id int primary key, fname text, lname text, dateofbirth timestamp, email text, phone text );

In Cassandra a table can have a number of rows. Each row is referenced by a primary key, also called the row key. There are a number of columns in a row, but the number of columns can vary in different rows.
For example, one row in a table can have three columns whereas another row in the same table can have ten columns. It is also important to note that in Cassandra both column names and values have binary types, which means column names can hold binary values such as a string, a timestamp, or an integer. This is different from SQL databases, where each row has a fixed number of columns and column names can only be text.

We saw that student_id was used as the row key to refer to the person data.

2) Compound primary key:
As the name suggests, a compound primary key is composed of one or more columns referenced in the primary key. One component of the compound primary key is called the partition key, whereas the other component is called the clustering key. The following are different variations of primary keys. Please note that C1, C2, C3, … represent columns in the table.

C1: the primary key has only one partition key and no clustering key.
(C1, C2): column C1 is the partition key and column C2 is the clustering key.
(C1, C2, C3, …): column C1 is the partition key and columns C2, C3, … make up the clustering key.
(C1, (C2, C3, …)): same as the previous variation, i.e. column C1 is the partition key and columns C2, C3, … make up the clustering key.
((C1, C2, …), (C3, C4, …)): columns C1, C2, … make up the partition key and columns C3, C4, … make up the clustering key.

It is important to note that when the compound key is C1, C2, C3, the first key C1 becomes the partition key and the rest of the keys become part of the clustering key. To make a composite partition key we have to specify the keys in parentheses, such as ((C1, C2), C3, C4). In this case C1 and C2 form the partition key and C3, C4 form the clustering key.
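As a sketch (table and column names here are illustrative, not from the post), a composite partition key is declared with an extra pair of parentheses:

```sql
create table marks_by_day (
    student_id int,
    exam_date  timestamp,
    subject    text,
    marks      int,
    primary key ((student_id, exam_date), subject)
);
-- (student_id, exam_date) together form the partition key;
-- subject is the clustering key within each partition.
```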

3) Partition key
The purpose of the partition key is to identify the partition or node in the cluster that stores a given row. When data is read from or written to the cluster, a function called the partitioner computes the hash value of the partition key. This hash value determines the node/partition that contains the row. For example, rows whose partition key hashes range from 1000 to 1234 may reside on node A and rows whose hashes range from 1235 to 2000 may reside on node B, as shown in figure 1. If a row's partition key hashes to 1233, it will be stored on node A.
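This hash-to-node mapping can be sketched in a few lines of Python. This is a toy stand-in for Cassandra's real partitioner (which uses Murmur3 hashing over a token ring); the ranges below are the hypothetical ones from the example:

```python
# Toy partitioner: map a partition-key hash into token ranges owned by nodes.
def locate_node(partition_key_hash, ranges):
    """ranges: list of (low, high, node) tuples covering the token space."""
    for low, high, node in ranges:
        if low <= partition_key_hash <= high:
            return node
    raise ValueError("hash outside all token ranges")

# Hypothetical ranges matching the example in the text.
ranges = [(1000, 1234, "node A"), (1235, 2000, "node B")]

print(locate_node(1233, ranges))  # prints: node A
```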



4) Clustering key
The purpose of the clustering key is to store row data in sorted order. The sorting is based on the columns included in the clustering key. This arrangement makes it efficient to retrieve data using the clustering key.
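The sort order can even be declared explicitly when the table is created. A sketch (the table and column names are illustrative, not from the post):

```sql
create table student_marks (
    stuid     int,
    exam_date timestamp,
    marks     int,
    primary key (stuid, exam_date)
) with clustering order by (exam_date desc);
-- within each stuid partition, rows are stored sorted by exam_date, newest first
```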

5) Example:
To make these concepts clear we will consider the example of a school system.

>Create a keyspace with the replication strategy 'SimpleStrategy' and replication_factor 1.
create keyspace students_details with replication = {'class' : 'SimpleStrategy', 'replication_factor' : 1};

>Now switch to students_details keyspace:
cqlsh> use students_details ;

>Command to check the number of tables present in the keyspace:
cqlsh:students_details> desc TABLES;

>We will create a table student which contains general information about any student. Type the following create statement into cqlsh.
create table student (stuid int, avg_marks float, description text, primary key (stuid));

> Type the following insert statements to enter some data into this table.
insert into student (stuid, avg_marks, description) values (1, 25.5, 'student 1');
insert into student (stuid, avg_marks, description) values (2, 35.5, 'student 2');

>To view the details just inserted:
cqlsh:students_details> select * from student;

 stuid | avg_marks | description
-------+-----------+-------------
     1 |      25.5 |   student 1
     2 |      35.5 |   student 2

> We can see how Cassandra has stored this data under the hood by using the cassandra-cli tool. Run cassandra-cli in a separate terminal window and type the command there. (Note: the cassandra-cli utility is deprecated and was removed in Cassandra 3.0. For ease of use and performance, switch from Thrift and the CLI to CQL and cqlsh.)

So if you are using a Cassandra version above 3.0, use the commands below.

Using the EXPAND command in cqlsh, we can view detailed info for the queries.
>EXPAND with no arguments shows the current value of the expand setting.

cqlsh:students_details> EXPAND
Expanded output is currently disabled. Use EXPAND ON to enable.

>Enabling the expand command
cqlsh:students_details> EXPAND ON
Now Expanded output is enabled

>Now view the details inserted above (the stuid will appear in red in cqlsh, marking the primary key/row key):
cqlsh:students_details> select * from student;

@ Row 1
stuid       | 1
avg_marks   | 25.5
description | student 1

@ Row 2
stuid       | 2
avg_marks   | 35.5
description | student 2

(2 rows)

We can see from the above output that the stuid has become the row key and it identifies individual rows.
cqlsh:students_details> select token(stuid) from student;

@ Row 1
system.token(stuid) | -4069959284402364209

@ Row 2
system.token(stuid) | -3248873570005575792

You can also see that there are two tokens, one for each row.

We can use columns in the primary key to filter data in the select statement. Type the following command in the cqlsh window:
select * from student where stuid = 1;

Now we will create another table called marks, which records the marks of each student for every day (say new exams are taken and marks recorded every day). Type the following command in cqlsh:

Continue reading

Posted in Best Practices, big data, Cassandra, database, NoSql, Scala | 2 Comments

Knolx: Introduction to KnockoutJs

Hello all,

Knoldus organised a session on Friday, 07 October 2016, in which we had an introduction to Knockout.js.

Knockout is a JavaScript library that helps you create responsive displays (UIs). It is based on the Model–View–ViewModel (MVVM) pattern and provides a simple two-way data-binding mechanism between your data model and the UI.

The slides for the session are as follows:

And you can also watch the video on YouTube:


Posted in Scala | Leave a comment

Knolx: Introduction to Apache Cassandra

Hello everyone,

Knoldus organized a KnolX session on Friday, 07 October 2016, in which we had an introduction to Apache Cassandra.

Cassandra is a distributed database that allows us to store data on multiple nodes, with multiple replicas, in such a way that even if a node goes down, another node can take over for it.

The slides for the session are as follows:

And the YouTube video link is as follows:


Posted in Cassandra, Scala | Leave a comment

Schedule a Job in Play with Akka

Greetings to all ,

In this blog we will see how we can schedule a job to run after some time period in Play using Scala.

I assume that readers of this blog have a little knowledge of:

  • Akka
  • Play and
  • Scala

So here we start with an actor in which we will write the job to be scheduled, as follows:
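As a minimal sketch of the idea (the actor name, message, and delays are illustrative assumptions, not from the post), the actor and the scheduling calls could look like this:

```scala
import scala.concurrent.duration._
import akka.actor.{Actor, ActorSystem, Props}

// Hypothetical job actor: whatever work needs scheduling goes in receive.
class JobActor extends Actor {
  def receive = {
    case "runJob" => println("Job executed!") // the scheduled work goes here
  }
}

object SchedulerDemo extends App {
  val system = ActorSystem("scheduler-demo")
  import system.dispatcher // execution context for the scheduler

  val jobActor = system.actorOf(Props[JobActor], "job-actor")

  // Run once, 30 seconds from now.
  system.scheduler.scheduleOnce(30.seconds, jobActor, "runJob")

  // Or run repeatedly: first after 10 seconds, then every 5 minutes.
  system.scheduler.schedule(10.seconds, 5.minutes, jobActor, "runJob")
}
```

In a Play application this would typically live in startup code (e.g. an eagerly loaded module) so the schedule is registered when the app boots.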

Continue reading

Posted in Scala | Leave a comment

Cassandra with Spark 2.0 : Building Rest API !

In this tutorial, we will be demonstrating how to build a REST service in Spark, using Akka HTTP as a side-kick ;) and Cassandra as the data store.

We have seen the power of Spark earlier, and when it is combined with Cassandra in the right way it becomes even more powerful. Earlier we saw how to build a REST API on Spark and Couchbase in this blog post, so this one is about doing the same thing with Cassandra.

So let's get started with the code:

Your build.sbt should look like this :

name := "cassandra-spark-akka-http-starter-kit"

version := "1.0"

scalaVersion := "2.11.8"

organization := "com.knoldus"

val akkaV = "2.4.5"
libraryDependencies ++= Seq(
  "org.apache.spark" % "spark-core_2.11" % "2.0.0",
  "org.apache.spark" % "spark-sql_2.11" % "2.0.0",
  "com.typesafe.akka" %% "akka-http-core" % akkaV,
  "com.typesafe.akka" %% "akka-http-experimental" % akkaV,
  "com.typesafe.akka" %% "akka-http-testkit" % akkaV % "test",
  "com.typesafe.akka" %% "akka-http-spray-json-experimental" % akkaV,
  "org.scalatest" %% "scalatest" % "2.2.6" % "test",
  "com.datastax.spark" % "spark-cassandra-connector_2.11" % "2.0.0-M3",
  "net.liftweb" % "lift-json_2.11" % "2.6.2"
)


assembleArtifact in assemblyPackageScala := false // We don't need the Scala library, Spark already includes it

assemblyMergeStrategy in assembly := {
  case m if m.toLowerCase.endsWith("manifest.mf") => MergeStrategy.discard
  case m if m.toLowerCase.matches("meta-inf.*\\.sf$") => MergeStrategy.discard
  case "reference.conf" => MergeStrategy.concat
  case _ => MergeStrategy.first
}

ivyScala := ivyScala.value map {
  _.copy(overrideScalaVersion = true)
}

fork in run := true

Database Access layer:

And your Database Access layer should look like this :

trait DatabaseAccess {

  import Context._

  def create(user: User): Boolean =
    Try(sc.parallelize(Seq(user)).saveToCassandra(keyspace, tableName)).toOption.isDefined

  def retrieve(id: String): Option[Array[User]] =
    Try(sc.cassandraTable[User](keyspace, tableName).where(s"id='$id'").collect()).toOption
}

object DatabaseAccess extends DatabaseAccess

Service Layer:

Now your routing file should look like this :

package com.knoldus.routes

import java.util.UUID

import akka.actor.ActorSystem
import akka.event.Logging
import akka.http.scaladsl.model._
import akka.http.scaladsl.server.Directives._
import akka.http.scaladsl.server.{ExceptionHandler, Route}
import akka.stream.ActorMaterializer
import com.knoldus.domain.User
import com.knoldus.factories.DatabaseAccess
import net.liftweb.json._
import java.util.Date
import net.liftweb.json.Extraction._

trait SparkService extends DatabaseAccess {

  implicit val system: ActorSystem
  implicit val materializer: ActorMaterializer
  val logger = Logging(system, getClass)

  implicit def myExceptionHandler =
    ExceptionHandler {
      case e: ArithmeticException =>
        extractUri { uri =>
          complete(HttpResponse(StatusCodes.InternalServerError, entity = s"Data is not persisted and something went wrong"))
        }
    }

  implicit val formats: Formats = new DefaultFormats {
    outer =>
    override val typeHintFieldName = "type"
    val typeHints = ShortTypeHints(List(classOf[String], classOf[Date]))
  }

  val sparkRoutes: Route = {
    get {
      path("create" / "name" / Segment / "email" / Segment) { (name: String, email: String) =>
        complete {
          val documentId = "user::" + UUID.randomUUID().toString
          try {
            val user = User(documentId, name, email)
            val isPersisted = create(user)
            if (isPersisted) {
              HttpResponse(StatusCodes.Created, entity = s"Data is successfully persisted with id $documentId")
            } else {
              HttpResponse(StatusCodes.InternalServerError, entity = s"Error found for id : $documentId")
            }
          } catch {
            case ex: Throwable =>
              logger.error(ex, ex.getMessage)
              HttpResponse(StatusCodes.InternalServerError, entity = s"Error found for id : $documentId")
          }
        }
      }
    } ~ path("retrieve" / "id" / Segment) { (listOfIds: String) =>
      get {
        complete {
          try {
            val idAsRDD: Option[Array[User]] = retrieve(listOfIds)
            idAsRDD match {
              case Some(data) => HttpResponse(StatusCodes.OK, entity = data.headOption.fold("")(x => compact(render(decompose(x)))))
              case None => HttpResponse(StatusCodes.InternalServerError, entity = s"Data is not fetched and something went wrong")
            }
          } catch {
            case ex: Throwable =>
              logger.error(ex, ex.getMessage)
              HttpResponse(StatusCodes.InternalServerError, entity = s"Error found for ids : $listOfIds")
          }
        }
      }
    }
  }
}


This blog is a continuation of building your REST services using Spark and Couchbase; here we just changed the data store to Cassandra, hence I have not explained each and every step. It contains just a simple implementation of a REST API! If you want the details, please take a look here:

Scala, Couchbase, Spark and Akka-http: A combinatory tutorial for starters

In the future, we will be continuing the same thing using Neo4j too😛.

So stay tuned !

You can find the code here on my github: shiv4nsh

If You have any questions you can contact me here or on Twitter: @shiv4nsh

I would be happy to help.

Till then

happy hAKKAing !  !  !


Posted in Akka, akka-http, apache spark, Cassandra, Scala, scalatest, Spark | 2 Comments