Send an Email Through the Amazon SES SMTP Interface with Scala

The following procedure shows you how to use the AWS Toolkit for IntelliJ to create an AWS SDK project and modify the Scala code to send an email through Amazon SES.

In this getting started tutorial, you send an email to yourself so that you can check to see if you received it. For further experimentation or load testing, use the Amazon SES mailbox simulator.

Prerequisites – Before you begin, perform the following tasks:

  1. Verify your email address with Amazon SES – Before you can send an email with Amazon SES, you must verify that you own the sender’s email address. If your account is still in the Amazon SES sandbox, you must also verify the recipient email address. The easiest way to verify email addresses is by using the Amazon SES console. For more information, see Verification Procedures.
  2. Get your AWS credentials – You need an AWS access key ID and AWS secret access key to access Amazon SES using an SDK. You can find your credentials by using the Security Credentials page in the AWS Management Console. For more information about credentials, see Using Credentials With Amazon SES.
  3. Install an IDE – You should have IntelliJ IDEA or Eclipse installed.
  4. Add the AWS SDK dependency – In an sbt project you can pull in the AWS SDK with: libraryDependencies += "com.amazonaws" % "aws-java-sdk" % "1.11.152"
  5. Create a shared credentials file – You must create a shared credentials file containing your access key ID and secret access key.
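For reference, the shared credentials file from step 5 is a plain-text file, typically located at ~/.aws/credentials; the key values below are placeholders:

```
[default]
aws_access_key_id = YOUR_ACCESS_KEY_ID
aws_secret_access_key = YOUR_SECRET_ACCESS_KEY
```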


The following procedure shows how to send an email through Amazon SES using the AWS SDK from Scala.

  • Create an sbt Scala project in IntelliJ
  • Add the above dependency to build.sbt
  • Replace the entire contents of AmazonSESMailingApi.scala with the following code:

The whole code is presented here.
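Since the full listing is not reproduced in this excerpt, here is a minimal sketch of what AmazonSESMailingApi.scala could look like using the aws-java-sdk dependency above. The addresses, subject, body text and region are placeholder assumptions; both addresses must be verified while your account is in the SES sandbox.

```scala
import com.amazonaws.regions.Regions
import com.amazonaws.services.simpleemail.AmazonSimpleEmailServiceClientBuilder
import com.amazonaws.services.simpleemail.model.{Body, Content, Destination, Message, SendEmailRequest}

object AmazonSESMailingApi {
  // Placeholder addresses: replace with your own verified addresses
  val From = "sender@example.com"
  val To   = "recipient@example.com"

  // Build the SES request separately so it can be inspected/tested without sending
  def buildRequest(from: String, to: String, subject: String, text: String): SendEmailRequest =
    new SendEmailRequest()
      .withSource(from)
      .withDestination(new Destination().withToAddresses(to))
      .withMessage(new Message()
        .withSubject(new Content(subject))
        .withBody(new Body(new Content(text))))

  def main(args: Array[String]): Unit = {
    val client = AmazonSimpleEmailServiceClientBuilder.standard()
      .withRegion(Regions.US_EAST_1) // use the region in which your addresses are verified
      .build()
    val result = client.sendEmail(buildRequest(From, To, "Amazon SES test", "Hello from Amazon SES!"))
    println(s"Email sent! Message ID: ${result.getMessageId}")
  }
}
```

Running it picks up your credentials from the shared credentials file created in the prerequisites.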



Let’s Consume a Micro-service

Micro-services architecture is being widely adopted and guess what, Lagom is the most efficient way to achieve it.  While creating micro-services, we usually need to interact with other micro-services and consume their data. So the idea behind this blog is to guide you through the steps needed to integrate an unmanaged service.

Integrating an external API in Lagom seems challenging at first, but it is actually very straightforward. Following are the steps:

Step 1: Register the API as an unmanaged service in your managed service’s impl pom.xml


This will ensure that the service locator can find the external API to be used.

Step 2: Create an interface for the unmanaged service that could be used to communicate with that service.

public interface ServiceUService extends Service {
    ServiceCall<NotUsed, String> getResultFromUnManagedService();

    @Override
    default Descriptor descriptor() {
        // The route here must match the external API's own route
        return named("serviceU").withCalls(
                Service.restCall(GET, "/key/value/one/two",
                        this::getResultFromUnManagedService));
    }
}
This ensures that when our managed service calls the “getResultFromUnManagedService” method, the service locator redirects the call to the unmanaged service’s API with the same route as we provide in this interface.


Face detection was never this simple!

What is AWS Rekognition?

Amazon Rekognition is an AWS service that makes it really easy for you to enable image analysis in your applications. With Rekognition, you can detect objects, scenes or faces and label them; recognize celebrities; and identify inappropriate content in images.

You can also search and compare faces, something which can be implemented for use cases such as employee verification and marking attendance. Rekognition’s API enables you to quickly add sophisticated deep learning-based visual search and image classification to your applications.

(Excerpt taken from the official AWS documentation.)

Rekognition is the result of decade-long research and deep learning on billions of images.

What does it have in store for us?

1) How easy is it to implement the API?

I made use of the AWS SDK, which made this super simple. Let me give you an example of the code that detects objects in an image and labels them:

// photo, bucket and rekognitionClient are assumed to be initialized earlier
DetectLabelsRequest request = new DetectLabelsRequest()
        .withImage(new Image().withS3Object(
                new S3Object().withName(photo).withBucket(bucket)))
        .withMaxLabels(10)
        .withMinConfidence(75F);

try {

    DetectLabelsResult result = rekognitionClient.detectLabels(request);
    List<Label> labels = result.getLabels();

    System.out.println("Detected labels for " + photo);
    for (Label label : labels) {
        System.out.println(label.getName() + ": " + label.getConfidence().toString());
    }

} catch (AmazonRekognitionException e) {
    e.printStackTrace();
}

2) How reliable is it?

I’ve tested it with DetectLabels and SearchFacesByImage(face recognition) and can only talk from my experience, but it is pretty damn reliable. You can change the `ConfidenceThreshold` to allow flexibility around this, but I found that 90% was a good match for taking various images in diverse environments.

3) Is it scalable?


With Amazon Rekognition, you only pay for the number of images you analyze and the face metadata you store. I’m availing the free tier usage as of now.

As with all AWS services, you pay for Rekognition on the tiered pricing model.

What can it do?

Common use cases for using Amazon Rekognition include the following:

  • Searchable image library – Amazon Rekognition makes images searchable so you can discover objects and scenes that appear within them.
  • Face-based user verification – Amazon Rekognition enables your applications to confirm user identities by comparing their live image with a reference image.
  • Sentiment and demographic analysis – Amazon Rekognition detects emotions such as happy, sad, or surprise, and demographic information such as gender from facial images.
  • Facial recognition – With Amazon Rekognition, you can search your image collection for similar faces by storing faces, using the IndexFaces API operation. You can then use the SearchFaces operation to return high-confidence matches. A face collection is an index of faces that you own and manage. Identifying people based on their faces requires two major steps in Amazon Rekognition:
    1. Index the faces.
    2. Search the faces.
  • Image Moderation – Amazon Rekognition can detect explicit and suggestive adult content in images. Developers can use the returned metadata to filter inappropriate content based on their business needs. These labels indicate specific categories of adult content, thus allowing granular filtering and management of large volumes of user generated content (UGC), for example on social and dating sites, photo sharing platforms, blogs and forums, apps for children, e-commerce sites, and entertainment and online advertising services.
  • Celebrity Recognition – Amazon Rekognition can recognize celebrities within supplied images. Rekognition can recognize thousands of celebrities across a number of categories, such as politics, sports, business, entertainment, and media.

Amazon Rekognition : How It Works

The computer vision API operations that Amazon Rekognition provides can be grouped in the following categories:

  • Non-storage API operations – The API operations in this group do not persist any information on the server. You provide input images, the API performs the analysis, and returns results, but nothing is saved on the server. The API can be used for operations such as the following:
    • Detect labels or faces in an image. A label refers to any of the following: objects (for example, flower, tree, or table), events (for example, a wedding, graduation, or birthday party), or concepts (for example, a landscape, evening, and nature). The input image you provide to these API operations can be in JPEG or PNG image format.
    • Compare faces in two images and return faces in the target image that match a face in the source image.
    • Detect celebrities in images.
    • Analyse images for explicit or suggestive adult content.
  • Storage-based API operations – Amazon Rekognition provides an API operation that detects faces in the input image and persists facial feature vectors in a database on the server. Amazon Rekognition provides additional API operations you can use to search the persisted face vectors for face matches. None of the input image bytes are stored.

Rekognition in Action

The next question arises: how do we quickly set it up and get it working? AWS Rekognition provides free usage of up to 5,000 API calls and up to 1,000 indexed images (both limits on a per-month basis). You would just need to sign up for AWS and avail these under the free-tier usage.

I will be adding a detailed post on setting up Rekognition soon. For now, we’re going to see it in action. Check out the video below:


Starting with Blockchain Chaincode using Golang

Chaincode, or a smart contract, is a fragment of code written in supported languages like Java or Go that is deployed onto a network of Hyperledger Fabric peer nodes.

Chaincodes run network transactions, which are validated and then appended to the shared ledger. In simple terms, they are an encapsulation of business-network transactions in code.

In this blog, we will learn how to develop chaincode with GoLang for a blockchain network based on Hyperledger Fabric v0.6.

Development environment required for Chaincode development:

  • Go 1.6
  • Hyperledger Fabric
    • Use the following command to install Hyperledger fabric 0.6:

git clone -b v0.6

Implementing the chaincode interface


When akka stream meets RabbitMQ

“Reactive Streams is an initiative to provide a standard for asynchronous stream processing with non-blocking back pressure.” – that is how Reactive Streams is defined on Wikipedia. There are two other implementations of Reactive Streams besides Akka Streams, i.e., Reactor and Netflix’s RxJava. However, since the Reactive Streams manifesto was published, the most mature implementation available has been akka-stream. According to the Reactive Manifesto, an implementation must have the following properties: responsive, resilient, elastic and message-driven.


Now the question arises: why do we need reactive streams? Well, Wikipedia gives that answer too: “The main goal of Reactive Streams is to govern the exchange of stream data across an asynchronous boundary—like passing elements on to another thread or thread-pool—while ensuring that the receiving side is not forced to buffer arbitrary amounts of data. In other words, back pressure is an integral part of this model in order to allow the queues which mediate between threads to be bounded.” Though Kafka is a perfect fit for this, we are going with RabbitMQ here. No doubt many would debate the use of RabbitMQ; however, different application requirements call for different tactics. The reason for using RabbitMQ instead of Kafka here is to avoid the hassle of setting up Kafka with ZooKeeper. For a small application with less load, using Kafka and ZooKeeper feels like overkill; I know you may debate it, but that is how I and many others feel.

We have talked about reactive streams, akka streams and a bit of Kafka too, but let’s talk a bit about RabbitMQ too before we go to the implementation part. Well, RabbitMQ is a lightweight, easy to deploy messaging service. It supports multiple messaging protocols. It is the most widely deployed open source message broker. RabbitMQ can be deployed in distributed and federated configurations to meet high-scale, high-availability requirements.

Without further delay, let us focus on implementing Akka Streams with RabbitMQ. For the integration we are going to use an open-source library called op-rabbit. It is a high-level, type-safe, opinionated, composable, fault-tolerant library for interacting with RabbitMQ, with features such as recovery, modularity, reliability and graceful shutdown. As mentioned earlier, combining an akka-stream RabbitMQ consumer and publisher allows for guaranteed at-least-once message delivery from head to tail. In other words, we don’t acknowledge the original message from the message queue until any and all side-effect events have been published to other queues and persisted. We can do so with the following code snippet:

import akka.actor.{ActorSystem, Props}
import akka.stream.ActorMaterializer
import com.spingo.op_rabbit.RabbitControl

object OpRabbitController {
  implicit val actorSystem = ActorSystem("such-system")
  implicit val materializer = ActorMaterializer()
  val rabbitControl = actorSystem.actorOf(Props[RabbitControl])
}

import com.spingo.op_rabbit._
import com.spingo.op_rabbit.PlayJsonSupport._
import com.spingo.op_rabbit.stream._
import com.timcharper.acked.AckedSource
import play.api.libs.json.{Format, Json}

case class Work(id: String)

object OpRabbitProducer {
  import OpRabbitController._
  implicit val workFormat: Format[Work] = Json.format[Work]

  // Publish the numbers 1 to 15 to "queueName"; each upstream element is
  // acknowledged only once RabbitMQ confirms the publish
  AckedSource(1 to 15)
    .map(Message.queue(_, "queueName"))
    .runWith(MessagePublisherSink(rabbitControl))
}

This example is similar to the one given in the library’s documentation.

Basically, this piece of code describes how we can produce messages and send them to the RabbitMQ broker so that they can be consumed by a consumer at the other end. Similarly, in order to consume messages we can do the following:

import com.spingo.op_rabbit._
import com.spingo.op_rabbit.Directives._
import com.spingo.op_rabbit.PlayJsonSupport._
import com.spingo.op_rabbit.stream._
import play.api.libs.json.{Format, Json}

case class Person(id: String)

object OpRabbitConsumer {
  import OpRabbitController._
  implicit val personFormat: Format[Person] = Json.format[Person]
  implicit val recoveryStrategy = RecoveryStrategy.drop()

  // Consume Person messages from a queue (the queue name is illustrative)
  RabbitSource(
    rabbitControl,
    channel(qos = 3),
    consume(queue("such-queue",
      durable = true,
      exclusive = false,
      autoDelete = false)),
    body(as[Person])
  ).runForeach { person =>
    greet(person)
  }

  def greet(person: Person): Unit = println(s"Hello, ${person.id}!")
}

Finally, we can see how easy it is to integrate the two. In order to use RabbitMQ with Akka Streams, don’t forget to install RabbitMQ on your system; you can do so by following this link. For more detail on op-rabbit, you can visit their GitHub repository.



Finatra-swagger: Making the api documentation awesome and easy

Apart from speed, high performance, fault tolerance and concurrency, one more important feature for an API is clean, precise, interactive and user-friendly documentation. Documentation plays a very important role in a REST API, and interactive documentation makes an API easier to understand. Finatra is a framework for fast, testable Scala services built on TwitterServer and Finagle, while Swagger is a well-known API documentation framework. In this blog we are going to discuss how to integrate Swagger UI (API documentation) with Finatra to achieve awesome and easy documentation with high performance and concurrency.

Getting started:

There are many libraries available for Swagger support in Finatra. For this example we are using finatra-swagger.

Let’s start implementing finatra-swagger sample application:

Step 1: Dependencies:

Here are the dependencies for finatra-swagger sample:

Scala: 2.12.0
Finatra-http: 2.10.0
Finatra-swagger: 2.9.0

Step 2: Build.sbt:

Here is the build.sbt for this sample project:

name := "finatra-swagger-sample"

version := "1.0"

scalaVersion := "2.12.0"

// Resolver URLs restored from the standard Twitter / Sonatype repositories
resolvers += "Twitter Maven" at "https://maven.twttr.com"
resolvers += "Sonatype OSS Snapshots" at "https://oss.sonatype.org/content/repositories/snapshots"

libraryDependencies += "com.twitter" %% "finatra-http" % "2.10.0" % "compile"
libraryDependencies += "com.jakehschwartz" % "finatra-swagger_2.12" % "2.9.0" % "compile"

libraryDependencies += "org.scalatest" %% "scalatest" % "3.0.3" % "test"

Step 3: Create a subclass of a SwaggerModule:

First of all, we need to create an object for the Swagger module to define basic information about the API.
Here we create an object SampleSwaggerModule extending SwaggerModule, which is defined in the finatra-swagger library.
We can also define the API root information inside the Swagger object using the Info object; this basic information can include the description, version and title of the API. You will see this information at the top of the Swagger document after running the application.
The security definition is used to add basic authentication to the Swagger documentation if the API requires it.

object SampleSwaggerModule extends SwaggerModule {
  val swaggerUI = new Swagger()

  swaggerUI.info(new Info()
    .description("The Knoldus / Knolder management API, this is a sample for swagger document generation")
    .version("1.0")
    .title("Knoldus / Knolder Management API"))

  swaggerUI.addSecurityDefinition("sampleBasic", {
    val d = new BasicAuthDefinition()
    d.setType("basic")
    d
  })
}

Step 4: Add DocsController into server:

DocsController is a class defined in the finatra-swagger library which acts as glue between the Finatra server and Swagger.
If you have used Finatra earlier, you must be familiar with the Finatra HttpServer and how to configure HTTP routes inside Finatra servers.
Here we are adding DocsController along with SampleController; SampleController includes the actual application routes with their Swagger-related descriptions.

object SampleApp extends HttpServer {

  override protected def modules = Seq(SampleSwaggerModule)

  override def configureHttp(router: HttpRouter) {
    router
      .add[DocsController]
      .add[SampleController]
  }
}

Step 5: Configure the endpoints using the SwaggerRouteDSL:

Finally, we have to configure the endpoints using the SwaggerRouteDSL. To do that we extend SwaggerController, which is defined in the finatra-swagger library, and inject Swagger as a dependency.
To define a route, instead of get we use the getWithDoc method, passing route information such as the route summary, tag, parameter types and response types.

class SampleController @Inject()(s: Swagger) extends SwaggerController {
  implicit protected val swagger = s

  getWithDoc("/knolder/:id") { doc =>
    doc.summary("Read the detail information about the knolder")
      .routeParam[String]("id", "the knolder id")
      .responseWith[Knolder](200, "the knolder details")
      .responseWith(404, "the knolder not found")
  } { request: Request =>
    val knolderId: Int = request.getParam("id").toInt
    Knolder(Some(knolderId), "girish", "Consultant")
  }
}

All the options for HTTP calls, like get, post, delete and put, are available. You can find a full HTTP CRUD application sample in the Git repo:


Step 6: Running the application:

After finishing the above steps we are ready to run the application and play with the Swagger document.

A. Clone git repo:

To run the app you can clone the git repo using the following command:

git clone

B. Running application:

You can run the application using the following command:

sbt run

C. Hitting REST endpoints in the browser:

Once the application is running you can go to the browser and hit the URL:


After hitting the endpoint you will see a sample response in the browser.

D. Analyzing swagger document:

Once the routes are working as expected you can find the Swagger document by navigating to the URL:


Here is a screen shot of swagger doc generated for this sample application:


To explore more, please go through the application here.





Class and Object Keywords in Scala Programming

Scala is a hybrid language containing both functional and object-oriented features. Scala was created by Martin Odersky.

Scala runs on the JVM, and you can also use Java APIs within a Scala program.

You can write a Scala program with two keywords: 1. class and 2. object.


class Car {
  def run() {
    println("Car is running")  // body not shown in the original; illustrative
  }
}

object App {
  def main(args: Array[String]) {
    println("hello World")
    val car = new Car()
    car.run()
  }
}


We know a class is a blueprint for objects, so what is the object keyword in Scala?

A Scala class is much like a Java class, but Scala does not give you an entry-point method in a class, like the main method in Java. The main method is instead associated with the object keyword. You can think of the object keyword as creating a singleton instance of a class that is defined implicitly.
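To make that singleton behaviour concrete, here is a small self-contained sketch; the Counter and GlobalCounter names are just illustrations:

```scala
// A regular class: every `new` produces a distinct instance
class Counter {
  private var n = 0
  def increment(): Int = { n += 1; n }
}

// `object` defines a class and its single, lazily created instance in one go
object GlobalCounter {
  private var n = 0
  def increment(): Int = { n += 1; n }
}

object Demo {
  def main(args: Array[String]): Unit = {
    val a = new Counter()
    val b = new Counter()
    println(a eq b)                         // false: two separate instances
    println(GlobalCounter eq GlobalCounter) // true: always the same instance
    println(GlobalCounter.increment())      // 1
    println(GlobalCounter.increment())      // 2: state lives in the one instance
  }
}
```

This is also why main lives in an object: the JVM needs a static-like entry point, and the single instance provides one without you ever writing new.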


Integrating Kafka With Spark Structure Streaming

Kafka is a message-broker system which facilitates the passing of messages between producers and consumers, whereas Spark Structured Streaming consumes static and streaming data from various sources (like Kafka, Flume, Twitter or any other socket), processes and analyses it using high-level algorithms (for example, machine learning), and finally pushes the results out to an external storage system. The main advantage of Structured Streaming is that the result is continuously and incrementally updated as the streaming data continues to arrive.

Kafka has its own streams library, which is best suited for transforming one Kafka topic into another, whereas Spark streaming can be integrated with almost any type of system. For more detail you can refer to this blog.

In this blog I’ll cover an end-to-end integration of Kafka with Spark Structured Streaming, creating Kafka as the source and Spark Structured Streaming as the sink.
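Since the rest of the walkthrough lives in the full post, here is a minimal hedged sketch of the source side of such a pipeline. The application name, broker address and topic are placeholder assumptions, and the sink here is the console rather than Kafka, just to keep the sketch self-contained:

```scala
import org.apache.spark.sql.SparkSession

object KafkaSourceSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder
      .appName("kafka-structured-streaming") // placeholder name
      .master("local[*]")
      .getOrCreate()

    // Kafka as the source: broker and topic are placeholders
    val df = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "localhost:9092")
      .option("subscribe", "test-topic")
      .load()

    // Kafka records arrive as binary key/value pairs; cast the value to a string
    val values = df.selectExpr("CAST(value AS STRING)")

    // Console sink, just to watch the stream flowing; a Kafka sink would instead
    // use .format("kafka") together with a checkpoint location
    val query = values.writeStream
      .format("console")
      .start()

    query.awaitTermination()
  }
}
```

Running this needs the spark-sql-kafka connector on the classpath in addition to Spark itself, and it runs until you stop it, as streaming applications do.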


Spark Structured Streaming: A Simple Definition

“Structured Streaming” – nowadays we are hearing this term in the Apache Spark ecosystem quite a lot, as it is being preached as the next big thing in the scalable big data world. Although we all know that Structured Streaming means a stream having structured data in it, very few of us know what exactly it is and where we can use it.

So, in this blog post we will get to know Spark Structured Streaming with the help of a simple example. But before we begin with the example, let’s get to know it first.

Structured Streaming is a scalable and fault-tolerant stream processing engine built upon the strong foundation of Spark SQL. It leverages Spark SQL’s powerful APIs to provide a seamless query interface which allows us to express our streaming computation in the same way we would express a SQL query over our batch data. Also, it optimizes the execution of our streaming computation to provide low-latency and continually updated answers.

Now that we have defined Spark Structured Streaming, let’s see an example of it.

In this example we will compute the famous word count, but with a time-based window on it. To compute the word count in a particular time window, we will tag each line received from the network with a timestamp that will help us determine the window it falls into. So, let’s start coding:

First we have to import the necessary classes and create a local SparkSession, the starting point of all functionalities in Spark.

import java.sql.Timestamp
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

val spark = SparkSession
  .builder
  .appName("StructuredNetworkWordCount")
  .master("local[*]")
  .getOrCreate()

import spark.implicits._

Now, we have to create a streaming DataFrame that represents text data received from a network socket listening on localhost:9999, and transform the DataFrame to compute the word count over a window of 10 seconds which slides every 5 seconds. Also, we have to tag each line we receive from the network with a timestamp.

val lines = spark.readStream
  .format("socket")
  .option("host", "localhost")
  .option("port", 9999)
  .option("includeTimestamp", true)
  .load()

val words = lines.as[(String, Timestamp)].flatMap(line =>
  line._1.split(" ").map(word => (word, line._2))
).toDF("word", "timestamp")

val windowedCounts = words.groupBy(
  window($"timestamp", "10 seconds", "5 seconds"), $"word"
).count()

lines DataFrame represents an unbounded table containing the streaming text data. Here each line, in the streaming text data, is a row in the table. Next, we have converted the DataFrame to a Dataset of (String, Timestamp) using .as[(String, Timestamp)], so that we can apply the flatMap operation to split each line into multiple words and tag each word with its timestamp. Finally, we have defined the windowedCounts DataFrame by grouping by the unique values in the DataFrame and aggregating them on the basis of their timestamp. Note that this is a streaming DataFrame which represents the running windowed word counts of the stream.
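To see what that sliding window actually does to a single event, here is a tiny pure-Scala illustration (not the Spark API; the function name is ours) of how a 10-second window sliding every 5 seconds assigns an event timestamp to windows:

```scala
// Returns the [start, end) windows (in seconds) that contain a given event time,
// for a sliding window of `length` seconds advancing every `slide` seconds.
def windowsFor(tsSeconds: Long, length: Long = 10, slide: Long = 5): Seq[(Long, Long)] = {
  val lastStart = tsSeconds - (tsSeconds % slide) // latest window start containing ts
  ((lastStart - length + slide) to lastStart by slide)
    .filter(start => start >= 0 && tsSeconds < start + length)
    .map(start => (start, start + length))
}

// An event at t = 12s falls into the two overlapping windows (5,15) and (10,20),
// so its word is counted in both.
println(windowsFor(12))
```

This is why a single word can contribute to more than one row of the windowed counts: each event belongs to length / slide overlapping windows.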

We now have to set up a query on the streaming data, i.e., specify a sink for it, so that we can actually start receiving data; without a sink a stream cannot work. For this, we set it up to print the complete set of counts (specified by outputMode("complete")) to the console every time they are updated, and then start the streaming computation using start().

val query = windowedCounts.writeStream
  .outputMode("complete")
  .format("console")
  .option("truncate", "false")
  .start()

query.awaitTermination()


Now, we are ready to run our example. But, before that we have to run Netcat as a data server to send some data.

$ nc -lk 9999
got it

Now, when we run the example in a different terminal, we get the following output.


Here we can see that we are getting word-count computed over a window of 10 seconds which is sliding after every 5 seconds.

It’s that simple, isn’t it? I mean, we just created a streaming application, although for a very naive use case (word count 😛 ), but at least we got an idea about Spark Structured Streaming.

However, this is not the end, it’s just the beginning. We will come back with more posts on Spark Structured Streaming where you will get to know it better. So, stay tuned 🙂



Location Strategy – Routing in Angular 2

Angular 2’s router is super easy to use. Angular 2 gives you the possibility of dividing your application into several views that you can navigate between through the concept of routing. Routing enables you to route the user to different components based on the url that they type on the browser, or that you direct them to through a link. This post will cover standard routing, route parameters and nested child routes in Angular2. With these basics we can build a great navigation experience for users.

Configuration and Declaring Routes

A routed Angular application has one singleton instance of the Router service. When the browser’s URL changes, that router looks for a corresponding Route from which it can determine the component to display.

A router has no routes until you configure it. The following example creates four route definitions, configures the router via the RouterModule.forRoot method, and adds the result to the AppModule‘s imports array.

The first thing that we need to do when setting up routes for any project is to define the routing table. We will start out with a basic home route that maps to the HomeComponent and then add another route to redirect the root path to our home route.

const homeRoutes: Routes = [
  { path: '', redirectTo: 'home', pathMatch: 'full' },
  { path: 'home', component: HomeComponent },
  { path: 'service', component: ServiceComponent }
];
Our route config defines all the routes in our application. The first route is the default redirect, the second is our home route, and the third maps to our ServiceComponent. The path value is the path that we referenced in our template. We export our routes to be added to our App Module.

import { NgModule } from '@angular/core';
import { BrowserModule } from '@angular/platform-browser';
import { RouterModule } from '@angular/router';

import { routes } from './app.routes';
import { AppComponent } from './app.component';
import { ServiceComponent } from './about.component';
import { HomeComponent } from './home.component';

@NgModule({
  imports: [
    BrowserModule,
    RouterModule.forRoot(routes)
  ],
  declarations: [
    AppComponent,
    ServiceComponent,
    HomeComponent
  ],
  bootstrap: [ AppComponent ]
})
export class AppModule { }

The homeRoutes array of routes describes how to navigate. Pass it to the RouterModule.forRoot method in the module imports to configure the router.

Router outlet

The router outlet acts as a placeholder that Angular dynamically fills based on the current router state. Given this configuration, when the browser URL for this application becomes /home, the router matches that URL to the route path /home and displays the HomeComponent in the RouterOutlet that you’ve placed in the host view’s HTML.

<router-outlet></router-outlet>
<!-- Routed views go here -->

Each Route maps a URL path to a component. There are no leading slashes in the path. The router parses and builds the final URL for you, allowing you to use both relative and absolute paths when navigating between application views.

Nested Child Routes

So we have the following routes: / and /service. Maybe our service page is extensive and there are a couple of different views we would like to display as well. The URLs would look something like /service and /service/item. The first route would be the default service page, while the item route would offer another view with more details.
