A Quick start to GraphQL and Sangria


GraphQL

GraphQL is a data query language developed by Facebook in 2012 and publicly released in 2015. It is a query language for your API, and a server-side runtime for executing queries using a type system you define for your data.

It gives us an alternative to REST and ad-hoc web-service architectures. It allows clients to define the structure of the data they require, and exactly that structure is returned from the server. Because it is a strongly typed runtime that lets clients specify precisely what data they need, it avoids both over-fetching and under-fetching of data.
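As a quick sketch against a hypothetical schema, the client asks only for the fields it needs, and the response mirrors that shape exactly:

```graphql
# Hypothetical schema: the client requests only name and email
query {
  user(id: "1") {
    name
    email
  }
}
# The server's JSON response mirrors this shape exactly, nothing more:
# { "data": { "user": { "name": "...", "email": "..." } } }
```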
Continue reading

Posted in Scala | Leave a comment

Future vs. CompletableFuture – #1


This is Part 1 of Future vs. CompletableFuture. In this blog we will compare Java 5’s Future with Java 8’s CompletableFuture on the basis of two categories: manual completion and attaching a callback.

What is CompletableFuture?

CompletableFuture is used for asynchronous programming in Java. Asynchronous programming is a means of writing non-blocking code by running a task on a thread separate from the main application thread and notifying the main thread of its progress, completion, or failure.

CompletableFuture implements two interfaces:

  1. Future
  2. CompletionStage
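A minimal sketch of the two categories compared in this post, using only standard JDK APIs (the class and variable names are illustrative):

```java
import java.util.concurrent.CompletableFuture;

public class CompletableFutureDemo {
    public static void main(String[] args) {
        // Manual completion: we complete the future ourselves, no task involved
        CompletableFuture<String> manual = new CompletableFuture<>();
        manual.complete("completed manually");
        System.out.println(manual.join()); // prints "completed manually"

        // Attaching a callback: thenApply transforms the result when the
        // asynchronous task finishes, without blocking the main thread
        CompletableFuture<Integer> async = CompletableFuture
                .supplyAsync(() -> 21)
                .thenApply(n -> n * 2);
        System.out.println(async.join()); // prints 42
    }
}
```

Note that `join()` behaves like `get()` but throws an unchecked exception, which keeps the example free of checked-exception handling.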

Continue reading

Posted in Future, Java | Tagged , , | Leave a comment

Jenkins for Continuous Integration


Jenkins is hardly a new term for most of us. It is a continuous integration/continuous deployment server. Before starting off with Jenkins, let’s first understand what Continuous Integration is.

Continuous Integration (CI) is a development practice that requires developers to integrate code into a shared repository several times a day. Each check-in is then verified by an automated build, allowing teams to detect problems early.

Continue reading

Posted in Scala | 1 Comment

Easy data purge in Cassandra


Data purge is key to ensuring your database servers always have enough free space (ideally) to store the ever-incoming business data. The operational/transactional database needs to be cleaned of old data that is no longer necessary as per your business rules.

Cassandra, one of the most popular databases on the planet, offers a really simple approach to support your data-purge needs: an expiration time for each record that goes in. The technique is known as TTL (time to live).

Once the expiration time is reached, i.e. the time to live has ended, the record is automatically marked for deletion by placing what is known as a tombstone on it. Continue reading
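As a sketch in CQL (the keyspace, table, and column names here are hypothetical), the TTL is set per write:

```sql
-- Keep this row for 24 hours (86400 seconds); after that it is tombstoned
INSERT INTO sensor.readings (id, value) VALUES (1, 42.0) USING TTL 86400;

-- The remaining time to live of a column can be inspected
SELECT TTL(value) FROM sensor.readings WHERE id = 1;
```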

Posted in Scala | Leave a comment

APIGEE-EDGE: Playing With The Policies (Part 1)



Hi all,

In one of my previous blogs, I discussed Apigee: what it is and why it is required. We also discussed its various features and benefits, saw how to deploy a proxy through code rather than the Apigee UI deployment feature, and more. If you like, you can go through the basics of Apigee here: Getting Started with Apigee, just to refresh your knowledge of Apigee Edge.

Moving forward, we’ll be focusing on some of the policies provided by Apigee. The policies we’ll focus on are:

(1) Extract Variables Policy.

(2) Spike Arrest Policy.

(3) Response Cache Policy.

So before we start with the policies, let’s go through a few basic concepts in the form of questions and answers. 🙂

Ques1: What is a Policy in Apigee?

Ans: A policy in Apigee is like a module that implements a specific, limited management function. Policies are designed to let you add common types of management capabilities to an API easily and reliably, providing features like security, rate limiting, transformation, and mediation. Apigee Edge enables you to ‘program’ API behavior without writing any code by using policies, saving you from having to build and maintain this functionality on your own.

You’re not limited to the set of policy types provided by Apigee Edge. You can also write custom scripts and code (such as JavaScript and Node.js applications) that extend API proxy functionality.
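To give a feel for what a policy looks like, here is a minimal Spike Arrest configuration based on the standard policy shape (the policy name and rate value are illustrative); it protects your backend by capping the rate at which requests get through:

```xml
<!-- Allow roughly 10 requests per second; requests beyond the rate are rejected -->
<SpikeArrest name="Spike-Arrest-1">
    <Rate>10ps</Rate>
</SpikeArrest>
```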

Continue reading

Posted in Devops, Scala | Tagged , , , , , , , | Leave a comment

UDTs In Cassandra – Simplified!!


In most programming languages, such as Scala or Java, we can play with object constructs, i.e. we can create our own classes and create instances of them. A similar construct is provided by Cassandra, known as a UDT, which stands for User Defined Type.

User-defined types (UDTs) can attach multiple data fields, each named and typed, to a single column. The fields used to create a user-defined type may be of any valid datatype, including collections or other UDTs. Once a UDT is created, it can be used to define a column in a table.

Syntax to define a UDT :-

CREATE TYPE student.basic_info (
  first_name text,
  last_name text, 
  nationality text,
  roll_no int,
  address text
);

Here, student is the keyspace name and basic_info is the type we are creating; it contains five fields (first_name, last_name, nationality, roll_no, address), each with its own type. Now we can use the basic_info type to define a column in a table. In simple words, we use a UDT to store a value as an object in Cassandra that contains some fields within itself.

How to use a UDT column in a table :-

CREATE TABLE student.student_stats 
( id int PRIMARY KEY, grade text, basics FROZEN<basic_info>); 

So, as you can see in the above section, we declared a table named student_stats with three columns (id, grade, basics), each with its datatype. The last column is the UDT itself, because the datatype of basics is the type we declared, i.e. basic_info.

How to insert data into a table with a UDT :-

INSERT INTO student.student_stats (id, grade, basics) VALUES (1,'SIXTH',{first_name: 'Kunal', last_name: 'sethi', nationality: 'Indian',roll_no: 101, address: 'Noida'});

This is how we insert data into a UDT column. Notice that we pass key-value pairs to specify the value of each field.


One more thing about the insert statement: in Cassandra, we can also insert the data in JSON format, e.g.:

INSERT INTO student.student_stats JSON '{"id":3, "basics":{"first_name":"Abhinav", "last_name":"Sinha", "nationality":"Indian", "roll_no":103, "address":"Delhi"}, "grade":"Tenth"}';

Let’s take another case, in which we do not insert one of the field values of the UDT. The question then is: will the record be inserted or not?

INSERT INTO student.student_stats (id, grade, basics) VALUES (2,'SIXTH', {first_name: 'Anshul', last_name: 'shivhare', nationality: 'Indian',roll_no: 102});

In this insert command, we are not passing a value for the address field, so the question arises: how will Cassandra handle this? The answer is that it will insert the record as normal, but take the address field value as null. Any field value (except the primary key) that we do not pass at insertion time is taken as null by Cassandra.

If you query the row back, you can see the address field value is null.
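Reading the row back confirms the behaviour (a sketch of the cqlsh output):

```sql
SELECT id, grade, basics FROM student.student_stats WHERE id = 2;

-- cqlsh shows the UDT with the missing field as null, roughly:
-- {first_name: 'Anshul', last_name: 'shivhare', nationality: 'Indian', roll_no: 102, address: null}
```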

Now, let’s go through one more example, where we will see how to fetch data from a UDT field using Java:

import java.util.ArrayList;
import java.util.List;

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.UDTValue;

public class ReadData {

    public static void main(String[] args) {
        // Query to fetch all rows, including the UDT column
        String query = "SELECT * FROM student_stats";

        // Creating the Cluster object and connecting to the keyspace
        Cluster cluster = Cluster.builder()
                .addContactPoint("127.0.0.1")
                .withPort(9042)
                .build();
        Session session = cluster.connect("student");

        List<UDTValue> udtValList = new ArrayList<>();
        List<BasicInfo> basicInfoList = new ArrayList<>();
        ResultSet result = session.execute(query);

        // Extract the UDT column "basics" from each row
        result.forEach(row -> udtValList.add(row.get("basics", UDTValue.class)));

        // Map each UDTValue to a BasicInfo, substituting defaults for null fields
        udtValList.forEach(val -> basicInfoList.add(
                BasicInfo.builder()
                        .firstName(val.get(0, String.class) != null ? val.get(0, String.class) : "")
                        .lastName(val.get(1, String.class) != null ? val.get(1, String.class) : "")
                        .nationality(val.get(2, String.class) != null ? val.get(2, String.class) : "")
                        .rollNo(val.get(3, Integer.class) != null ? val.get(3, Integer.class) : 0)
                        .address(val.get(4, String.class) != null ? val.get(4, String.class) : "")
                        .build()));

        basicInfoList.forEach(val -> {
            System.out.println("_______________________________________________");
            System.out.println("FirstName :- " + val.getFirstName());
            System.out.println("LastName :- " + val.getLastName());
            System.out.println("Nationality :- " + val.getNationality());
            System.out.println("Roll Number :- " + val.getRollNo());
            System.out.println("Address :- " + val.getAddress());
            System.out.println("_______________________________________________");
        });

        session.close();
        cluster.close();
    }
}

In the result object we get a ResultSet, which we iterate over with forEach; in each iteration we get one row, extract the UDT column basics from it, and read that value as a UDTValue object.

UDTValue stores the fields sequentially, in the order they appear in the UDT definition itself. To retrieve a value from the UDTValue object, we just need to give the index number of the corresponding field, e.g. val.get(3, Integer.class).

As you can see from the UDT definition, roll_no is the fourth field, hence we use index number 3; its type is int, so we read that particular field using Integer.class.

This is how we can get the data from UDT fields. One more thing to notice in this example: we used Lombok’s builder() method to create the objects.

Hope this blog reduces your efforts in implementing UDTs.



Posted in Scala | Leave a comment

Know a Knolder – Himani Arora!


Himani Arora.jpg

Image | Posted on by | Leave a comment

Do GIT Right – The 10 commandments!


In this blog post, I’ll be sharing with you the 10 commandments for doing GIT right. Let’s get to it!

1. The KISS principle – Keep It Simple Silly!

Commits should be small, clear, and precise. For example, two different bugs should have two different commits. If a task requires touching multiple files, break the work into small logical commits with neat titles, so that each commit contains a minimal but logical set of file changes.

commit 96471f14d7940ce5d7bfa5f60baba4b930ad3763
Author: Pankhurie Gupta
Date: Sun Jan 14 00:14:31 2018 +0530

Task-1 | Updated file2 for task-1

commit 42a181456cef548c85294cc911f2ba1710779149
Author: Pankhurie Gupta
Date: Sun Jan 14 00:13:05 2018 +0530

Task-1 | Updated file1 for task-1
Continue reading

Posted in git, git-flow, github | Leave a comment

Cassandra Writes: A Mystery?


Apache Cassandra is a free and open-source distributed NoSQL database management system designed to handle large amounts of data across many commodity servers, providing high availability with no single point of failure.

It is a peer-to-peer database: each node in the cluster constantly communicates with the others to share and receive information (node status, data ranges, and so on). There is no concept of master or slave in a Cassandra cluster; any node can act as the coordinator for a given query.

In this blog, we’ll take a look behind the scenes to see how Cassandra handles write queries. For Cassandra Basics and installation, you can refer to our earlier blog.

Writing in Cassandra

When a client performs a write operation against a Cassandra database, the data passes through several stages on the write path, starting with the immediate logging of the write and ending with the data being written to disk:

  • Logging data in the commit log
  • Writing data to the memtable
  • Flushing data from the memtable to disk
  • Storing data on disk in SSTables

If a crash occurs before the memtable is flushed to disk, the commit log is used to replay that data and rebuild the memtable; all data successfully written to disk during the flush is then removed from the commit log. SSTables are immutable and cannot be altered once data is written.

write-path.png

Continue reading

Posted in big data, Cassandra, database, Scala | Tagged , , , , , | 2 Comments

Excerpt from What The Heck is BLOCKCHAIN!!


Blockchain has the potential to change the way the world approaches big data, with enhanced security and data quality. Most people know nothing about how Blockchain technology works. For this very reason, I decided to give a gentle introduction to Blockchain technology.

Before describing Blockchain, we will first discuss the problem that Blockchain solves.

Suppose your friend, say Alex, calls you and says, “Hey, I need some money.” You reply, “Yeah, sure!! Sending $5000 to your account,” and hang up.

You then call your account manager and tell him, “Transfer the money to Alex’s account.” He replies, “Sure, sir!”

He checks the register to see whether you have enough balance; since you have plenty of money, he creates an entry in the register :-

JAN 9, 2018 2:00 PM

You------------------------------------------------Alex $5000

Continue reading

Posted in Scala | Leave a comment