Reading Time: 4 minutes Generics are the way by which we can generalise the types and functionality of functions, structures, and other constructs. Sometimes we need to perform the same operation on different types, and without generics we would have to implement the same function once for each type, which adds redundancy to our code. By using generics, we can remove that redundancy.
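As a minimal sketch of the idea (the names here are illustrative, not from the post), a single type parameter lets one function or structure replace several type-specific duplicates:

```scala
// Without generics we would need a swap for Ints and another for
// Strings; one type parameter A covers both.
def swap[A](pair: (A, A)): (A, A) = (pair._2, pair._1)

// A generic structure works the same way.
case class Box[A](value: A) {
  def map[B](f: A => B): Box[B] = Box(f(value))
}

println(swap((1, 2)))             // (2,1)
println(swap(("hello", "world"))) // (world,hello)
println(Box(21).map(_ * 2))       // Box(42)
```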
Reading Time: 4 minutes As you may know, in all versions up to PostgreSQL 10 it was not possible to create a procedure in PostgreSQL. In PostgreSQL 11, PROCEDURE was added as a new schema object, similar to FUNCTION but without a return value. Over the years many people were anxious to have this functionality, and it was finally added to PostgreSQL. Traditionally, PostgreSQL has provided all Continue Reading
Reading Time: 4 minutes Effective execution of a Program Increment is essential to reap its true benefits. In my previous blog, we saw what a Program Increment is all about and the main factors to consider while planning one. Let us now see how effectively we can use the two days. In this blog we will look at how to execute Program Increment Day 1 in detail Continue Reading
Reading Time: 3 minutes In software development, acceptance criteria are the way a client communicates their expectations to the engineering team. They also act as a list of conditions upon whose completion a software/app is marked as complete. Since acceptance criteria are an important part of software development, it becomes important to determine whether or not they are met by the software. This sub-discipline of Continue Reading
Reading Time: 4 minutes SAFe is no magic, except for Program Increment Planning. No event in SAFe is as powerful as Program Increment Planning. PI planning sets the platform and cadence for the ART. In this blog we will see how to prepare for an effective Program Increment Planning session. In the next blog we will cover the day-wise activities during the PI session. With a large audience of 100+ people working Continue Reading
Reading Time: 5 minutes In this blog we will talk about why early detection of parasitized cells in a thin blood smear is important for diagnosing malaria. Introduction Malaria is a deadly, infectious, mosquito-borne disease caused by Plasmodium parasites. These parasites are transmitted by the bites of infected female Anopheles mosquitoes. While we won’t get into the details of the disease, there are five main types of malaria. Let’s Continue Reading
Reading Time: 3 minutes It’s not uncommon to acquire resources while developing and forget to close them. For example, an InputStream is a resource that needs to be closed explicitly and hence is often overlooked. To work effectively with our resources it is important that we release them as soon as we are done using them, or they may lead to performance problems. One of the Continue Reading
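One common remedy on the JVM, shown here in Scala as an illustrative sketch (the post itself may use a different construct, such as Java's try-with-resources), is a helper that closes the resource automatically when the block exits, even if an exception is thrown:

```scala
import scala.util.Using
import java.io.ByteArrayInputStream

// Using.apply closes the InputStream for us once the block finishes,
// whether it completes normally or throws; the result is a Try.
val result = Using(new ByteArrayInputStream("hello".getBytes)) { in =>
  new String(in.readAllBytes())
}
println(result) // Success(hello)
```

This removes the easy-to-forget manual `close()` call from the caller's responsibility.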
Reading Time: 6 minutes Data is evolving in both quality and quantity in today’s enterprises, and in the past few years changes have occurred at a much faster pace. Not long ago, Big Data was considered the next big thing for digital transformation. Technologies like Hadoop and HBase made sense when batch processing of data was the norm. But things are not the same now. By the Continue Reading
Reading Time: 4 minutes Smart pointers are data structures that behave like a pointer while providing additional features such as memory management. A smart pointer keeps track of the memory it points to, and can also be used to manage other resources such as file handles and network connections.
Reading Time: 4 minutes In our previous blog on Apache Spark, we discussed a little about what Transformations & Actions are. Now we will get deeper into the topic and understand what they actually are and how they play a vital role when working with Apache Spark. What is a Spark RDD? Spark introduces the concept of an RDD (Resilient Distributed Dataset), an immutable, fault-tolerant, distributed collection of objects Continue Reading
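The key property behind this distinction is that transformations on an RDD are lazy and only an action forces computation. As a rough analogy only (plain Scala collections, not the actual Spark API), a lazy view behaves similarly:

```scala
// Spark transformations (map, filter, ...) build a lineage lazily;
// only an action (collect, count, ...) triggers evaluation.
// A Scala lazy view mimics that behaviour:
var evaluated = 0
val pipeline = (1 to 5).view.map { n => evaluated += 1; n * 2 } // "transformation": nothing runs yet
assert(evaluated == 0)      // no element has been computed so far
val result = pipeline.toList // "action": forces evaluation
println(result) // List(2, 4, 6, 8, 10)
```

In real Spark code the same shape appears as `rdd.map(...).filter(...)` followed by `collect()` or `count()`, with the added benefits of distribution and fault tolerance.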
Reading Time: 3 minutes Organizations nowadays have a lot of data, which could be customer data in S3 or unstructured data from a bunch of sensors. The promise of a Data Lake is to collect all of this data and dump it into the data lake, from which you can then get insights. You can build powerful tools with it, such as a recommendation engine, and Continue Reading
Reading Time: 3 minutes We all know the power of lazy variables in Scala programming. If you are developing an application with huge amounts of data then you must have worked with the Scala collections. Some commonly used collections are List, Seq, Vector, etc. Similarly, you must be aware of the power of streams. Streams are a very powerful tool for handling an infinite flow of data, and streams are Continue Reading
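As a small sketch of that power (using Scala 2.13's `LazyList`; older code used the now-deprecated `Stream`), an infinite stream is only evaluated as far as we actually ask:

```scala
// An infinite lazy sequence of natural numbers; only the elements
// we request are ever computed.
val naturals: LazyList[Int] = LazyList.from(1)

// Filtering an infinite stream is fine as long as we take finitely many.
val firstEvens = naturals.filter(_ % 2 == 0).take(3).toList
println(firstEvens) // List(2, 4, 6)
```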