Studio-Scala

The Beginner’s Guide to Easy Parallel Programming With Scala Futures

Reading Time: 3 minutes Introduction Scala builds a Futures API into the language, making parallel programming much easier than working with threads, locks, and callbacks. The purpose of this blog post is to provide an overview of Scala's Futures: how they work, how you can use them, and how you can leverage them for parallelism in your code. Creating Futures We have imported under the name… Continue Reading
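For a taste of the API, here is a minimal sketch (the values and timings are illustrative) of running two computations in parallel and combining their results:

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.duration._
import scala.concurrent.ExecutionContext.Implicits.global

object FuturesDemo extends App {
  // Two independent computations run in parallel on the global execution context
  val fastSum: Future[Int] = Future { (1 to 1000).sum }
  val slowSum: Future[Int] = Future { Thread.sleep(100); (1 to 100).sum }

  // Combine the results without blocking, using a for comprehension over Futures
  val total: Future[Int] = for {
    a <- fastSum
    b <- slowSum
  } yield a + b

  // Block only at the very edge of the program, e.g. in a demo or a test
  println(Await.result(total, 2.seconds))
}
```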

The Best Code Migration Tool for Scala Explained

Reading Time: 2 minutes Introduction With Scalafix, you can focus on the changes that truly merit your time and attention rather than dealing with easy, repetitive, and tedious code transformations. Scalafix is the best code migration tool for Scala: it transforms unsupported features in source files into newer alternatives and writes the result back to the original source file. Scalafix may not be able to perform automatic… Continue Reading
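As a rough sketch of the workflow, Scalafix plugs into sbt and then runs named rewrite rules over the sources (the plugin version below is an assumption; check the Scalafix documentation for the current release):

```scala
// project/plugins.sbt: wire the Scalafix plugin into the build
// (the version number here is illustrative, not necessarily current)
addSbtPlugin("ch.epfl.scala" % "sbt-scalafix" % "0.10.4")

// Then, from the sbt shell, run a built-in syntactic rule, for example:
//   scalafix ProcedureSyntax
// which rewrites deprecated procedure syntax such as
//   def run() { println("hi") }
// into the newer, supported form and writes it back to the source file:
//   def run(): Unit = { println("hi") }
```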

Great Start With the REST API

Reading Time: 2 minutes REST APIs give us lightweight, flexible ways to integrate applications, and they have emerged as the most straightforward method for connecting components in microservices architectures. What is a REST API? API stands for application programming interface. An API is a protocol that defines how applications or devices can connect to and communicate with each other. A REST API is an API that conforms to the design principles of the REST architectural style. REST… Continue Reading
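To make the idea concrete, here is a minimal sketch of calling a REST endpoint from Scala using the JDK 11+ HTTP client; the URL is a hypothetical placeholder:

```scala
import java.net.URI
import java.net.http.{HttpClient, HttpRequest, HttpResponse}

object RestClientDemo extends App {
  // A reusable HTTP client from the Java standard library (JDK 11+)
  val client = HttpClient.newHttpClient()

  // A simple GET request; api.example.com is a placeholder, not a real service
  val request = HttpRequest.newBuilder()
    .uri(URI.create("https://api.example.com/users/42"))
    .header("Accept", "application/json")
    .GET()
    .build()

  // Send the request and read the response body as a string
  val response = client.send(request, HttpResponse.BodyHandlers.ofString())
  println(s"Status: ${response.statusCode()}")
  println(s"Body: ${response.body()}")
}
```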

Receivers in Apache Spark Streaming

Reading Time: 2 minutes Receivers are special objects in Spark Streaming. A receiver's goal is to consume data from data sources and move it into Spark. The streaming context runs receivers as long-running tasks on different executors. We can build receivers by extending the abstract class Receiver. To start or stop the receiver there are two methods: onStart(), which contains all the important setup such as opening connections, creating threads, … Continue Reading
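A minimal sketch of a custom receiver, assuming a dummy in-memory source rather than a real connection:

```scala
import org.apache.spark.storage.StorageLevel
import org.apache.spark.streaming.receiver.Receiver

// A toy receiver that generates records on its own thread; a real receiver
// would open a socket or client connection in onStart() instead
class DummyReceiver extends Receiver[String](StorageLevel.MEMORY_AND_DISK_2) {

  // onStart(): do the setup (connections, threads) and return immediately
  def onStart(): Unit = {
    new Thread("Dummy Receiver") {
      override def run(): Unit = receive()
    }.start()
  }

  // onStop(): release resources; the receive loop also checks isStopped()
  def onStop(): Unit = ()

  private def receive(): Unit = {
    var i = 0
    while (!isStopped()) {
      store(s"record-$i") // hand the data over to Spark for storage
      i += 1
      Thread.sleep(100)
    }
  }
}
```

Such a receiver would be attached to a stream with ssc.receiverStream(new DummyReceiver), where ssc is the StreamingContext.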

Cache and Persist in Apache Spark Dataframe

Reading Time: 2 minutes Spark computations are faster than MapReduce jobs. If we haven't designed our jobs to reuse computations, performance will degrade when we process billions or trillions of records. Hence, we may need to look at the stages and use optimization techniques to improve performance. The cache() and persist() methods provide an optimization mechanism for storing the intermediate computation of a Spark DataFrame. So we… Continue Reading
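A brief sketch of the two methods in action (the DataFrame here is synthetic):

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.storage.StorageLevel

object CacheDemo extends App {
  val spark = SparkSession.builder()
    .appName("cache-vs-persist")
    .master("local[*]")
    .getOrCreate()

  val df = spark.range(1, 1000000).toDF("id")

  // cache() uses the default storage level (MEMORY_AND_DISK for Datasets)
  val evens = df.filter("id % 2 = 0").cache()
  evens.count() // the first action materializes the cached data

  // persist() lets us choose the storage level explicitly
  val persisted = df.persist(StorageLevel.MEMORY_AND_DISK)
  persisted.count()

  // release the storage once the intermediate results are no longer reused
  evens.unpersist()
  persisted.unpersist()
  spark.stop()
}
```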

How data flows in a React-Redux app

Reading Time: 4 minutes Introduction Today React is one of the most popular JavaScript libraries because it is so simple to build web applications with it, but in a big React project we also have to use Redux to manage our application's state. Getting started with React-Redux can be challenging and painful for beginners, and the trickiest thing in React-Redux… Continue Reading

Introduction to zio-json | Encoding and Decoding

Reading Time: 2 minutes In this blog, we discuss how encoding and decoding of data are performed in ZIO using zio-json. According to the ZIO documentation, ZIO is a library for asynchronous and concurrent programming that promotes pure functional programming. At the core of ZIO is the ZIO effect type, inspired by Haskell's IO monad. The ZIO[R, E, A] data type has three type parameters: R, the environment type: the effect requires an environment… Continue Reading
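As a small sketch of what the zio-json part looks like in practice (the Person type is illustrative):

```scala
import zio.json._

// A sample domain type; the codecs are derived at compile time
case class Person(name: String, age: Int)

object Person {
  implicit val decoder: JsonDecoder[Person] = DeriveJsonDecoder.gen[Person]
  implicit val encoder: JsonEncoder[Person] = DeriveJsonEncoder.gen[Person]
}

object ZioJsonDemo extends App {
  // Encoding: toJson renders the value as a JSON string
  val json: String = Person("Alice", 30).toJson // {"name":"Alice","age":30}

  // Decoding: fromJson returns Either[String, Person], with the error on the Left
  val decoded: Either[String, Person] =
    """{"name":"Alice","age":30}""".fromJson[Person]

  println(json)
  println(decoded)
}
```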

Environment Variables in Angular, Part 1

Reading Time: 3 minutes Introduction Environment variables are variables whose values change according to the environment we are in. They help us change some behavior of the app based on the environment. As we all know, an application goes through several stages before reaching production, namely development, testing, staging, and production. We call these stages environments. And… Continue Reading

AWS Fargate and How to Create a Task Definition, Cluster, and Service

Reading Time: 5 minutes Introduction Hello readers, I'll be covering the details of AWS Fargate, a compute engine for Amazon Elastic Container Service, but before that let's take a look at the topics we'll be discussing in this blog. First we'll discuss Fargate, the need for Fargate, and how it works, then a few key concepts that you must know when dealing with Fargate, and finally we'll see a demo of how to… Continue Reading

Apache Beam: Introduction

Reading Time: 3 minutes Apache Beam is a unified programming model that handles both stream and batch data in the same way. We can create a Beam pipeline in any of the Beam SDKs (Python, Java, or Go), and it can run on top of any supported execution engine, namely Apache Spark, Apache Flink, Apache Apex, Apache Samza, Apache Gearpump, and Google Cloud Dataflow (with more to join in the future). Continue Reading

Spring Cloud GCP for Pub/Sub

Reading Time: 3 minutes This is the first blog in the Spring Cloud GCP: Cloud Pub/Sub series. In this blog, we will create an application that receives messages sent by a sender application. We will use Google Cloud Pub/Sub as the underlying messaging system, and we will use Spring to integrate it with our application. As part of this blog, we have to download the basic… Continue Reading

Make better decisions with Google Cloud Document AI

Reading Time: 3 minutes Nearly all business processes today begin with, include, or end with a document. Most companies are sitting on a document goldmine: PDFs, emails, customer feedback, patents, contracts, technical documents, sensitive documents, HR files, and the list goes on. These documents are only going to grow with time. Making sense of each document is difficult since a lot of these documents are… Continue Reading

Smart Searching Through Trillions of Research Papers with Apache Spark ML

Great start with filter, map, flatMap, and for comprehension

Reading Time: 2 minutes Scala has a very rich collections library. Collections are like containers that hold a linear set of values, and we apply operations like filter, map, flatMap, and for comprehensions to the collections to produce new collections. filter selects all the elements of a collection that satisfy a predicate. Params: p, the predicate used to test elements. Returns: a new collection consisting of… Continue Reading
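A quick sketch of all four operations on a small list (the values are illustrative):

```scala
object CollectionsDemo extends App {
  val numbers = List(1, 2, 3, 4, 5)

  // filter: keep the elements that satisfy the predicate p
  val evens = numbers.filter(n => n % 2 == 0) // List(2, 4)

  // map: transform every element, producing a new collection
  val squares = numbers.map(n => n * n) // List(1, 4, 9, 16, 25)

  // flatMap: map each element to a collection, then flatten the results
  val withNegatives = numbers.flatMap(n => List(n, -n)) // List(1, -1, 2, -2, ...)

  // for comprehension: syntactic sugar over map, flatMap, and filter
  val evenSquares = for {
    n <- numbers
    if n % 2 == 0
  } yield n * n // List(4, 16)

  println(evens); println(squares); println(withNegatives); println(evenSquares)
}
```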