Database

Apache Spark: Delta Lake as a Solution – Part II

Reading Time: 3 minutes Well, we have already covered the missing features in Apache Spark & the issues they cause in executing a Data Lake in Part 1. Today we will be talking about what Delta Lake is & how it provides the solution to all those problems discussed in Delta Lake as a Solution: Part 1. As we all know, Spark is just a processing engine; it doesn’t Continue Reading

Apache Spark: Delta Lake as a Solution – Part I

Reading Time: 3 minutes Today, everyone is talking about Delta Lake. Why? Ever tried to find the answer to this question? Whether yes or no, don’t worry; here in Part 1 we will be discussing the same & targeting the following questions: What features are missing from Apache Spark? What kind of issues do they cause in executing a Data Lake? Answering the above questions will definitely Continue Reading

Apache Spark: Handle Corrupt/Bad Records

Reading Time: 3 minutes Most of the time, writing ETL jobs becomes very expensive when it comes to handling corrupt records, and in such cases ETL pipelines need a good solution for handling them, because the larger the ETL pipeline is, the more complex it becomes to handle such bad records in between. Corrupt data includes: missing information, incomplete information, schema mismatch, differing formats or data types. Apache Spark: Continue Reading
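Since this excerpt is about handling corrupt records in Spark, here is a minimal, hedged sketch of the read modes Spark's DataFrame reader exposes for malformed input (PERMISSIVE, DROPMALFORMED, FAILFAST); the input path is a made-up example and is not taken from the post itself.

```scala
import org.apache.spark.sql.SparkSession

object HandleBadRecords extends App {
  val spark = SparkSession.builder()
    .appName("handle-bad-records")
    .master("local[*]")
    .getOrCreate()

  val input = "data/people.json" // hypothetical input path

  // PERMISSIVE (the default) keeps malformed rows and stores the raw text
  // in the column named by columnNameOfCorruptRecord.
  val permissive = spark.read
    .option("mode", "PERMISSIVE")
    .option("columnNameOfCorruptRecord", "_corrupt_record")
    .json(input)

  // DROPMALFORMED silently discards rows that do not match the schema.
  val dropMalformed = spark.read
    .option("mode", "DROPMALFORMED")
    .json(input)

  // FAILFAST aborts the read as soon as the first corrupt record is seen.
  val failFast = spark.read
    .option("mode", "FAILFAST")
    .json(input)

  permissive.show(truncate = false)
}
```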

Parsing database Query with Apache Calcite

Reading Time: 3 minutes Hey there, as technical people we sometimes have to write a database query that looks good, but we don’t know whether the query we wrote is syntactically correct or not. So in this blog, we parse the database query and test it with a test case using Apache Calcite. Without wasting any time, let’s discuss Apache Calcite and Continue Reading
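As a rough illustration of the idea in this post, the sketch below parses a query with Apache Calcite's SqlParser from calcite-core; the sample SQL string and table name are assumptions made up for the example.

```scala
import org.apache.calcite.sql.parser.{SqlParseException, SqlParser}

object ParseWithCalcite extends App {
  val sql = "SELECT id, name FROM employee WHERE salary > 50000"

  // SqlParser.create builds a parser with the default configuration;
  // parseQuery returns the statement as a SqlNode tree and throws
  // SqlParseException if the SQL is not syntactically valid.
  try {
    val node = SqlParser.create(sql).parseQuery()
    println(s"Parsed successfully:\n$node")
  } catch {
    case e: SqlParseException => println(s"Invalid query: ${e.getMessage}")
  }
}
```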

Parse database query with JSQL Parser

Reading Time: 3 minutes Hi guys, as we discussed in the previous blog about parsing the database query, JSQL Parser is another alternative for parsing SQL queries. You write a database query that looks good to you, but you don’t know whether the query you wrote is syntactically correct or not. In this blog, we parse the database query and test it using a test Continue Reading
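For comparison with the Calcite sketch above, here is a minimal, hedged example using JSqlParser's CCJSqlParserUtil; the sample query is hypothetical and the net.sf.jsqlparser dependency is assumed to be on the classpath.

```scala
import net.sf.jsqlparser.JSQLParserException
import net.sf.jsqlparser.parser.CCJSqlParserUtil
import net.sf.jsqlparser.statement.select.Select

object ParseWithJsqlParser extends App {
  val sql = "SELECT id, name FROM employee WHERE salary > 50000"

  // CCJSqlParserUtil.parse throws JSQLParserException for invalid SQL,
  // so a normal return already confirms the query is syntactically correct.
  try {
    CCJSqlParserUtil.parse(sql) match {
      case select: Select => println(s"Parsed SELECT: $select")
      case other          => println(s"Parsed statement: $other")
    }
  } catch {
    case e: JSQLParserException => println(s"Invalid query: ${e.getMessage}")
  }
}
```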

Couchbase Disaster Recovery

Couchbase – Enhance Database Performance

Reading Time: 5 minutes While transitioning from a relational to a NoSQL database, architects expect little or no effect on performance as the size of the data scales up. Dealing with a huge amount of data may be the USP of a database, but we still need to design things so that they run well at scale. In this blog, I’ll try to explain what Continue Reading

Database Normalization :: Part 2

Reading Time: 6 minutes Introduction Normalization helps one attain a good database design and thereby ensures the continued efficiency of the database. Normalization, which is a process for assigning attributes to entities, offers several advantages. There are 7 types of normal forms. In my previous blog, Database Normalization :: Part 1, I discussed the first four; in this blog, we will be looking into 4NF, 5NF and DKNF. Fourth Normal Continue Reading

Database Normalization :: Part 1

Reading Time: 6 minutes Introduction Normalization helps one attain a good database design and thereby ensures the continued efficiency of the database. Normalization, which is a process for assigning attributes to entities, offers several advantages. There are 7 types of normal forms. In this blog, we will be looking into the first four only; the rest I’ll be covering in Part 2 of Database Normalization. First Normal Form (1NF):- Continue Reading

Amazon EMR

Reading Time: 3 minutes Businesses worldwide are discovering the power of new big data processing and analytics frameworks like Apache Hadoop and Apache Spark, but they are also discovering some of the challenges of operating these technologies in on-premises data lake environments. They may also have concerns about the future of their current distribution vendor. Common problems of on-premises big data environments include a lack of agility, excessive costs, Continue Reading

Apache Spark: Tricks to Increase Job Performance

Reading Time: 2 minutes Apache Spark is quickly being adopted in the real world, and companies like Uber are using it in production. Spark is gaining popularity in the market as it also lets you develop streaming applications and do machine learning, which helps companies get better results in production along with proper analysis. Although companies are using Spark in Continue Reading

Spark: ACID Transaction with Delta Lake

Reading Time: 3 minutes Spark doesn’t provide some of the most essential features of a reliable data processing system, such as atomic APIs and ACID transactions, as discussed in the blog Spark: ACID compliant or not. Spark addresses the problem by working with Delta Lake. Delta Lake acts as an intermediary service between Apache Spark and the storage system. Instead of directly interacting with the storage layer, Continue Reading
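To make the intermediary role concrete, here is a hedged sketch of writing and reading a Delta table from Spark; the session configs, sample data, and the /tmp path are assumptions based on the standard Delta Lake setup, not details taken from the post.

```scala
import org.apache.spark.sql.SparkSession

object DeltaAcidSketch extends App {
  // Assumes the Delta Lake library (delta-core / delta-spark) is on the classpath.
  val spark = SparkSession.builder()
    .appName("delta-acid-sketch")
    .master("local[*]")
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
      "org.apache.spark.sql.delta.catalog.DeltaCatalog")
    .getOrCreate()

  import spark.implicits._

  val users = Seq((1, "alice"), (2, "bob")).toDF("id", "name")

  // Writing with format("delta") routes the write through Delta's transaction
  // log rather than dropping bare Parquet files, which is what makes the
  // commit atomic: readers see either all of this write or none of it.
  users.write.format("delta").mode("overwrite").save("/tmp/delta/users")

  // Reads consult the same log, so only committed data is visible.
  spark.read.format("delta").load("/tmp/delta/users").show()
}
```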