Reading Time: 3 minutes In this blog, I will explain what Two-Way SMS is and how we can easily implement it using Amazon Pinpoint. Here is the architecture diagram of the Two-Way SMS infrastructure that we will use for setting it up.
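For a quick preview, the outbound half of a two-way SMS setup usually goes through Pinpoint's SendMessages API. The sketch below (Python with boto3) shows roughly what that call looks like; the application ID, phone numbers, and region are hypothetical placeholders, and inbound replies would arrive via the SNS topic configured for two-way SMS rather than through this code.

```python
import boto3

# Hypothetical values for illustration only.
APPLICATION_ID = "your-pinpoint-project-id"
DESTINATION_NUMBER = "+12065550100"
ORIGINATION_NUMBER = "+18445550101"  # the dedicated number enabled for two-way SMS

pinpoint = boto3.client("pinpoint", region_name="us-east-1")

# Send an outbound SMS; replies from the recipient are delivered to the SNS topic
# attached to the origination number in the two-way SMS configuration.
response = pinpoint.send_messages(
    ApplicationId=APPLICATION_ID,
    MessageRequest={
        "Addresses": {DESTINATION_NUMBER: {"ChannelType": "SMS"}},
        "MessageConfiguration": {
            "SMSMessage": {
                "Body": "Reply YES to confirm your appointment.",
                "MessageType": "TRANSACTIONAL",
                "OriginationNumber": ORIGINATION_NUMBER,
            }
        },
    },
)
print(response["MessageResponse"]["Result"][DESTINATION_NUMBER]["DeliveryStatus"])
```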
Reading Time: 4 minutes Have you ever faced a use case or scenario where you have to load JSON data into Snowflake? As we know, JSON is one of the most common data formats for storing and exchanging information between systems, and it is a relatively concise format. If we are implementing a database solution, it is very common that we will come across a system that provides data in Continue Reading
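As a minimal sketch of the idea, Snowflake stores semi-structured JSON in a VARIANT column and lets you query into it with path notation. The example below uses the Python connector with hypothetical connection details and table names; it is only an illustration of the pattern, not the full loading pipeline covered in the post.

```python
import snowflake.connector

# Hypothetical connection details; replace with your own account, user, and password.
conn = snowflake.connector.connect(account="my_account", user="my_user", password="***")
cur = conn.cursor()

# A VARIANT column can hold JSON documents directly.
cur.execute("CREATE OR REPLACE TABLE raw_events (payload VARIANT)")

# PARSE_JSON turns a JSON string into a VARIANT value (INSERT ... SELECT is the usual pattern).
cur.execute("""
    INSERT INTO raw_events
    SELECT PARSE_JSON('{"id": 1, "user": {"name": "alice"}, "tags": ["a", "b"]}')
""")

# Path notation reaches into the nested structure.
cur.execute("SELECT payload:user.name::string, payload:tags[0] FROM raw_events")
print(cur.fetchall())
```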
Reading Time: 4 minutes We live in a world driven by data, and every second we are processing a large amount of data, using it, analyzing it, and transforming it. Data is essential for businesses these days, so the need to handle dynamically generated data is important. As the number, variety, and velocity of data sources grow, new architectures and technologies are needed. Technologies like Amazon Kinesis are Continue Reading
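To give a flavour of what producing to a stream looks like, here is a small Python/boto3 sketch that writes a single record to a Kinesis data stream. The stream name, region, and payload are hypothetical; the full blog goes into the architecture around this.

```python
import boto3
import json
import time

# Hypothetical stream name for illustration.
STREAM_NAME = "clickstream-demo"

kinesis = boto3.client("kinesis", region_name="us-east-1")

# Each record carries a payload plus a partition key that determines its shard.
record = {"event": "page_view", "user_id": "u-42", "ts": time.time()}
response = kinesis.put_record(
    StreamName=STREAM_NAME,
    Data=json.dumps(record).encode("utf-8"),
    PartitionKey=record["user_id"],
)
print("Written to shard:", response["ShardId"])
```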
Reading Time: 4 minutes Hello folks, I hope you are having a productive day during the COVID-19 pandemic. Let's move on to the next blog in the series on API automation. Most of us do automation using the Postman tool, and while performing automation with Postman we have to integrate many other tools and APIs with it. Similarly, to use the AWS APIs we need to create Continue Reading
Reading Time: 2 minutes This blog pertains to the Cloning feature in Snowflake, and I will explain everything you need to know about this feature with practical examples. So let's get started. Zero Copy Clone Cloning is also known as Zero Copy Clone in Snowflake. It is used to create a copy of a table, a schema, or a database. In most databases, in order to make a copy Continue Reading
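As a minimal sketch of the syntax (run here through the Python connector, with hypothetical object names and credentials), a clone of a table, schema, or database is a single DDL statement:

```python
import snowflake.connector

# Hypothetical connection details; replace with your own.
conn = snowflake.connector.connect(account="my_account", user="my_user", password="***")
cur = conn.cursor()

# Zero Copy Clone: these are metadata-only copies, so no data is physically
# duplicated until the clone and its source start to diverge.
cur.execute("CREATE TABLE orders_clone CLONE orders")
cur.execute("CREATE SCHEMA analytics_clone CLONE analytics")
cur.execute("CREATE DATABASE sales_dev CLONE sales")
```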
Reading Time: 5 minutes This blog pertains to Time Travel and Fail-safe in Snowflake, and I will explain everything you need to know about these features with practical examples. So let's get started. Introduction to Time Travel Snowflake allows accessing historical data from a point in the past, even if that data has since been modified or deleted. Using the time travel functionality, a number of Continue Reading
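For a taste of what that looks like in practice, the sketch below (Python connector, hypothetical table name, credentials, and query ID) queries a table as it existed in the past and restores a dropped table from its retention window:

```python
import snowflake.connector

# Hypothetical connection details; replace with your own.
conn = snowflake.connector.connect(account="my_account", user="my_user", password="***")
cur = conn.cursor()

# Query the table as it looked 30 minutes ago (OFFSET is in seconds).
cur.execute("SELECT * FROM orders AT(OFFSET => -60*30)")

# Or query it as it was just before a given statement (hypothetical query ID).
cur.execute("SELECT * FROM orders BEFORE(STATEMENT => '8e5d0ca9-005e-44e6-b858-a8f5b37c5726')")

# Restore an accidentally dropped table while it is still within the retention period.
cur.execute("UNDROP TABLE orders")
```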
Reading Time: 5 minutes In this blog, we will discuss loading streaming data into a Snowflake table using Snowpipe. But before that, if you haven't read the previous part of this blog, i.e., Loading Bulk Data into Snowflake, I would suggest you go through it. Now that we are all set, let's get started and see what Snowpipe is all about. Introduction Snowpipe is a mechanism provided by Continue Reading
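As a rough sketch of the core object involved, a pipe wraps a COPY INTO statement over a stage; with auto-ingest enabled, Snowflake loads new files as their storage event notifications arrive. The stage, table, and connection details below are hypothetical placeholders.

```python
import snowflake.connector

# Hypothetical connection details, stage, and table names; replace with your own.
conn = snowflake.connector.connect(account="my_account", user="my_user", password="***")
cur = conn.cursor()

# The pipe continuously applies this COPY INTO to newly arriving files on the stage.
cur.execute("""
    CREATE OR REPLACE PIPE events_pipe AUTO_INGEST = TRUE AS
    COPY INTO raw_events
    FROM @events_stage
    FILE_FORMAT = (TYPE = 'JSON')
""")

# Check the pipe's current status and pending file count.
cur.execute("SELECT SYSTEM$PIPE_STATUS('events_pipe')")
print(cur.fetchone())
```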
Reading Time: 5 minutes This blog pertains to Loading Data into Snowflake, and I will explain the various steps involved in this process. So let's get started. Before moving ahead, you can visit the blog on understanding the basics of Snowflake Data Warehouse in case you want to refresh your concepts. Now let's talk about the actual topic for which you have clicked on this blog. To Continue Reading
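In outline, bulk loading follows a stage, upload, copy sequence. The sketch below (Python connector, hypothetical file path, stage, and table) shows that shape; the post itself walks through each step in detail.

```python
import snowflake.connector

# Hypothetical connection details, file path, and table; replace with your own.
conn = snowflake.connector.connect(account="my_account", user="my_user", password="***")
cur = conn.cursor()

# 1. Create an internal stage to hold the raw files.
cur.execute("CREATE OR REPLACE STAGE my_stage")

# 2. Upload a local CSV to the stage (PUT runs from a client such as SnowSQL or
#    the Python connector; files are gzip-compressed on upload by default).
cur.execute("PUT file:///tmp/customers.csv @my_stage")

# 3. Bulk-load the staged file into the target table.
cur.execute("""
    COPY INTO customers
    FROM @my_stage/customers.csv.gz
    FILE_FORMAT = (TYPE = 'CSV' SKIP_HEADER = 1)
""")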
Reading Time: 6 minutes In the CodeDeploy blog series, we are going to write two blogs: the first blog covers the CodeDeploy theory, and in the second blog we will cover the full end-to-end practical automation of application deployment using CodeDeploy and Jenkins. Let's start. AWS CodeDeploy is basically a deployment service through which we can easily automate our deployments.
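For orientation, triggering a deployment programmatically (for example, from a Jenkins job) boils down to one API call against an existing application and deployment group. The Python/boto3 sketch below uses hypothetical application, deployment group, and S3 artifact names; the practical blog covers the real pipeline.

```python
import boto3

codedeploy = boto3.client("codedeploy", region_name="us-east-1")

# Hypothetical application, deployment group, and revision; replace with your own.
response = codedeploy.create_deployment(
    applicationName="my-web-app",
    deploymentGroupName="production",
    revision={
        "revisionType": "S3",
        "s3Location": {
            "bucket": "my-artifact-bucket",
            "key": "builds/my-web-app-1.0.0.zip",
            "bundleType": "zip",
        },
    },
    description="Triggered from a Jenkins pipeline",
)
print("Deployment started:", response["deploymentId"])
```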
Reading Time: 3 minutes Most of the time, writing ETL jobs becomes very expensive when it comes to handling corrupt records, and in such cases ETL pipelines need a good solution for them, because the larger the ETL pipeline is, the more complex it becomes to handle such bad records along the way. Corrupt data includes: missing information, incomplete information, schema mismatches, and differing formats or data types. Apache Spark: Continue Reading
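One common starting point is Spark's read modes, which let you keep, drop, or fail on malformed records. The PySpark sketch below uses a hypothetical schema and input path; the post goes further into the options.

```python
from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, StringType, IntegerType

spark = SparkSession.builder.appName("corrupt-records-demo").getOrCreate()

# Hypothetical schema and input path for illustration.
schema = StructType([
    StructField("id", IntegerType(), True),
    StructField("name", StringType(), True),
    StructField("_corrupt_record", StringType(), True),  # receives unparseable rows
])

# PERMISSIVE (the default) keeps bad rows and stores their raw text in _corrupt_record;
# DROPMALFORMED silently drops them; FAILFAST aborts on the first bad row.
df = (spark.read
      .schema(schema)
      .option("mode", "PERMISSIVE")
      .option("columnNameOfCorruptRecord", "_corrupt_record")
      .json("/data/events/*.json"))

df.cache()  # cache before isolating corrupt rows, as Spark requires for this column
bad_rows = df.filter(df["_corrupt_record"].isNotNull())
bad_rows.show(truncate=False)
```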
Reading Time: 3 minutes Businesses worldwide are discovering the power of new big data processing and analytics frameworks like Apache Hadoop and Apache Spark, but they are also discovering some of the challenges of operating these technologies in on-premises data lake environments. They may also have concerns about the future of their current distribution vendor. Common problems of on-premises big data environments include a lack of agility, excessive costs, Continue Reading