Studio-DevOps

Automate deployment using AWS CodeDeploy

Reading Time: 6 minutes In this CodeDeploy blog series, we will write two blogs: the first covers the theory behind CodeDeploy, and the second covers the full end-to-end automation of an application deployment using CodeDeploy and Jenkins. Let’s start: AWS CodeDeploy is a deployment service through which we can easily automate our deployments.
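As a quick, hedged illustration of what that automation can look like from code, here is a minimal boto3 sketch that starts a CodeDeploy deployment from an S3 revision; the application name, deployment group, and bucket/key below are placeholders, not values from the post.

```python
import boto3  # AWS SDK for Python

# Hypothetical names: the application, deployment group, and S3 bucket/key
# holding the revision bundle are placeholders, not values from the blog.
codedeploy = boto3.client("codedeploy", region_name="us-east-1")

response = codedeploy.create_deployment(
    applicationName="my-sample-app",
    deploymentGroupName="my-sample-deployment-group",
    revision={
        "revisionType": "S3",
        "s3Location": {
            "bucket": "my-artifact-bucket",
            "key": "my-sample-app.zip",
            "bundleType": "zip",
        },
    },
    deploymentConfigName="CodeDeployDefault.OneAtATime",
    description="Deployment triggered from a CI/CD pipeline",
)
print("Started deployment:", response["deploymentId"])
```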

Azure Monitor: Collect Logs and Metrics from On-Premises

Reading Time: 5 minutes In this blog, we are going to discuss how to collect logs and metrics from Azure resources and on-premises infrastructure into Azure Monitor.

Databricks Deployment via Jenkins

Reading Time: 3 minutes In this blog, we will learn how to create Databricks deployment pipelines that deploy Databricks components (notebooks, libraries, config files, and packages) via Jenkins.
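For a rough idea of what one such pipeline step might do, the sketch below imports a notebook through the Databricks Workspace API 2.0 using plain requests; the workspace URL, token, and paths are assumptions for illustration and are not taken from the post.

```python
import base64
import requests  # plain REST call; the Databricks CLI would work just as well

# Hypothetical workspace URL, token, and notebook paths; in a Jenkins pipeline
# the token would come from a credentials binding, not be hard-coded.
DATABRICKS_HOST = "https://<your-workspace>.cloud.databricks.com"
TOKEN = "dapiXXXXXXXXXXXX"

with open("etl_notebook.py", "rb") as src:
    content = base64.b64encode(src.read()).decode("utf-8")

resp = requests.post(
    f"{DATABRICKS_HOST}/api/2.0/workspace/import",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={
        "path": "/Shared/etl_notebook",
        "format": "SOURCE",
        "language": "PYTHON",
        "content": content,
        "overwrite": True,
    },
)
resp.raise_for_status()
```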


Git Working Areas

Reading Time: 8 minutes Hello all, in this blog we will gain an understanding of Git internals. We will see how our content moves within a Git system, which will introduce us to the Git working areas, and we will see which Git commands move data to and from these areas. This blog will help boost our knowledge of Git operations. It will also serve …

Migration Assessment

Reading Time: 7 minutes The first step in migration is to calculate the cost of the move and the cost of what you are running in your current setup. This is useful if you’re planning a migration from an on-premises environment, a private hosting environment, or another cloud provider, or if you’re evaluating the opportunity to migrate and exploring what the assessment phase might look like. The assessment phase is …

Introduction to Cloud Migration

Reading Time: 4 minutes Cloud migration is the process of moving digital business operations into the cloud to leverage the advantages delivered by a successful digital transformation. Cloud migration is like a physical move, except it involves moving data, applications, and IT processes from some data centres to other data centres, instead of packing up and moving physical goods. Much like a move from a smaller office to a …

Flink on Kubernetes

Reading Time: 3 minutes Introduction: Apache Flink is a framework and distributed processing engine for stateful computations over unbounded and bounded data streams. Flink is designed to run in all common cluster environments and to perform computations at in-memory speed and at any scale. There are two kinds of Flink clusters: the session cluster and the job cluster. A job cluster is a dedicated cluster that runs a single job. The job is part of …

Networking in Google Cloud Platform

Reading Time: 6 minutes A Virtual Private Cloud (VPC) network, or simply a network, is a virtual version of a physical network. In Google Cloud networking, networks provide data connections into and out of cloud resources, mostly Compute Engine instances. Securing networks is critical to securing data and controlling access to resources. Google Cloud networking achieves flexible and logical isolation of unrelated resources through its different levels.

ChatOps: Make your life easy

Reading Time: 4 minutes Hey there! You must have heard of a term called QAOps; it’s becoming a common term nowadays among software professionals. So in this blog, let’s discuss what ChatOps is, and where, why, and how you should use it in your environment.

Migrating from VM to Kubernetes Engine with Anthos

Reading Time: 6 minutes What if we had an automatic way to convert VMs to containers? Does this sound like magic? Yes, ‘Migrate for Anthos’ does quite a bit of magic underneath to automatically convert VMs to containers.

Apache Spark: Handle Corrupt/Bad Records

Reading Time: 3 minutes Most of the time, writing ETL jobs becomes very expensive when it comes to handling corrupt records, and in such cases ETL pipelines need a good solution for handling them, because the larger the ETL pipeline is, the more complex it becomes to handle such bad records in between. Corrupt data includes missing information, incomplete information, schema mismatches, and differing formats or data types. Apache Spark: …
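One common way to keep such bad records visible, shown here as a minimal PySpark sketch (the schema, column name, and input path are illustrative assumptions, not taken from the post), is to read in permissive mode and route malformed rows into a corrupt-record column:

```python
from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, StringType, IntegerType

spark = SparkSession.builder.appName("bad-records-demo").getOrCreate()

# Explicit schema with an extra column that will capture malformed rows;
# the field names and input path are illustrative, not from the blog.
schema = StructType([
    StructField("id", IntegerType(), True),
    StructField("name", StringType(), True),
    StructField("_corrupt_record", StringType(), True),
])

df = (
    spark.read
    .schema(schema)
    .option("mode", "PERMISSIVE")  # keep bad rows instead of failing the job
    .option("columnNameOfCorruptRecord", "_corrupt_record")
    .json("people.json")
)

df.cache()  # needed before filtering on the corrupt-record column alone
bad_rows = df.filter(df["_corrupt_record"].isNotNull())
good_rows = df.filter(df["_corrupt_record"].isNull()).drop("_corrupt_record")
bad_rows.show(truncate=False)
```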

Ip6tables firewall

Reading Time: 4 minutes Hello readers, this blog will teach you about ip6tables and its use with some basic use cases. We will also see how ip6tables differs from iptables. What is iptables? iptables is a Linux command-line firewall that allows system administrators to manage incoming and outgoing traffic via a set of configurable table rules. iptables vs ip6tables: ip6tables is used to set up, maintain, and inspect the tables …

Create Jinja templates in Python Script

Reading Time: 2 minutes Hi readers, in this blog we will look at creating Jinja templates and passing variables to them from a Python script, as well as how to use them for creating dynamic pages. Jinja as a templating engine: Jinja2 is a full-featured template engine for Python. Jinja is similar to the Django template engine but provides Python-like expressions while ensuring that templates are evaluated in a sandbox. …
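A minimal sketch of that idea, rendering a template from a Python script; the template string and variable names here are illustrative, not taken from the post:

```python
from jinja2 import Template  # pip install Jinja2

# A small template using Jinja's Python-like expressions; the variable
# names and values are illustrative.
template = Template(
    "Hello {{ user }}!\n"
    "{% for item in items %}- {{ item }}\n{% endfor %}"
)

# Variables are passed from the Python script at render time.
page = template.render(user="reader", items=["home", "blog", "contact"])
print(page)
```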