
Provisioning an S3 bucket with Crossplane

Reading Time: 3 minutes Crossplane has been gaining popularity recently. It works well alongside Kubernetes: we can now manage both the deployment of microservices in a k8s cluster and the managed components of the cloud (cloud services) using a single tool. Moreover, you can configure the same deployment tool (such as Argo CD) for both. Pre-requisites: AWS credentials. You will have to get the base64 Continue Reading
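With Crossplane, a cloud resource such as an S3 bucket is declared as a Kubernetes manifest. The sketch below assumes the community AWS provider for Crossplane and an already-installed `ProviderConfig` named `aws-provider-config`; the bucket name and region are illustrative.

```yaml
# Hypothetical Crossplane Bucket manifest (community AWS provider assumed)
apiVersion: s3.aws.crossplane.io/v1beta1
kind: Bucket
metadata:
  name: example-crossplane-bucket
spec:
  forProvider:
    locationConstraint: us-east-1   # bucket region
    acl: private
  providerConfigRef:
    name: aws-provider-config       # points at your AWS credentials setup
```

Applying this with `kubectl apply -f bucket.yaml` lets the cluster reconcile the bucket like any other Kubernetes object.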

How to Set Up an S3 Trigger with AWS Lambda?

Reading Time: 3 minutes What is AWS Lambda? AWS Lambda is a compute service that runs your code without requiring you to set up or maintain servers. In essence, this means that AWS Lambda development can be done with no concern for provisioning servers or other infrastructure. When data in an Amazon S3 bucket or an Amazon DynamoDB table changes, for example, your code can be launched in response using Continue Reading
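When S3 triggers a Lambda function, the function receives an event document describing the affected objects. A minimal handler sketch (the function and field names follow the standard S3 event notification shape; the return value is an assumption for illustration):

```python
import json
import urllib.parse

def lambda_handler(event, context):
    """Entry point Lambda invokes for each S3 event notification."""
    records = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        # Object keys arrive URL-encoded (e.g. spaces become '+')
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        records.append({"bucket": bucket, "key": key})
    return {"statusCode": 200, "body": json.dumps(records)}
```

From here the handler would typically fetch the object with boto3 and process it; the sketch only extracts the bucket and key.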

How to Deploy an S3 Bucket in AWS Using Terraform

Reading Time: 3 minutes In this blog, we will create a Terraform script for deploying an S3 bucket in AWS. Amazon S3 (Simple Storage Service) is a storage service in the AWS cloud. It allows you to store and access any amount of data. It stores all data as objects; that is, it is an object-based storage service. Using a Terraform script, the entire infrastructure can be managed by declaring all Continue Reading
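A minimal Terraform sketch for such a bucket might look like the following; the region, bucket name, and tags are illustrative placeholders, not values from the post.

```hcl
# Illustrative provider and bucket configuration
provider "aws" {
  region = "us-east-1"
}

resource "aws_s3_bucket" "example" {
  bucket = "my-example-bucket-12345"   # S3 bucket names must be globally unique
  tags = {
    Environment = "dev"
  }
}
```

Running `terraform init` followed by `terraform apply` would create the bucket from this declaration.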


Connect S3 Bucket via VPC Endpoint in AWS

Reading Time: 3 minutes Introduction Connecting to an S3 bucket via a VPC Endpoint in AWS can help you access objects in the bucket faster. A VPC Endpoint establishes a private connection between your VPC and supported AWS services; it requires no public IP addresses, Internet access, NAT device, VPN connection, or AWS Direct Connect. A VPC Endpoint policy is an IAM resource policy attached to an endpoint for controlling Continue Reading
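An endpoint policy is a JSON document in the standard IAM policy format. A sketch of a policy restricting the endpoint to reads from one bucket (the bucket name is a hypothetical placeholder):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::example-bucket/*"
    }
  ]
}
```

Attached to the VPC endpoint, this allows only `GetObject` calls against `example-bucket` through that endpoint, regardless of what the callers' own IAM policies permit.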

Saving Spark DataFrames on Amazon S3 Got Easier!

Reading Time: < 1 minute In our previous blog post, Congregating Spark Files on S3, we explained how we can upload files (saved in a Spark cluster) to Amazon S3. Well, I agree that the method explained in that post was a little complex and hard to apply. It also adds a lot of boilerplate to our code. So, we started working on simplifying it and finding an easier way to provide a wrapper around Spark Continue Reading

S3Ninja: An Introduction

Reading Time: 2 minutes S3Ninja is an emulator that emulates the S3 API. It provides an environment on your local system that supports uploading a file just as we do on S3. Currently it supports object methods only, such as GET, PUT, HEAD, and DELETE. S3Ninja can be used to upload files to your local system instead of S3, in order to write integration tests that involve file upload Continue Reading

AWS Services: AWS SDK in Scala with Play Framework

Reading Time: 3 minutes playing-aws-scala The following blog and attached code present a simple example of using Amazon Web Services the Scala way with Play Framework and AWScala, though in this blog I have implemented only Amazon Simple Storage Service (Amazon S3) functionality. AWScala: AWS SDK on the Scala REPL. AWScala enables Scala developers to easily work with Amazon Web Services in the Scala way. Though AWScala objects basically Continue Reading