Amazon S3

Sending AWS CloudTrail logs to the S3 Bucket

Reading Time: 5 minutes Hello Readers! We are back with another exciting AWS service: CloudTrail. This blog shows how to send AWS CloudTrail logs to an S3 bucket. Before moving to the S3 bucket, let’s first discuss what AWS CloudTrail is and what it is used for. AWS CloudTrail: AWS CloudTrail is a service that lets you track changes to your AWS resources, including Amazon Continue Reading
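For a trail to deliver logs, the destination bucket needs a policy that lets CloudTrail check the bucket ACL and write objects. A minimal sketch of that policy as a Python helper is below; the bucket name and account id are illustrative placeholders:

```python
import json

def cloudtrail_bucket_policy(bucket: str, account_id: str) -> dict:
    """Build the S3 bucket policy CloudTrail needs to deliver log files.

    The bucket name and account id passed in are placeholders for
    illustration; substitute your own values.
    """
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "AWSCloudTrailAclCheck",
                "Effect": "Allow",
                "Principal": {"Service": "cloudtrail.amazonaws.com"},
                "Action": "s3:GetBucketAcl",
                "Resource": f"arn:aws:s3:::{bucket}",
            },
            {
                "Sid": "AWSCloudTrailWrite",
                "Effect": "Allow",
                "Principal": {"Service": "cloudtrail.amazonaws.com"},
                "Action": "s3:PutObject",
                "Resource": f"arn:aws:s3:::{bucket}/AWSLogs/{account_id}/*",
                # CloudTrail must write objects with this ACL
                "Condition": {
                    "StringEquals": {"s3:x-amz-acl": "bucket-owner-full-control"}
                },
            },
        ],
    }

print(json.dumps(cloudtrail_bucket_policy("my-trail-bucket", "123456789012"), indent=2))
```

You would attach the printed JSON to the bucket (for example via the S3 console’s bucket-policy editor) before pointing the trail at it.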

How to Transfer data between Amazon EC2 and S3

Reading Time: 3 minutes Hello Readers! In this blog, we will see how to transfer data between Amazon EC2 and S3 buckets. Essentially, we will upload files to an S3 bucket from an EC2 instance. EC2 is a compute service and S3 is an object storage service, both offered by AWS. Let’s do it! Step 1: Create an EC2 instance by using which you Continue Reading
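The upload step can be sketched in a few lines of Python with boto3, assuming the EC2 instance has an IAM role (or configured credentials) granting S3 access; the bucket name, prefix, and file path below are placeholders:

```python
def object_key(prefix: str, filename: str) -> str:
    """Build the destination key under a prefix; names are illustrative."""
    return f"{prefix.rstrip('/')}/{filename}"

def upload(local_path: str, bucket: str, key: str) -> None:
    """Upload one local file from the instance to the given S3 bucket."""
    import boto3  # imported lazily so the key helper works without AWS access
    boto3.client("s3").upload_file(local_path, bucket, key)

# Example (requires credentials or an instance role; bucket name is a placeholder):
# upload("/tmp/report.csv", "my-demo-bucket", object_key("uploads", "report.csv"))
```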

How Caching and Invalidations in AWS CloudFront works

Reading Time: 3 minutes In my previous blog, we talked about what CloudFront is and how it works; in this blog, we will learn how CloudFront caching and invalidations work. CloudFront Caching: Reducing the number of requests sent directly to our origin server is one of the goals of using CloudFront. Thanks to CloudFront caching, more objects are served from CloudFront edge locations, which are nearer to users. This Continue Reading
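An invalidation removes cached copies from edge locations before they expire. A minimal sketch with boto3, assuming a hypothetical distribution id and example paths:

```python
import time

def invalidation_batch(paths: list[str]) -> dict:
    """Build the InvalidationBatch payload CloudFront expects.

    CallerReference only needs to be unique per request, so a timestamp
    is enough for a sketch like this.
    """
    return {
        "Paths": {"Quantity": len(paths), "Items": paths},
        "CallerReference": str(time.time()),
    }

def invalidate(distribution_id: str, paths: list[str]) -> None:
    import boto3  # lazy import so the payload helper runs without credentials
    boto3.client("cloudfront").create_invalidation(
        DistributionId=distribution_id,
        InvalidationBatch=invalidation_batch(paths),
    )

# Example (distribution id is a placeholder):
# invalidate("EDFDVBD6EXAMPLE", ["/index.html", "/images/*"])
```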

AWS CloudFront – A Quick Overview!!

Reading Time: 3 minutes In this blog, we’ll learn what CloudFront is, with a working demo. So, let’s get started. Amazon CloudFront is a web service that speeds up the distribution of our static and dynamic web content, such as .html, .css, .js, and image files, to our users. CloudFront delivers our content through a global network of data centers known as edge locations. When a user Continue Reading

Apache Spark: Read Data from S3 Bucket

Reading Time: < 1 minute Amazon S3 Accessing an S3 bucket through Spark: edit the spark-defaults.conf file and add the three lines below, which set your S3 access key, secret key, and file system implementation: spark.hadoop.fs.s3a.access.key “s3keys” spark.hadoop.fs.s3a.secret.key “yourkey” spark.hadoop.fs.s3a.impl org.apache.hadoop.fs.s3a.S3AFileSystem
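Laid out as a config file, the three settings above look like this (the key values are placeholders you replace with your own credentials):

```properties
# spark-defaults.conf — S3A connector settings (credential values are placeholders)
spark.hadoop.fs.s3a.access.key   s3keys
spark.hadoop.fs.s3a.secret.key   yourkey
spark.hadoop.fs.s3a.impl         org.apache.hadoop.fs.s3a.S3AFileSystem
```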

How to create a bucket on Amazon S3 and get security credential keys?

Reading Time: 3 minutes Amazon S3 has a simple web services interface that you can use to store and retrieve any amount of data, at any time, from anywhere on the web. This blog describes how you can create buckets on S3, how to get credential keys, and where you should keep your credential keys. CREATION OF BUCKET First of all, you need to sign up for AWS; after that Continue Reading
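Besides the console, a bucket can also be created programmatically. A small boto3 sketch follows; the bucket name and region are placeholders. One real quirk worth encoding: outside us-east-1, S3 requires an explicit LocationConstraint, while in us-east-1 it must be omitted:

```python
def bucket_configuration(region: str) -> dict:
    """Extra create_bucket kwargs: a LocationConstraint is required
    outside us-east-1 and must be omitted inside it."""
    if region == "us-east-1":
        return {}
    return {"CreateBucketConfiguration": {"LocationConstraint": region}}

def create_bucket(name: str, region: str) -> None:
    import boto3  # lazy import so the helper above runs without credentials
    boto3.client("s3", region_name=region).create_bucket(
        Bucket=name, **bucket_configuration(region)
    )

# Example (bucket names are globally unique; this one is a placeholder):
# create_bucket("my-unique-bucket-name", "ap-south-1")
```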

How to upload any file on Amazon S3 using Rust?

Reading Time: 3 minutes Welcome, everyone, to file upload on Amazon S3 using Rust. Amazon S3 [Amazon Simple Storage Service] provides virtually limitless storage on the internet. For bucket creation and security credentials, please refer to my last blog. This blog explains the following requests using Rust: sending a request to an AWS S3 bucket, listing the objects in the bucket, putting an object in the bucket, deleting Continue Reading