QuickTip: Integrating Amazon S3 in your Scala Product


This post is a quick cheat sheet for integrating your Scala product with Amazon S3. The prerequisites are a valid S3 account and the keys to access it.

We have put our S3 credentials in our application.conf, and we use Typesafe Config to manage our configuration. Our example entries would be:
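A minimal sketch of the configuration, assuming illustrative key names (`aws.access-key`, `aws.secret-key`, `aws.bucket`) — pick whatever names suit your project:

```
# application.conf -- key names are illustrative
aws {
  access-key = "YOUR_ACCESS_KEY"
  secret-key = "YOUR_SECRET_KEY"
  bucket     = "your-bucket-name"
}
```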

Next, we need the AWS SDK dependency in the build.sbt file:
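For example, with the AWS SDK for Java v1 S3 module (the version number below is illustrative; use the latest 1.x release available):

```scala
// build.sbt -- version is an example, check Maven Central for the latest
libraryDependencies += "com.amazonaws" % "aws-java-sdk-s3" % "1.12.470"
```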

The code is then pretty straightforward. First we create the AWS client:
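A sketch of client construction, assuming the config keys shown earlier and the SDK v1 builder API; the region is an assumption, so substitute the region your bucket lives in:

```scala
import com.amazonaws.auth.{AWSStaticCredentialsProvider, BasicAWSCredentials}
import com.amazonaws.services.s3.{AmazonS3, AmazonS3ClientBuilder}
import com.typesafe.config.ConfigFactory

// Read the credentials we placed in application.conf
val config    = ConfigFactory.load()
val accessKey = config.getString("aws.access-key")
val secretKey = config.getString("aws.secret-key")

val credentials = new BasicAWSCredentials(accessKey, secretKey)

// Build the S3 client; "us-east-1" is a placeholder region
val s3Client: AmazonS3 = AmazonS3ClientBuilder
  .standard()
  .withCredentials(new AWSStaticCredentialsProvider(credentials))
  .withRegion("us-east-1")
  .build()
```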

and then use it for various operations on S3. Let us look at upload, delete, and checking whether a file exists.
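The three operations can be sketched as thin wrappers over the SDK v1 client built above; the helper names here are hypothetical, but `putObject`, `deleteObject`, and `doesObjectExist` are the standard SDK calls:

```scala
import java.io.File
import com.amazonaws.services.s3.AmazonS3
import com.amazonaws.services.s3.model.PutObjectResult

// Hypothetical helpers wrapping the AmazonS3 client created earlier

// Upload a local file under the given key
def upload(s3: AmazonS3, bucket: String, key: String, file: File): PutObjectResult =
  s3.putObject(bucket, key, file)

// Delete the object stored under the given key
def delete(s3: AmazonS3, bucket: String, key: String): Unit =
  s3.deleteObject(bucket, key)

// Check whether an object exists under the given key
def exists(s3: AmazonS3, bucket: String, key: String): Boolean =
  s3.doesObjectExist(bucket, key)
```

Each call here goes over the network to S3, so in production code you would typically wrap them in `Try` or a `Future` to handle `AmazonServiceException` failures gracefully.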

Written by 

Vikas is the CEO and Co-Founder of Knoldus Inc. Knoldus does niche Reactive and Big Data product development on Scala, Spark, and Functional Java. Knoldus has a strong focus on software craftsmanship which ensures high-quality software development. It partners with the best in the industry like Lightbend (Scala Ecosystem), Databricks (Spark Ecosystem), Confluent (Kafka) and Datastax (Cassandra). Vikas has been working in the cutting edge tech industry for 20+ years. He was an ardent fan of Java with multiple high load enterprise systems to boast of till he met Scala. His current passions include utilizing the power of Scala, Akka and Play to make Reactive and Big Data systems for niche startups and enterprises who would like to change the way software is developed. To know more, send a mail to hello@knoldus.com or visit www.knoldus.com
