Have you ever used an S3 bucket with your Rust application? If not, this blog gives you the working knowledge you need to use an S3 bucket from Rust.
In this blog, I will show you how to connect your Rust application to S3 and upload files to it.
An AWS S3 bucket is cloud object storage, used to store data such as photos, documents, and videos.

Here, we are going to use a Rust library called rust-s3 to bridge the gap between the S3 bucket and the Rust programming language.
To upload data into an S3 bucket we have to follow certain steps:
Step 1: Add the rust-s3 crate.
Step 2: Instantiate a Bucket.
Step 3: Upload a file to the Bucket.
To upload data into the S3 bucket, all you need are your S3 bucket credentials: bucket name, bucket region, access key, and secret key.
Step 1:
So, to use the functionality of rust-s3, we have to add the rust-s3 crate to the Cargo.toml file.
In Cargo.toml
[dependencies]
rust-s3 = "0.11.0"
Step 2:
Now, we need to bring the required types of rust-s3 into scope by importing them with the ‘use’ keyword.
Then, we’ll instantiate the bucket.
In main.rs
use s3::bucket::Bucket;
use s3::credentials::Credentials;
use s3::region::Region;
The first line brings in the Bucket struct declared inside the bucket module. The Bucket struct contains the following fields:
In bucket.rs module (mod bucket -> bucket.rs)
pub struct Bucket {
    /// bucket name
    pub name: String,
    /// bucket region
    pub region: Region,
    /// bucket credentials
    pub credentials: Credentials,
}
The second line brings in the Credentials struct declared inside the credentials module:
In credentials.rs (mod credentials -> credentials.rs)
pub struct Credentials {
    /// AWS public access key.
    pub access_key: String,
    /// AWS secret key.
    pub secret_key: String,
    /// Temporary token issued by AWS service.
    pub token: Option<String>,
    _private: (),
}
The third line brings in the Region enum declared inside the region module:
In region.rs (mod region -> region.rs)
pub enum Region {
    /// us-east-1
    UsEast1,
    /// ap-south-1
    ApSouth1,
    // ... (remaining region variants elided)
    Custom(String),
}
These types have their own implementations; you can explore their internal functionality in the rust-s3 docs.
After importing the types, we have to set the credentials of the bucket so we can create an instance of it.
Here, we use dummy data to instantiate a bucket; you have to use your own bucket’s credentials. Let’s create the dummy bucket data.
/// bucket name
const BUCKET: &str = "test_bucket";
/// bucket region
const REGION: &str = "ap-south-1";
/// bucket access key
const ACCESS_KEY: &str = "ACCESSKEYEXAMPLE";
/// bucket secret key
const SECRET_KEY: &str = "SECRETEXAMPLEKEY";

fn main() {
    // Initialize Credentials directly with the access key, secret key, and optional token
    let credentials: Credentials = Credentials::new(ACCESS_KEY, SECRET_KEY, None, None);
    // Parse the region string into a Region
    let region: Region = REGION.parse().unwrap();
    // Instantiate a new `Bucket`
    let bucket: Bucket = Bucket::new(BUCKET, region, credentials);
}
Bucket::new() is used to instantiate a new bucket.
The return type of Bucket::new() is Bucket itself, which is the instance of the S3 bucket; with the help of this instance we are able to perform CRUD operations on the S3 bucket.
The implementation of the Bucket::new() is :
pub fn new(name: &str, region: Region, credentials: Credentials) -> Bucket {
    Bucket {
        name: name.into(),
        region,
        credentials,
    }
}
Here, the arguments are moved into the fields of the Bucket struct and the Bucket instance is returned.
Step 3:
After setting up all the parameters related to the bucket, we are now ready to upload data into the bucket.
So, to upload data into the bucket we need:
1. The instance of the Bucket
2. The data (text, file, etc.) that we are going to upload into the S3 bucket
Here we use the put method of the Bucket to upload data into the S3 bucket.
If S3 receives multiple put requests for the same object key, it overwrites the object in the S3 bucket and returns a success response, instead of failing with a duplicate-object error.
use std::fs;

// Read the file to upload into the S3 bucket
let file: Vec<u8> = fs::read("file_path").unwrap();
// Upload the file into the S3 bucket
let (_, code) = bucket.put("/test.file", &file, "text/plain").unwrap();
// Validate the success response of the put method
assert_eq!(200, code);
This code will upload the data into the S3 bucket and return the HTTP success code 200 on a successful upload, or an error if something goes wrong while uploading. The implementation of the put method is:
pub fn put(&self, path: &str, content: &[u8], content_type: &str) -> S3Result<(Vec<u8>, u32)> {
    let command = Command::Put {
        content,
        content_type,
    };
    let request = Request::new(self, path, command);
    request.execute()
}
This method takes three parameters:
1. path: where the data is going to be stored in the S3 bucket (includes the file name)
2. content: the content of the file as a byte slice (&[u8])
3. content_type: the MIME type of the file being uploaded
And it uses:
enum Command: takes content and content_type in its Put variant (Command::Put); for more detail, see the rust-s3 source.
struct Request: Request::new() takes the bucket (&self), the path, and the command, and returns a Request instance; request.execute() then sends the request. For more detail, see the rust-s3 source.
References:
docs.rs/rust-s3/0.11.0/s3
durch/rust-s3