How To Run Custom Binaries On AWS Lambda? [2 Ways]

Reading Time: 2 minutes

As you get deeper into serverless, you will eventually want to know how to run custom binaries on AWS Lambda. Lambda does not support every framework or runtime, so sometimes you have to build your own makeshift solutions on top of the runtimes that are available.

Before we go on and add our own binaries, we first need to learn about AWS Lambda and how it actually works.

AWS Lambda

What Is AWS Lambda?

AWS Lambda is a serverless computing service. With Lambda, you break your application into self-contained functions that run on one of the supported runtimes.

Of course, serverless doesn’t mean there are no servers; it just means you don’t have to create or manage them. When you create a Lambda function, AWS automatically takes care of the infrastructure for you.

How Does AWS Lambda Work?

First things first, all available Lambda runtimes on AWS are Linux-based, so you cannot have a Windows Lambda.

Whenever you invoke a Lambda function, AWS takes your deployment package in the background and runs it inside a container that is part of a cluster which, again, is managed by AWS.

AWS also allocates the container the memory you configure for the function, and it charges you for that allocated memory multiplied by the duration of the execution. No Lambda function can run for more than 15 minutes.
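
To make this concrete, here is a minimal sketch of what a Python Lambda function looks like; the event field used here is just a placeholder, not part of any real API.

import json

def lambda_handler(event, context):
    """Entry point that Lambda calls for every invocation."""
    name = event.get('name', 'world')  # hypothetical input field
    return {
        'statusCode': 200,
        'body': json.dumps({'message': f'Hello, {name}!'})
    }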

How To Run Custom Binaries On AWS Lambda?

Sometimes, the available Lambda runtimes don’t even ship some basic binaries. So, coming back to the real question, how can we run our custom binary? There are two ways: download it from S3 at runtime, or package it in a Lambda layer.

The first way uses S3. We know that the maximum size of a zipped Lambda deployment package is 50 MB, so if our binary pushes us past that limit, we need to fetch it from S3 instead.

We can upload our binary to S3 and then download it at runtime. You can do it in Python like this:

import os
import boto3

def load_file_from_S3(key, bucket):
    """Download a file from S3 into Lambda's writable /tmp/ folder."""
    s3 = boto3.client('s3')
    s3.download_file(bucket, key, f'/tmp/{key}')


def lambda_handler(event, context):
    """Main function run when Lambda is invoked"""
    key_in = ''
    bucket_in = ''

    load_file_from_S3(key_in, bucket_in)

    # The downloaded file is not executable by default, so fix its permissions.
    os.system('chmod 755 /tmp/my-binary')

    # Run the binary and capture its stdout.
    output = os.popen("/tmp/my-binary [command]").read()
    return output
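
Getting the binary into S3 is a one-time step done outside the function, for example from your own machine. As a rough sketch, assuming placeholder bucket and key names, the upload could look like this:

import boto3

# One-time upload of the compiled binary (run locally, not inside Lambda).
# 'my-binaries-bucket' and 'my-binary' are placeholder names.
s3 = boto3.client('s3')
s3.upload_file('./my-binary', 'my-binaries-bucket', 'my-binary')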

The second way is to use a Lambda layer. According to AWS, the definition of a layer is:

A Lambda layer is a .zip file archive that can contain additional code or other content. A layer can contain libraries, a custom runtime, data, or configuration files. Use layers to reduce deployment package size and to promote code sharing and separation of responsibilities so that you can iterate faster on writing business logic.

That means we can put our binary in a layer, attach that layer to our function, and let Lambda know about it by setting the PATH.

For example, if you’ve added your binary in a folder called my_custom_binary then you’ve to set PATH like the following:

/var/lang/bin:/usr/local/bin:/usr/bin/:/bin:/opt/bin:/opt/my_custom_binary
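
Once the layer is attached to the function and PATH is updated, the handler can call the binary by name. Here is a minimal sketch, assuming the layer ships an executable called my-binary inside the my_custom_binary folder:

import subprocess

def lambda_handler(event, context):
    """Run a binary shipped in a Lambda layer; the PATH set above lets us call it by name."""
    # 'my-binary' and its argument are placeholders for your actual executable.
    result = subprocess.run(['my-binary', '--help'], capture_output=True, text=True)
    return {'stdout': result.stdout, 'stderr': result.stderr}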

Written by Mohit Saxena

Mohit Saxena is a Software Consultant with more than 2 years of experience. He is always up for new challenges and loves to move aggressively towards completing software requirements. On a personal front, he loves to climb mountains and is a big foodie.