From Serverless to Stateful Serverless


Hi all. I hope you have all heard of Serverless architecture. It’s quite popular and gaining a lot of traction. In this blog, I’ll first draw your attention to Serverless architecture and its pros and cons, and then to the concept of Stateful Serverless. Hence the title: From Serverless to Stateful Serverless. So, let’s start with Serverless.

What is Serverless?

Serverless, or Serverless Computing, is an execution model where the cloud provider (AWS, Azure, or Google Cloud) is responsible for executing a piece of code by dynamically allocating the resources it needs. Serverless allows us to build and run applications and services without thinking about servers. Further, it eliminates infrastructure management tasks such as server or cluster provisioning, operating system maintenance, and capacity provisioning.

Serverless encompasses two different but overlapping areas:

  1. BaaS (Backend-as-a-Service): Applications that significantly or fully incorporate third-party, cloud-hosted applications and services to manage server-side logic and state. These are typically rich-client applications, such as single-page web apps or mobile apps, using the vast ecosystem of cloud-accessible databases (e.g., Firebase), authentication services (e.g., Auth0), and so on.
  2. FaaS (Functions-as-a-Service): Serverless can also mean applications where the developer writes server-side logic, but, unlike in traditional architectures, the application runs in stateless compute containers. For example, AWS Lambda.

Our topic is focused more on the FaaS aspect of serverless computing, and we will continue to explore in that direction.


FaaS lets developers write and update a piece of code on the fly. It executes only in response to an event, for example a user clicking an element in a web application. This makes it easy to scale code and is a cost-efficient way to implement microservices. In the serverless world, we typically need to adopt a more microservice-based architecture. Here’s a visualization of how this transformation looks.

From Monolithic to serverless transformation.
We typically tend towards microservice architecture for FaaS.

FaaS functions are naturally stateless, i.e., they provide a purely functional transformation of their input to their output.
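A minimal sketch of such a stateless function, loosely modeled on an AWS Lambda-style handler (the event shape and response format here are assumptions for illustration, not a specific provider’s API):

```python
import json

def handler(event, context=None):
    # A stateless FaaS-style function: a pure transformation of the input
    # event to an output. It keeps no local state; every invocation
    # depends only on the event it receives.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

Calling `handler({"name": "FaaS"})` always produces the same response for the same input, which is exactly what lets the provider run it in any container, anywhere.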

FaaS Characteristics

Let’s go through some characteristics of FaaS.

  1. FaaS is about running backend code without managing our own server systems or our own long-lived server applications.
  2. FaaS functions are regular applications when it comes to language and environment. They do not require coding in a specific framework or library.
  3. Deployment is very different from traditional systems. We have no server applications to run ourselves. In FaaS we upload the code for our function to the FaaS provider, and the provider does everything else necessary: provisioning resources, instantiating VMs, and so on.
  4. Horizontal scaling is fully automatic, elastic, and managed by the provider. The vendor handles all underlying resource provisioning and allocation. The compute containers executing our functions are ephemeral; the FaaS provider creates and destroys them purely driven by runtime needs.
  5. In FaaS, the provider defines a few event types through which functions can be triggered, such as a scheduled task or a stream input. Most providers also trigger functions in response to inbound HTTP requests.
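To make the event-trigger model in point 5 concrete, here is a toy sketch of provider-side event routing. All the names are hypothetical; a real provider such as AWS Lambda wires this plumbing up for us:

```python
from typing import Callable, Dict

# Registry mapping event types (e.g., "http", "schedule") to functions.
_registry: Dict[str, Callable[[dict], dict]] = {}

def on_event(event_type: str):
    """Register a function to be triggered for a given event type."""
    def decorator(fn):
        _registry[event_type] = fn
        return fn
    return decorator

def dispatch(event: dict) -> dict:
    """The 'provider' looks up the function for the event's type and invokes it."""
    fn = _registry[event["type"]]
    return fn(event)

@on_event("http")
def handle_http(event: dict) -> dict:
    return {"status": 200, "body": f"GET {event['path']}"}

@on_event("schedule")
def handle_schedule(event: dict) -> dict:
    return {"status": 200, "body": "scheduled tick"}
```

The developer only writes and registers the handlers; the provider owns the dispatch loop and all the infrastructure behind it.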

Cold start vs Warm starts

As I have mentioned above, compute containers are ephemeral. Providers create and destroy them as per the runtime needs. This brings us to the concept of cold and warm start for our functions.

  1. Cold Start: Our functions can execute only inside a compute container. If no container is running when an event triggers, the provider must bring one up first, with some associated latency. This is a cold start, and it generally takes longer to execute the function completely.
  2. Warm Start: When a function’s execution completes, the provider can keep the container up for a little while. If another event triggers during this time, the container responds far more quickly. This is typically known as a warm start, and it results in a significant performance boost over a cold start.
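A common way to exploit warm starts is to keep expensive setup (a database client, a parsed config) at container scope, outside the handler, so it is paid only once per container. A sketch, with the setup cost simulated (all names are illustrative):

```python
import time

_client = None        # lives at container scope; survives warm invocations
_init_count = 0       # how many times we actually paid the setup cost

def _expensive_setup():
    # Stand-in for connecting to a database, loading config, etc.
    global _init_count
    _init_count += 1
    time.sleep(0.01)
    return {"connected": True}

def handler(event, context=None):
    global _client
    if _client is None:            # cold start: pay the setup cost once
        _client = _expensive_setup()
    return {"initializations": _init_count, "ok": _client["connected"]}
```

On a cold start the first invocation runs `_expensive_setup()`; every warm invocation afterwards reuses `_client` and skips it. Note this is an optimization only: the provider may destroy the container at any time, so nothing here can be relied on as durable state.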


FaaS functions have significant restrictions when it comes to local state, i.e., data stored in variables in memory, or data written to a local disk. Such storage is available to us, but there is no guarantee that the state persists across multiple invocations. We can’t assume that the state from one invocation of a function will be available to another invocation of the same function.

Thus, we say FaaS functions are stateless. Though it is more accurate to say that any state of a FaaS function that requires persistence has to be externalized outside the FaaS function instance. Such state-oriented functions make use of a database, a cross-application cache (e.g., Redis), or a network file/object store (e.g., S3) to store state across requests. In other words, this approach requires depending on external units outside of the container.
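The externalized-state pattern looks roughly like this. A module-level dict stands in for the external store (in a real deployment this would be a network call to Redis, DynamoDB, or similar; the key scheme is made up for the example):

```python
# Stand-in for an external store such as Redis. In production this would
# be a network round-trip; a dict is enough to show the pattern.
_external_store = {}

def count_visits(event, context=None):
    # Each invocation reads and writes state *outside* the function
    # instance, so the count survives even if the compute container
    # running this function is destroyed between requests.
    key = f"visits:{event['user']}"          # hypothetical key scheme
    _external_store[key] = _external_store.get(key, 0) + 1
    return {"user": event["user"], "visits": _external_store[key]}
```

The price of this pattern is exactly the problem discussed next: every invocation pays a network round-trip to the store, even for tiny pieces of state.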

Certainly, this brings us to a very interesting point, which has become the need of the hour: can we bring state to serverless functions? In other words, can we move from stateless serverless computing to stateful serverless computing? The answer is yes, and the result is a stateful serverless architecture.

Stateful Serverless

Lightbend is promoting and working on “Cloudstate – Next Generation Serverless”, an ongoing project that paves the way for Serverless 2.0. What challenges does it aim to solve? Well, there are two major challenges with serverless:

  1. FaaS functions and compute containers are ephemeral, stateless, and short-lived. This makes it problematic to build general-purpose, data-centric cloud-native applications, since it is simply too costly in terms of performance, latency, and throughput.
  2. FaaS functions have no direct accessibility; they can’t communicate directly with each other using point-to-point communication. They always need to resort to publish-subscribe, passing all data over some slow and expensive storage medium.

The Cloudstate architecture addresses the above-mentioned challenges with serverless. The Cloudstate reference implementation is built on top of Kubernetes, Knative, GraalVM, gRPC, and Akka, and there is a growing set of client API libraries for different languages. Since the Cloudstate project uses gRPC as its mode of communication, functions can be written in any language that supports gRPC. For more on the Cloudstate project, refer to the documentation linked in the references below.

That’s all for now, but this is just the beginning. Stay tuned for more content on Cloudstate in subsequent blogs.


We saw what it means to be serverless. Firstly, we saw the advantages of using serverless, or more specifically FaaS. Secondly, we generalized the notion of state in a serverless architecture. After that, we saw the areas that need to be addressed to build more general distributed systems on top of serverless. As a result, we conclude that stateless serverless is powerful, but what must come next is stateful serverless. So, to fulfil this need, Lightbend came up with the project Cloudstate.

In conclusion, I hope that this blog gives us clarity on what we have right now in the form of Serverless Computing, and what to achieve next. Hopefully, this post helps. Above all, don’t forget to add your doubts or suggestions in the comments section. 🙂 Until next time.


  1. Project Cloudstate – Lightbend

Written by 

Prashant is a Senior Software Consultant with more than 5 years of experience, both in service development and client interaction. He is familiar with object-oriented programming paradigms, has worked with Java- and Scala-based technologies, and has experience with the reactive technology stack, following the agile methodology. He is currently involved in creating reactive microservices, some of which are already live and running successfully in production; he has also worked on scaling these microservices by implementing the best practices of Apigee Edge. He is a good team player, currently managing a team of 4 people. He is always eager to learn new and advanced concepts in order to expand his horizons and apply them in project development. His hobbies include teaching, playing sports, cooking, watching sci-fi movies, and traveling with friends.