Today we are going to talk about a very important aspect of reactive web application development. It isn't from an alien world; we talk about it all the time, but we rarely want to deal with it. It's the cache. In today's mammoth scalable architectures, we are mostly preoccupied with big design issues, and here and there we neglect the benefits of a very useful concept: caching.
Here, our superman of the reactive world, Lightbend Inc., has given us a solution to this problem as well: Akka-HTTP caching. It is built on top of Caffeine, which is in itself a highly efficient Java 8 based caching library, and it gives us the ability to implement caching in highly concurrent, asynchronous environments. That is what makes it special, because scale and concurrency are inseparable when building a robust application that can handle zillions of users.
Akka-HTTP provides caching in two different forms: request-response caching (also called caching directives) and object caching. In this post, we will discuss object caching in Akka-HTTP.
Quite often, there is a requirement to cache objects produced by heavy computation that have to be served to many client requests. In such cases, the cache saves us from recomputing those objects again and again; instead, we can serve requests directly with the value that was stored in the cache when the first request arrived.
This is handled very well by the Akka-HTTP caching solution, backed by Caffeine under the hood.
Let’s roll up our sleeves and write some real code to explain it better.
This is the driver object of our Akka-HTTP application, containing the main method. We instantiate the Cache object here with the implementation class LfuCache, which implements a least-frequently-used (LFU) cache strategy. This is a frequency-based caching strategy, where eviction of a cached object depends on how frequently it is accessed. Internally, an access counter is maintained and incremented on each access to a cache entry. This counter saturates at 15, and when it would be incremented further, all counters are downsampled (halved). This effectively tracks access frequency over a small time window, and it allows the counter to be stored in only 4 bits, saving space. For its admission policy it follows TinyLFU (based on Bloom-filter-style sketches), a very efficient, window-based approach that decides which keys are admitted to the cache and which are evicted. For further reading, you can refer to the TinyLFU paper.
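The counter mechanics described above can be sketched in plain Java. This is not Caffeine's actual implementation (Caffeine uses a CountMin-style sketch); it is a minimal illustration, assuming a simple per-key counter, of how saturating 4-bit counters plus periodic halving approximate "least frequently used" over a recent window:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch only: 4-bit frequency counters that saturate at 15 and are
// halved ("downsampled") whenever one would overflow, so old popularity
// fades and the counter still fits in 4 bits.
class FrequencySketch {
    private static final int MAX = 15;              // 4-bit counter ceiling
    private final Map<String, Integer> counts = new HashMap<>();

    void recordAccess(String key) {
        int c = counts.merge(key, 1, Integer::sum);
        if (c > MAX) {                              // would overflow 4 bits
            counts.replaceAll((k, v) -> v / 2);     // downsample every counter
        }
    }

    int frequency(String key) {
        return Math.min(counts.getOrDefault(key, 0), MAX);
    }
}
```

An eviction policy can then compare `frequency(candidate)` against `frequency(victim)` to decide which key deserves the cache slot.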
Akka-HTTP played a smart move and did not provide set and get as two separate methods. Instead, to both set and get cache entries, it exposes a higher-order function getOrLoad(key, loadValue), where loadValue is a function that computes the value corresponding to the key and returns it wrapped in a Future.
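The contract of getOrLoad can be simulated with nothing but `java.util.concurrent`. The sketch below is an assumption-level illustration, not Akka's code: a hypothetical `ObjectCache` whose single `getOrLoad` method returns the stored future on a hit and, on a miss, runs the supplied loader exactly once and stores its future:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.function.Function;

// Sketch of the getOrLoad contract: one method serves both reads and writes.
class ObjectCache<K, V> {
    private final ConcurrentMap<K, CompletableFuture<V>> store = new ConcurrentHashMap<>();

    CompletableFuture<V> getOrLoad(K key, Function<K, CompletableFuture<V>> loadValue) {
        // computeIfAbsent runs loadValue at most once per missing key
        return store.computeIfAbsent(key, loadValue);
    }

    void remove(K key) { store.remove(key); }       // e.g. to refresh an entry
}
```

Usage mirrors the Akka-HTTP style: `cache.getOrLoad("answer", k -> CompletableFuture.supplyAsync(() -> expensiveComputation(k)))` — the caller never has to decide whether this is a read or a write.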
It's not over yet; superheroes must take care of supervillains too. Here, we are talking about concurrent requests that all want to cache the same object: how is this situation handled?
The whole process is asynchronous, and it avoids caching multiple copies of the same object. On the first access, a Future for the value is put into the cache; every subsequent request gets that same Future back from the cache, whether it has already completed or is still running. The expensive computation therefore runs only once.
Here, every time a product is added to the cart, the total cart value is recomputed using the price of the newly added product, and the cache key CART_VALUE is correspondingly refreshed with the new value. But when the cart value is merely read, we do not recompute the total; we simply look up CART_VALUE in the cache and return it.
Here, computing the cart value stands in for a heavy computation task that we want to avoid running for every single request; instead, we serve it from the cache to avoid redundant work. In a real-life scenario, it could be a call to a third-party API that we want to skip whenever, per the business use case, the response has not changed.
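The cart scenario can be sketched end to end. The class below is hypothetical (the original post's Scala source is linked rather than inlined here): writes eagerly refresh the CART_VALUE entry, reads only look it up, and a counter makes it visible that the "heavy" total computation runs once per write, never per read:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Hypothetical cart: CART_VALUE is refreshed on every add, reads are cache hits.
class Cart {
    static final String CART_VALUE = "CART_VALUE";
    private final List<Double> prices = new ArrayList<>();
    private final ConcurrentMap<String, CompletableFuture<Double>> cache = new ConcurrentHashMap<>();
    int computations = 0;                         // exposed for the demo only

    void addProduct(double price) {
        prices.add(price);
        // write path: recompute once and refresh the cached total
        cache.put(CART_VALUE, CompletableFuture.completedFuture(computeTotal()));
    }

    CompletableFuture<Double> cartValue() {
        // read path: never recompute, only look up (loader covers a cold cache)
        return cache.computeIfAbsent(CART_VALUE,
            k -> CompletableFuture.completedFuture(computeTotal()));
    }

    private double computeTotal() {               // stands in for the heavy task
        computations++;
        return prices.stream().mapToDouble(Double::doubleValue).sum();
    }
}
```

Adding two products triggers two recomputations; any number of subsequent `cartValue()` reads trigger none.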
For the complete application with details, please click here.
For further reading, please see Lightbend Inc.'s official Akka-HTTP documentation here.