Lagom 1.4: What’s new?

Lagom is a reactive microservice framework from Lightbend that is fast becoming a go-to choice for building microservice-based architectures. In case you still haven't heard of it, this blog post will not only familiarize you with the idea behind Lagom but will also walk you through some of Lagom's history since its 1.0 launch, what's new in the latest stable release, and some budding ideas that you might see in future releases.

Are we building Microservices or Microliths?

A common mistake people make when migrating from a monolith to microservices is that they end up creating microliths. "Microlith" is a term coined by Jonas Bonér for what is essentially a distributed monolith: we simply split our monolithic service into smaller services and let them communicate with each other directly, through REST endpoints. In doing so, we still create a dependency between the services, such that if service A consumes real-time data from service B and service B goes down, then essentially service A goes down with it.

What is a microservice, really?

The core idea behind a microservice-based architecture is that the services should be fully decoupled. To achieve that, we need to focus on building an event-driven system. That is, we don't let our services communicate directly. Instead, we introduce a broker in between, which consumes the events produced by one service and publishes them to another. In doing this, we decouple our services and keep the events in the broker instead of communicating via direct REST calls, which ultimately builds up the resiliency of the services. So if service A consumes events from service B and service B goes down, service A will not receive any new events, but it can still continue to consume the events that service B already published to the broker before it went down, and thus continue to function.
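To make this concrete, here is a minimal sketch of that broker-mediated pattern using Lagom's Scala Topic API (more on Lagom's APIs later in the post). The OrderService trait, the OrderPlaced event, and the topic name are purely illustrative.

```scala
import akka.Done
import akka.stream.scaladsl.Flow
import com.lightbend.lagom.scaladsl.api._
import com.lightbend.lagom.scaladsl.api.broker.Topic
import play.api.libs.json.{Format, Json}

// Hypothetical event published by service B.
case class OrderPlaced(orderId: String)
object OrderPlaced {
  implicit val format: Format[OrderPlaced] = Json.format
}

// Service B declares a topic in its descriptor; published events end up in the broker.
trait OrderService extends Service {
  def orderEvents: Topic[OrderPlaced]

  override def descriptor = {
    import Service._
    named("orders").withTopics(topic("order-events", orderEvents))
  }
}

// Service A subscribes via the broker, not via service B directly, so events
// that were already published remain consumable even if service B is down.
class OrderEventConsumer(orderService: OrderService) {
  orderService.orderEvents.subscribe.atLeastOnce(
    Flow[OrderPlaced].map { event =>
      // update service A's own state or read model here
      Done
    }
  )
}
```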

Why go for Lagom when we have Akka and Play?

We have already seen that the best way to implement microservices is to embrace event-first design. Now the question arises: why introduce Lagom when we already have Akka and Play available to us? What was the need for developing Lagom when it is itself built entirely on those two existing libraries? The reason is straightforward. Even though Akka and Play are very strong and self-sufficient for building a microservice, they are both designed to be used independently and do not prescribe the key principles of microservice architecture. Secondly, if we build microservices directly on Akka and Play, we have to handle all the intermediate steps ourselves: serialization, event sourcing, persistence, broker configuration, and so on. These steps are essentially the same every time we set up a microservice, and in a microservice environment we are always building more than just one or two services. Developers building microservices directly on Akka and Play therefore end up adding a huge amount of boilerplate code to their services, which is a redundant effort and requires in-depth knowledge of both libraries.

The Lagom framework puts all those pieces together and gives us a simple set of libraries for creating our microservices without repeatedly digging deep into Akka and Play. It removes most of the boilerplate involved in developing an architecture made up of multiple microservices.

Any more perks or just a common library?

Lagom: Not too little, not too much. Just right.

In addition to reducing the boilerplate we have to write and the prerequisite knowledge we need, Lagom also comes with a fully integrated developer experience that saves us the hassle of starting each piece one by one in separate terminals; depending on our use case, those pieces may include our microservices, Kafka, Cassandra, and so on.

Lagom provides us with:

  • A persistence API built on top of Akka Persistence that lets us focus on domain modeling,
  • The Topic API, which initially targeted only Kafka but was redesigned so that other brokers can be plugged in, and
  • The Service Descriptor API, a typesafe way of describing and connecting to different services, from which Lagom can derive clients (a small descriptor sketch follows this list).
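Picking up that last point, here is a minimal sketch of a service descriptor and the typed client it implies, using Lagom's Scala API; the GreetingService name and the /api/hello path are made up for the example.

```scala
import akka.NotUsed
import com.lightbend.lagom.scaladsl.api._

// The same trait serves as both the service interface and, via the
// descriptor, the definition from which Lagom derives a typed client.
trait GreetingService extends Service {
  def hello(id: String): ServiceCall[NotUsed, String]

  override def descriptor = {
    import Service._
    named("greeting")
      .withCalls(pathCall("/api/hello/:id", hello _))
      .withAutoAcl(true)
  }
}
```

Another Lagom service can then obtain a typed client for this interface (for example via serviceClient.implement[GreetingService] in its application wiring, assuming the standard Lagom Scala setup) and call hello("alice").invoke() to get back a Future[String], with no hand-written HTTP plumbing.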

So, what did we mean by an integrated developer experience? When we build microservices, we essentially never develop just one service; we develop at least two, and most of the time many more. As developers, we want to be able to integrate these services easily and test them while we implement them. How nice would it be if we could just open our IDE, write the code for all those different microservices, start them all together on our machine with a single command, and test the end-to-end processing, without the hassle of deployment configuration and integration with external dependencies just to test the logic?

Lagom makes all of this possible. For instance, say we want to use Cassandra for event sourcing and Kafka as our broker. When we start a new Lagom project with, say, two services, the Lagom development environment fires up an embedded Cassandra and an embedded Kafka for us, so while we develop and test our logic, the dependencies we will need later are already in place. And if we intend to work with a relational database, Lagom lets us use an in-memory H2 database for the development and testing environment. So, without worrying about firing up external dependencies such as Kafka or Cassandra, we just run the lagom:runAll command on our console and test our APIs. Lagom takes care of it all, giving us a smooth developer experience.
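For a rough idea of how this looks in practice, here is a sketch of a build.sbt for such a project. The project name and the exact settings are illustrative; they assume the Lagom sbt plugin and its defaults in the 1.4 line, so treat the key names as a guide rather than gospel.

```scala
// build.sbt (sketch): a service wired for Cassandra event sourcing and the
// Kafka broker. `sbt lagom:runAll` starts the services together with the
// embedded Cassandra and Kafka managed by the development environment.

// Both are on by default; shown here only to make the behaviour explicit.
lagomCassandraEnabled in ThisBuild := true
lagomKafkaEnabled in ThisBuild := true

// Start the embedded Cassandra from a clean slate on each run.
lagomCassandraCleanOnStart in ThisBuild := true

lazy val `order-impl` = (project in file("order-impl"))
  .enablePlugins(LagomScala)
  .settings(
    libraryDependencies ++= Seq(
      lagomScaladslPersistenceCassandra,
      lagomScaladslKafkaBroker
    )
  )
```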

Lagom 1.4: So finally, what’s new?

The history of Lagom is not that long, but over the past 20 months Lightbend has published five minor releases of Lagom with small but significant feature upgrades and bug fixes. Let's walk through some major additions over that time before we discuss what Lagom 1.4 has in store.

Evolution

The first Lagom release was limited to SBT as the build tool and Cassandra as its database. In the v1.1 releases, Maven support was added, because most of the Java community works with Maven projects. Later, in v1.2, the Broker API was introduced, with an implementation for Kafka, and JDBC support was also added along the way to cover relational databases. In the v1.3 releases, Lightbend added support for JPA as well, to make it easier to consume events into tables in a relational database. In v1.3, Lightbend also introduced the Scala API, since the first Lagom release only supported the Java API.

Lagom 1.4 brings a lot of features and bug fixes, most of which are improvements over past versions that are not immediately visible on the surface.

  • Lagom has moved to Akka 2.5 and Play 2.6. Akka 2.5 brings a lot of new features that benefit a Lagom service. For instance, when we shut a Lagom service down, it is very important that it shuts down gracefully, especially when we are working in a clustered environment, because in a cluster we want to control how entities are moved from one node to another. So, in v1.4, Lagom's shutdown sequence is based on Akka's coordinated shutdown, a new feature introduced in Akka 2.5 (a sketch of hooking into it follows this list).
  • Also, when we deploy Lagom in a cluster, the persistent entities are spread across the nodes, a feature based on Akka Cluster Sharding. Cluster sharding keeps coordination data that has to be shared between the nodes. In Lagom 1.4 that data can be shared using CRDTs (Conflict-free Replicated Data Types), giving eventually consistent shard placement and global availability via node-to-node replication and automatic conflict resolution; this, too, is new in Akka 2.5. (If you are building a new application on Lagom 1.4, distributed data is the recommended way to manage this cluster state, a big improvement over the previous journal-based approach.)
  • Lagom 1.4 also adds Slick read-side support. Lagom follows CQRS, so we have a write side and a read side, and the events get persisted in the journal. Before 1.4 we had JDBC support and JPA support, but as Scala developers we had to write our own Slick code to achieve the same thing. With v1.4 we still write the code for our tables, but with Slick read-side support in place we can simply create a read-side processor based on Slick, and Lagom takes care of the offset management for us (a read-side processor sketch follows this list).
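Picking up the first point above, this is roughly how application code can hook into Akka 2.5's coordinated shutdown, the mechanism Lagom 1.4's shutdown sequence is built on; the phase and task names below are only examples.

```scala
import akka.Done
import akka.actor.{ActorSystem, CoordinatedShutdown}
import scala.concurrent.Future

class ShutdownHooks(system: ActorSystem) {
  // Register a task in one of the predefined shutdown phases; Akka runs all
  // tasks of a phase and waits for them before moving on to the next phase.
  CoordinatedShutdown(system).addTask(
    CoordinatedShutdown.PhaseServiceRequestsDone, "drain-in-flight-work") { () =>
    // finish or hand off in-flight work before the node leaves the cluster
    Future.successful(Done)
  }
}
```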

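And for the last point, here is a minimal sketch of what a Slick-based read-side processor might look like with the Scala API; the event type, table, and read-side identifier are invented for the example, and the exact signatures should be checked against the Lagom 1.4 documentation.

```scala
import com.lightbend.lagom.scaladsl.persistence._
import com.lightbend.lagom.scaladsl.persistence.slick.SlickReadSide
import slick.jdbc.H2Profile.api._

// Hypothetical persisted event emitted by an "invoice" entity.
sealed trait InvoiceEvent extends AggregateEvent[InvoiceEvent] {
  override def aggregateTag: AggregateEventTag[InvoiceEvent] = InvoiceEvent.Tag
}
object InvoiceEvent {
  val Tag: AggregateEventTag[InvoiceEvent] = AggregateEventTag[InvoiceEvent]
}
final case class InvoiceIssued(invoiceId: String) extends InvoiceEvent

// Slick table holding the read-side projection.
class InvoicesTable(tag: Tag) extends Table[String](tag, "invoices") {
  def invoiceId = column[String]("invoice_id", O.PrimaryKey)
  def * = invoiceId
}

class InvoiceReportProcessor(readSide: SlickReadSide)
    extends ReadSideProcessor[InvoiceEvent] {

  private val invoices = TableQuery[InvoicesTable]

  // We describe the schema and how each event updates the table;
  // offset tracking is handled by Lagom's Slick read side.
  override def buildHandler(): ReadSideProcessor.ReadSideHandler[InvoiceEvent] =
    readSide
      .builder[InvoiceEvent]("invoice-report")
      .setGlobalPrepare(invoices.schema.create)
      .setEventHandler[InvoiceIssued] { envelope =>
        invoices += envelope.event.invoiceId
      }
      .build()

  override def aggregateTags: Set[AggregateEventTag[InvoiceEvent]] =
    Set(InvoiceEvent.Tag)
}
```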
Other upgrades include support for the Akka HTTP server engine, the inclusion of RDBMS persistence in the Lagom TestKit, support for Scala 2.12, and many more. If you want to migrate your project from Lagom v1.3 to v1.4, refer to the official migration guide.

What’s in store for future releases?

Version 1.4 took longer to release than its predecessors because it focused on improving how the services work under the hood. In the future, Lagom is expected to see more frequent upgrades, with further improvements to the developer experience. Also, at present, if we try to embed the Lagom Persistence API in a non-Lagom project, we end up with dependency conflicts in our Play/Akka projects. Future upgrades will also aim at making the Persistence API an independent library, which will allow us to take our good old Java project, introduce the Lagom Persistence API, and build a CQRS system within it.

The Lagom community is growing fast and is very welcoming of suggestions and open-source contributions of any kind. To explore more, visit the Lagom website, and for suggestions and queries, feel free to dive into the Lagom Gitter channel.

If you have any questions or suggestions for me, please drop a comment below and I'll be happy to answer. And if you found this useful, do like and share this blog; we are all here to learn! Cheers. 🙂

References:

  1. Lightbend podcast: What’s New For Reactive Microsystems In Lagom 1.4
  2. Lagom 1.4 Migration Guide
