A Beginner’s Guide to Deploying a Lagom Microservice on Kubernetes


Both Lagom and Kubernetes are gaining popularity fast. Lagom is an open-source framework for building reactive microservice systems in Java/Scala, and Kubernetes (K8s for short) is an open-source system for automating the deployment, scaling, and management of containerized applications. Together they make an excellent stack for developing production-grade Reactive microservices.

We have already published a number of blog posts on Lagom on this site.

In this blog post, we will take a closer look at the steps needed to deploy our Lagom application, built using Java and Maven, to Kubernetes.

We all know that Lagom is a distributed microservices framework, which means it can be deployed over a cluster of machines that interact with each other to handle requests, read/write data, and maintain state. For this, Lagom relies on Akka Cluster, which provides a fault-tolerant, decentralized, peer-to-peer cluster with no single point of failure. Since the cluster is peer-to-peer, communication between all the machines has to be ensured. Let's see how to do that through our Lagom Restaurant example.

Step 1: Add Akka Cluster Manager & Bootstrap dependencies

We start by adding the Akka Management & Cluster Bootstrap dependencies to the pom.xml file:

<!-- akka management (cluster formation) -->
<dependency>
    <groupId>com.lightbend.akka.management</groupId>
    <artifactId>akka-management_2.12</artifactId>
    <version>0.18.0</version>
</dependency>
<dependency>
    <groupId>com.lightbend.akka.management</groupId>
    <artifactId>akka-management-cluster-bootstrap_2.12</artifactId>
    <version>0.18.0</version>
</dependency>

Akka Management is a suite of tools for operating Akka Clusters, whereas Akka Cluster Bootstrap helps form a cluster by using Akka Discovery to discover peer nodes. It is an alternative to configuring static seed nodes in dynamic deployment environments such as Kubernetes or AWS. It builds on the flexibility of Akka Discovery, leveraging a range of discovery mechanisms depending on the environment we want to run our cluster in. This leads us to our next step, i.e., adding the Akka Discovery suite.
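For comparison, this is the kind of static seed-node configuration that Cluster Bootstrap replaces; a hypothetical sketch, assuming our actor system is named menu and uses classic remoting on port 10001 (the pod IPs below are made up, which is exactly the problem in a dynamic environment):

# Hypothetical static seed nodes -- impractical when pod IPs
# change on every deployment
akka.cluster.seed-nodes = [
  "akka.tcp://menu@10.0.0.1:10001",
  "akka.tcp://menu@10.0.0.2:10001"
]

With Cluster Bootstrap, no such list is needed; peers are discovered at runtime instead.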

Step 2: Add the Akka Discovery suite

Akka Discovery provides a simple interface around various ways of locating services, such as DNS, static configuration, or key-value stores like ZooKeeper and Consul. For our Lagom example, we will use the DNS method to locate services on Kubernetes. So, we have to add the following dependency to the pom.xml:

<!-- akka discovery (dns) -->
<dependency>
    <groupId>com.lightbend.akka.discovery</groupId>
    <artifactId>akka-discovery-dns_2.12</artifactId>
    <version>0.18.0</version>
</dependency>

Step 3: Add the Reactive Lib tool

Reactive Lib (reactive-lib) is a component of Lightbend Orchestration for Kubernetes, a developer-centric suite of tools that helps us deploy Reactive Platform applications to Kubernetes or DC/OS. So, we have to add the following dependencies to the pom.xml:

<!-- for reactive-lib kubernetes api -->
<dependency>
    <groupId>com.lightbend.lagom</groupId>
    <artifactId>api-tools_2.12</artifactId>
    <version>1.4.8</version>
</dependency>
<dependency>
    <groupId>com.lightbend.rp</groupId>
    <artifactId>reactive-lib-akka-cluster-bootstrap_2.12</artifactId>
    <version>0.9.2</version>
</dependency>
<!-- reactive-lib service locator -->
<dependency>
    <groupId>com.lightbend.rp</groupId>
    <artifactId>reactive-lib-service-discovery-lagom14-java_2.12</artifactId>
    <version>0.9.2</version>
</dependency>

Now, we are all set in terms of the dependencies required to run our Lagom application on Kubernetes.

Step 4: Update Configuration

The next step is to configure our Lagom application. For this, we have to add the following content to the application.conf file:

play {
  akka.actor-system = menu
  modules.enabled += com.knoldus.lagom.sample.restaurant.menu.impl.MenuModule
  http.secret.key = none
}

lagom.persistence.ask-timeout = 10s

menu.cassandra.keyspace = menu

cassandra.default {
  ## list the contact points here
  contact-points = ["10.0.2.2", "10.0.2.2", "10.0.2.2"]
  ## override Lagom’s ServiceLocator-based ConfigSessionProvider
  session-provider = akka.persistence.cassandra.ConfigSessionProvider
}

cassandra-journal {
  contact-points = ${cassandra.default.contact-points}
  session-provider = ${cassandra.default.session-provider}
  keyspace = ${menu.cassandra.keyspace}
}

cassandra-snapshot-store {
  contact-points = ${cassandra.default.contact-points}
  session-provider = ${cassandra.default.session-provider}
  keyspace = ${menu.cassandra.keyspace}
}

lagom.persistence.read-side.cassandra {
  contact-points = ${cassandra.default.contact-points}
  session-provider = ${cassandra.default.session-provider}
  keyspace = ${menu.cassandra.keyspace}
}

lagom.circuit-breaker {
  default {
    # Enable/Disable circuit breaker.
    enabled = on
    # Number of failures before opening the circuit.
    max-failures = 10
    # Duration of time in open state after which to attempt to close
    # the circuit, by first entering the half-open state.
    reset-timeout = 30s
    # Duration of time after which to consider a call a failure.
    call-timeout = 30s
  }
}

lagom.persistence.read-side {
  # how long should we wait when retrieving the last known offset
  offset-timeout = 5s
  # Exponential backoff for failures in ReadSideProcessor
  failure-exponential-backoff {
    # minimum (initial) duration until processor is started again
    # after failure
    min = 3s
    # the exponential back-off is capped to this duration
    max = 30s
    # additional random delay is based on this factor
    random-factor = 0.2
  }
  # The amount of time that a node should wait for the global prepare callback to execute
  global-prepare-timeout = 30s
  # Specifies that the read side processors should run on cluster nodes with a specific role.
  # If the role is not specified (or empty) all nodes in the cluster are used.
  run-on-role = ""
  # The Akka dispatcher to use for read-side actors and tasks.
  use-dispatcher = lagom.persistence.dispatcher
}

akka {
  actor {
    provider = cluster
  }
  cluster {
    shutdown-after-unsuccessful-join-seed-nodes = 40s
  }
  discovery {
    method = kubernetes-api
    kubernetes-api {
      pod-label-selector = "app=menu"
    }
  }
  io {
    dns {
      resolver = async-dns
      async-dns {
        provider-object = com.lightbend.rp.asyncdns.AsyncDnsProvider
        resolve-srv = true
        resolv-conf = on
      }
    }
  }
  management {
    http {
      hostname = ${?POD_IP}
      port = 10002
      bind-hostname = 0.0.0.0
      bind-port = 10002
    }
    cluster.bootstrap {
      contact-point-discovery {
        required-contact-point-nr = 1
      }
    }
  }
  remote.netty.tcp {
    hostname = ${?POD_IP}
    port = 10001
    bind-hostname = 0.0.0.0
    bind-port = 10001
  }
}

lagom.cluster.exit-jvm-when-system-terminated = on

play.modules.enabled += com.lightbend.rp.servicediscovery.lagom.javadsl.ServiceLocatorModule

play.server.http {
  address = 0.0.0.0
  port = 9000
}

The above configuration tells our Lagom application to use the Kubernetes API (kubernetes-api) for Akka Discovery, along with the ports, contact points, and other details needed to form the cluster.
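As a quick sanity check of these settings, a small sketch like the one below (a hypothetical ConfigCheck class, not part of the example repo) can load application.conf via the Typesafe Config library and print the keys that drive cluster formation:

import com.typesafe.config.Config;
import com.typesafe.config.ConfigFactory;

// Hypothetical helper: loads application.conf from the classpath
// and prints the settings that drive cluster formation on Kubernetes.
public final class ConfigCheck {
    public static void main(final String[] args) {
        final Config config = ConfigFactory.load();
        System.out.println("discovery method   : " + config.getString("akka.discovery.method"));
        System.out.println("pod label selector : " + config.getString("akka.discovery.kubernetes-api.pod-label-selector"));
        System.out.println("management port    : " + config.getInt("akka.management.http.port"));
        System.out.println("remoting port      : " + config.getInt("akka.remote.netty.tcp.port"));
    }
}

Running it against the configuration above should print kubernetes-api, app=menu, 10002, and 10001 respectively.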

Step 5: Bind Akka Cluster Manager & Bootstrap to Module

Finally, we have to bind Akka Management and Cluster Bootstrap in our Lagom application's module file, like this:

package com.knoldus.lagom.sample.restaurant.menu.impl;

import akka.actor.ActorSystem;
import akka.management.AkkaManagement$;
import akka.management.cluster.bootstrap.ClusterBootstrap$;
import com.google.inject.AbstractModule;
import com.google.inject.Inject;
import com.knoldus.lagom.sample.restaurant.menu.api.MenuService;
import com.lightbend.lagom.javadsl.server.ServiceGuiceSupport;
import com.typesafe.config.Config;
import play.Application;
import play.Environment;

public final class MenuModule extends AbstractModule implements ServiceGuiceSupport {

    private final Environment environment;
    private final Config config;

    public MenuModule(final Environment environment, final Config config) {
        this.environment = environment;
        this.config = config;
    }

    @Override
    protected void configure() {
        // Start Akka Management and Cluster Bootstrap eagerly, but only in production mode.
        if (environment.isProd()) {
            bind(AkkaManagerAndClusterStarter.class).asEagerSingleton();
        }
        bindService(MenuService.class, MenuServiceImpl.class);
    }

    static class AkkaManagerAndClusterStarter {

        @Inject
        AkkaManagerAndClusterStarter(final Application application, final ActorSystem actorSystem) {
            if (application.isProd()) {
                AkkaManagement$.MODULE$.get(actorSystem).start();
                ClusterBootstrap$.MODULE$.get(actorSystem).start();
            }
        }
    }
}

Now, we are all set in terms of code. Note that Akka Management and Cluster Bootstrap are started only in production mode, so local development remains unaffected. The only part left is to create a Docker image of our Lagom application and deploy it on Kubernetes. For demo purposes, we are going to use Minikube.

Step 6: Start Minikube

# Start Minikube
(minikube delete || true) &>/dev/null && minikube start --memory 2048 && eval $(minikube docker-env)

The above command deletes any previously started Minikube, starts a new instance with 2 GB of memory, and exports the environment variables needed to use Minikube's Docker daemon.
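Before moving on, it is worth confirming that the cluster is actually up and that kubectl is pointing at it:

# Verify Minikube and the Kubernetes cluster are up
minikube status
kubectl cluster-info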

Step 7: Create Docker Image

Now, we have to create the Docker image of our Lagom application. In our Lagom Restaurant example, we are using Maven Docker Plugin to ease our work:

# Build Docker Image
eval $(minikube docker-env)
mvn clean package docker:build -P kubernetes

Here we are building the Docker image with the kubernetes Maven profile. Keeping the Kubernetes-specific setup in a profile makes the application flexible enough to be deployed on other environments too, such as Marathon, DC/OS, or AWS.
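Once the build finishes, the image should be visible inside Minikube's Docker daemon (the image name menu-impl matches the one referenced in our deployment descriptor below):

# Verify the image exists in Minikube's Docker daemon
docker images | grep menu-impl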

Step 8: Run Lagom Application

The last step is to run the Lagom application's image inside Minikube. For this, we first have to apply Role-Based Access Control (RBAC) on K8s, which lets us exercise fine-grained control over how users and service accounts access the API resources running on our K8s cluster.

---
#
# Create a role, `pod-reader`, that can list pods and
# bind the default service account in the `default` namespace
# to that role.
#
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: pod-reader
rules:
  - apiGroups: [""] # "" indicates the core API group
    resources: ["pods"]
    verbs: ["get", "watch", "list"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: read-pods
subjects:
  # Note the `name` line below. The first `default` refers to the namespace. The second refers to the service account name.
  # For instance, `name: system:serviceaccount:myns:default` would refer to the default service account in namespace `myns`.
  - kind: User
    name: system:serviceaccount:default:default
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io

So, just apply RBAC via the following command:

# Create RBAC
kubectl create -f lagom-on-k8s-rbac.yaml
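We can verify that the binding took effect by asking Kubernetes whether the default service account is now allowed to list pods:

# Should print "yes"
kubectl auth can-i list pods --as=system:serviceaccount:default:default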

Next, we have to create our Lagom application’s configuration:

---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: menu
  labels:
    app: menu
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: menu
    spec:
      containers:
        - image: "menu-impl:latest"
          imagePullPolicy: IfNotPresent
          name: menu
          ports:
            - containerPort: 9000
            - containerPort: 10001
              name: "akka-remote"
            - containerPort: 10002
              name: "akka-mgmt-http"
---
apiVersion: v1
kind: Service
metadata:
  name: menu
  labels:
    app: menu
spec:
  ports:
    - name: "http"
      port: 9000
      nodePort: 31001
      targetPort: 9000
    - name: "akka-remote"
      port: 10001
      protocol: TCP
      targetPort: 10001
    - name: "akka-mgmt-http"
      port: 10002
      protocol: TCP
      targetPort: 10002
  selector:
    app: menu
  type: NodePort

And apply it using the following command:

# Apply configuration
kubectl create -f lagom-k8s-app.yaml
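The pod can take a little while to start. We can watch it come up and, if needed, inspect the bootstrap logs (replace <pod-name> with the name printed by the first command):

# Check the pod status and logs
kubectl get pods -l app=menu
kubectl logs -f <pod-name>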

Our Lagom application is now up and running, which we can verify by hitting http://192.168.99.100:31001/menu (192.168.99.100 being Minikube's usual default IP).
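If the Minikube VM got a different IP, we can look it up and query the service with curl:

# Query the menu service via its NodePort
curl http://$(minikube ip):31001/menu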

Conclusion

The Akka Management & Cluster Bootstrap and Reactive Lib suites have made it very simple to deploy our Lagom applications on Kubernetes or DC/OS. All it takes is a few dependencies and configurations, and we are good to go.

The above process is for Java/Maven Lagom applications. Readers looking for Scala/sbt Lagom applications should refer to the blog post written by Yannick De Turck, Lagom 1.4 and Kubernetes Orchestration, which explains the process very nicely. And readers looking for a working example can refer to the code repo: lagom-on-k8s.

I hope you found this blog post informative. If you have any feedback then please leave a comment below.


Written by

Himanshu Gupta is a software architect with more than 9 years of experience. He is always keen to learn new technologies, and is interested not only in programming languages but in data analytics too. He has sound knowledge of machine learning and pattern recognition, and believes that the best results come when everyone works as a team. He likes listening to music, watching movies, and reading science fiction books in his free time.
