A Beginner’s Guide to Deploying a Lagom Service Without ConductR


How do you deploy a Lagom service without ConductR? This question has been asked and answered on many different forums. For example, take a look at this question on StackOverflow – Lagom without ConductR? – where the user wants to know whether it is possible to run Lagom in production without ConductR. The best answer that came up was: “Yes, it is possible!” There are other forums, too, where we can find an answer to this question.

However, most of them only give a hint or redirect us to Lagom’s documentation, i.e., Lagom’s Production Overview. None of them provides a one-stop solution that is as easy to use as running a Java program from the command line.

So, we decided to find a solution and share it. In this blog post, we will guide you through deploying a Lagom microservice in production, without ConductR, using a simple java -cp command. Let’s take a look at the steps.

Step One – Configuring Cassandra Contact Points

If you plan to use dynamic service location for your services but need to locate Cassandra statically, which is usually the case in production, then modify the application.conf of your service: disable Lagom’s ConfigSessionProvider and fall back to the one provided by akka-persistence-cassandra, which uses the endpoints listed under contact-points. Your Cassandra configuration should look something like this:

cassandra.default {
  ## list the contact points here
  contact-points = ["127.0.0.1"]
  ## override Lagom’s ServiceLocator-based ConfigSessionProvider
  session-provider = akka.persistence.cassandra.ConfigSessionProvider
}

cassandra-journal {
  contact-points = ${cassandra.default.contact-points}
  session-provider = ${cassandra.default.session-provider}
}

cassandra-snapshot-store {
  contact-points = ${cassandra.default.contact-points}
  session-provider = ${cassandra.default.session-provider}
}

lagom.persistence.read-side.cassandra {
  contact-points = ${cassandra.default.contact-points}
  session-provider = ${cassandra.default.session-provider}
}
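One thing the configuration alone does not cover is where the production process gets a service locator from, since that is something ConductR would normally provide. A common approach is to wire Lagom’s configuration-based service locator into the application loader. The following is only a minimal sketch under that assumption; MyService, MyServiceImpl and MyServiceLoader are placeholder names (not from our project), and it assumes the Lagom Scala API with Macwire:

import com.lightbend.lagom.scaladsl.client.ConfigurationServiceLocatorComponents
import com.lightbend.lagom.scaladsl.devmode.LagomDevModeComponents
import com.lightbend.lagom.scaladsl.server._
import com.softwaremill.macwire._
import play.api.libs.ws.ahc.AhcWSComponents

class MyServiceLoader extends LagomApplicationLoader {

  // In production there is no ConductR to locate other services, so mix in a
  // locator that reads static service locations from the "lagom.services" config block.
  override def load(context: LagomApplicationContext): LagomApplication =
    new MyServiceApplication(context) with ConfigurationServiceLocatorComponents

  // In development, keep using Lagom's dev-mode service locator.
  override def loadDevMode(context: LagomApplicationContext): LagomApplication =
    new MyServiceApplication(context) with LagomDevModeComponents
}

abstract class MyServiceApplication(context: LagomApplicationContext)
    extends LagomApplication(context)
    with AhcWSComponents {

  // Bind the service interface to its implementation; persistence and broker
  // traits (e.g. CassandraPersistenceComponents, LagomKafkaComponents) would be
  // mixed in here as well if the service uses them.
  override lazy val lagomServer: LagomServer = serverFor[MyService](wire[MyServiceImpl])
}

With such a locator in place, any service that still has to be looked up can be pinned in application.conf under lagom.services, for example lagom.services { some-service = "http://10.0.0.5:9000" } (hypothetical address).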

Step Two – Providing Kafka Broker settings

The next step is to provide the Kafka broker settings, which you need if you plan to use Lagom’s streaming services. Modify the application.conf of your service if Kafka has to be statically located, which is the case when your service acts only as a consumer; otherwise, you do not need the following configuration.

lagom.broker.kafka {
  service-name = ""
  brokers = "127.0.0.1:9092"

  client {
    default {
      failure-exponential-backoff {
        min = 3s
        max = 30s
        random-factor = 0.2
      }
    }

    producer = ${lagom.broker.kafka.client.default}
    producer.role = ""

    consumer {
      failure-exponential-backoff = ${lagom.broker.kafka.client.default.failure-exponential-backoff}
      offset-buffer = 100
      batching-size = 20
      batching-interval = 5 seconds
    }
  }
}
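For context, a service “acts only as a consumer” when it just subscribes to another service’s topic instead of publishing one. A rough, hypothetical sketch of such a subscription with Lagom’s Scala broker API looks like this; OtherService, greetingsTopic and GreetingMessage are placeholder names, not part of the configuration above:

import akka.Done
import akka.stream.scaladsl.Flow

// Hypothetical consumer-side wiring: `otherService` is the injected Lagom client
// for the producing service, and `GreetingMessage` is its (placeholder) message type.
class GreetingsConsumer(otherService: OtherService) {
  otherService
    .greetingsTopic()
    .subscribe
    .atLeastOnce(
      Flow[GreetingMessage].map { message =>
        // react to the event here (update a read-side table, call another service, ...)
        Done
      }
    )
}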

Step Three – Creating Akka Cluster

Finally, we need to form an Akka cluster ourselves. Since we are not using ConductR, cluster joining has to be handled by us. This can be done by listing the seed nodes in application.conf:

akka.cluster.seed-nodes = [
  "akka.tcp://MyService@host1:2552",
  "akka.tcp://MyService@host2:2552"
]

Now that we know what configuration our service needs, let’s look at the deployment steps. Since we are using just the java -cp command, we need to package our service and then run it. To simplify the process, we have created a shell script for it.

#!/bin/bash
set -e

echo "Going to App directory"
APP_DIR=/path/to/lagom-service
cd "$APP_DIR"

echo "Building Lagom dist"
sbt "project lagom-impl" "clean" "dist"

# Unpack the freshly built universal zip, replacing any previously unpacked dist
APP_UNIVERSAL=$APP_DIR/lagom-service/lagom-impl/target/universal
rm -rf "$APP_UNIVERSAL/lagom-impl-0.1-SNAPSHOT"
unzip "$APP_UNIVERSAL/lagom-impl-0.1-SNAPSHOT.zip" -d "$APP_UNIVERSAL"

echo "Setting configurations"
APP_LIB=$APP_UNIVERSAL/lagom-impl-0.1-SNAPSHOT/lib
APP_CLASSPATH=$APP_LIB/*   # the wildcard stays literal (quoted below) so the JVM expands it
JAVA_OPTS=""
JMX_CONFIG=""
PLAY_SECRET=none
CONFIG_FILE=/path/to/application.conf

# join-self is disabled so the node joins the configured seed nodes instead of forming its own cluster
CONFIG="-Dplay.crypto.secret=$PLAY_SECRET -Dlagom.cluster.join-self=off -Dorg.xerial.snappy.use.systemlib=true -Dconfig.file=$CONFIG_FILE"
PLAY_SERVER_START="play.core.server.ProdServerStart"

exec java -cp "$APP_CLASSPATH" $JAVA_OPTS $JMX_CONFIG $CONFIG $PLAY_SERVER_START

For a complete example, you can refer to our GitHub repo – Lagom Scala SBT Standalone project.

I hope you found this blog helpful. If you have any suggestions or questions, please comment below.



Written by Himanshu Gupta

Himanshu Gupta is a software architect with more than 9 years of experience. He is always keen to learn new technologies. He likes not only programming languages but data analytics too. He has sound knowledge of Machine Learning and Pattern Recognition. He believes that the best results come when everyone works as a team. He likes coding, listening to music, watching movies, and reading science fiction books in his free time.

2 thoughts on “A Beginner’s Guide to Deploying a Lagom Service Without ConductR”

  1. Hi Himanshu, thank you for sharing this blog. I am trying to deploy multiple Lagom services without ConductR and am following your blog for my project. I am having a hard time with service discovery for multiple services so that they can locate each other. What solution did you use for service discovery for multiple Lagom services? In the first phase I am planning to run all of the services on one machine, and I am trying to find a solution where each of my services can locate the others dynamically. If you have any suggestions or pointers, please share them with me. I would be really grateful if you could also mention which approach you used in your project for service discovery.

    Regards
    Qasim Raza
