The Twelve-Factor App principles with the Lagom framework



Well, the Twelve-Factor App principles are not new to software development. They were drafted by developers at Heroku and first presented by Adam Wiggins circa 2011. The methodology describes how to build software-as-a-service applications, and its best practices are designed to give applications portability and resilience when deployed to the web.


You can easily find a lot of blogs on the twelve-factor app principles with pretty good content, but my intention here is to share my experience of the Scala ecosystem in terms of the twelve-factor app principles. Lagom is a well-known framework for microservice-based architecture, and if you are familiar with it, you can map most of these concepts directly onto Lagom development and deployment.

  1. Codebase (“One codebase tracked in revision control, many deploys”)

    In terms of development, a large software system often consists of multiple services using various languages, technologies, and codebases. Though the number of codebases depends on multiple factors, it's always good to have a single codebase for developing multiple services. Of course, this has advantages as well as side effects.

    So let’s talk about the benefits of having a single codebase for all the services first:

    I. Less boilerplate code, as multiple services can share code such as the domain models. This way, the models are defined in a single module and multiple modules/services can use them according to their requirements and dependencies.

    II. Increased code re-usability, as we can define utilities, helper classes, common classes, and services in common modules and use them as dependencies in multiple places. This dramatically reduces development time and helps keep coding standards and practices uniform across the overall project.

    III. Easy to incorporate new standards, as we can define a lot of things in common modules that can be used by multiple modules. For example, we can define a Spark job as a class with certain parameters or configurations that can be used everywhere we need to run a Spark job. This way, introducing a new configuration, parameter, or standard becomes easy and can be reflected in multiple modules by making a change in one place only.

    IV. More reliable API handshakes, as a single model is used by multiple services, which reduces the chances of breaking API handshakes between them. For example, if the output of one service is the input of another service and the models are defined in a common module, then we need to make a change in one place only and it is reflected everywhere it is used.

    A single codebase has a few drawbacks as well:

    I. Large and difficult to manage, as large applications contain a lot of modules and services.
    II. Compilation, test runs, and builds take longer, which feels unnecessary when there are many services within the repository and only a few are under development.
    III. Potential side effects when making changes in modules that many other modules depend on. To overcome this, we need to maintain a well-defined test suite.
    IV. Overhead of breaking things down to a more granular level, as we need to split larger modules into smaller ones to make them more reusable.

    How does Lagom support a single codebase?

    We have seen the advantages and disadvantages of a single codebase. Now let’s talk about how the Lagom framework supports a single codebase for multiple microservices.

    Lagom is designed so that we can develop multiple microservices in parallel while keeping them dependent only at the service level. To overcome issues related to build time, it's good to write your own scripts for validating code changes and to customize build-tool pipelines according to the interrelated modules. Another approach, running only the test cases affected by the code changes, is also becoming popular these days.
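
    As a minimal sketch, a single build.sbt can host all the services of the system. The module and service names below are hypothetical; the point is that shared code lives in one module and every service is part of one build:

    // build.sbt sketch: one repository, several Lagom services (names are hypothetical).
    organization in ThisBuild := "com.example"
    scalaVersion in ThisBuild := "2.12.8"

    // Shared domain models and utilities live in one module...
    lazy val common = (project in file("common"))

    lazy val usersApi = (project in file("users-api"))
      .settings(libraryDependencies += lagomScaladslApi)
      .dependsOn(common)

    lazy val usersImpl = (project in file("users-impl"))
      .enablePlugins(LagomScala)
      .dependsOn(usersApi)

    // ...and more than one service can depend on them.
    lazy val ordersImpl = (project in file("orders-impl"))
      .enablePlugins(LagomScala)
      .dependsOn(usersApi, common)

    // The root project aggregates everything, so one `sbt test` covers all services.
    lazy val root = (project in file("."))
      .aggregate(common, usersApi, usersImpl, ordersImpl)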

  2. Dependencies (“Explicitly declare and isolate dependencies”)

    Dependencies are one of the most important parts of a microservice-based application architecture. Most of us are familiar with dependencies and how to use them in projects. Usually, developers follow the approach of isolating dependencies within a service by defining multi-layer module dependencies in a multi-module project.

    How to achieve explicitly declared and isolated dependencies in a Lagom project?

    Just like the usual Scala/Java multi-module projects, Lagom applications can have multiple modules, with the microservices depending on each other. The concept of defining dependencies in a Lagom project is pretty similar to a Java or Scala multi-module project.

    You can define and categorize dependencies using multi-layer modules to keep dependencies isolated and to segregate responsibilities as well. This way, you can pull in dependencies according to the requirements. Though this practice increases the overhead of distributing, aggregating, and maintaining dependencies within the project, it also provides pretty clear and isolated dependency segregation for multiple services.
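
    A hedged build.sbt fragment illustrating the idea (module names and version numbers are illustrative): each module explicitly declares only the libraries it needs, so a service's dependencies stay isolated from its neighbours':

    // build.sbt fragment (sketch): nothing leaks into this module implicitly.
    lazy val usersImpl = (project in file("users-impl"))
      .enablePlugins(LagomScala)
      .settings(
        libraryDependencies ++= Seq(
          lagomScaladslPersistenceCassandra,            // only this service talks to Cassandra
          lagomScaladslTestKit % Test,
          "org.scalatest" %% "scalatest" % "3.0.8" % Test
        )
      )
      .dependsOn(usersApi) // inter-service coupling stays at the API layer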

  3. Config (“Store config in the environment”)

    The configuration of an application should be stored in the environment itself, as committing credentials with code goes against best practices.

    How does Lagom handle configuration as environment variables?

    In Lagom applications, Typesafe Config provides a standard way to load your configs at runtime according to the requirements. A Lagom application uses an application.conf file to declare all your configurations, which can be defined at the module level or the service level according to the use case. If a configuration is specific to one module, it can be defined within that module only and overridden when required. Lagom applications are usually deployed using either ConductR (deprecated) or Marathon; the way of deployment can differ between use cases, but environment variables can be overridden with every method. Considering Marathon for deployment, a Scala Lagom application requires a JSON configuration that can be generated either by the rp tool or manually, with a single configuration file aggregating all the configuration from sub-modules.

    It's not necessary to commit that file with your code, as the rp tool can regenerate it on demand. Once you have the Marathon config ready, you can override your environment variables in the JSON and deploy the application using the Marathon JSON.
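
    Regardless of the deployment method, the underlying mechanism is Typesafe Config's environment-variable substitution. A minimal application.conf sketch (the key and variable names are made up for illustration):

    # application.conf sketch: defaults live in the file, the environment
    # can override them via Typesafe Config's ${?VAR} substitution.
    user-service {
      db-url = "jdbc:postgresql://localhost:5432/users"  # local default
      db-url = ${?USER_DB_URL}                           # wins if the env var is set
    }

    At runtime, ConfigFactory.load() picks up the file and any overrides, so the same build can run unchanged in every environment.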

  4. Backing services (“Treat backing services as attached resources”)

    Before drilling into this, we need to understand what a backing service is: any service that is consumed by the application as part of its normal operation is a backing service. It can be a database like Cassandra, a third-party service like Twitter, or any other microservice within the system.

    According to the twelve-factor app principle, the code of a twelve-factor app makes no distinction between local and third-party services. If your application needs a code change to point to a new location of the same service, it is not satisfying the fourth rule of the twelve-factor app. To achieve this, you need to develop your application in such a way that you can point to any location by changing only your configurations or environment variables.

    How does Lagom support backing services?

    As we have been discussing from the beginning, Lagom is developed to support microservice-based architecture, and it treats all the microservices as just services that can change their locations (hosts and ports) dynamically. Lagom provides a service locator to find the dynamic addresses of the services. In terms of databases, you can define the contact points using the configuration only. For customized third-party services, you should make sure that you define the contact points in the configuration only.
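
    A minimal sketch of the idea, assuming a hypothetical service named "user-service": the address is asked from the service locator at runtime rather than hard-coded.

    import java.net.URI
    import scala.concurrent.Future
    import com.lightbend.lagom.scaladsl.api.ServiceLocator

    // Host and port are resolved at call time, so the backing service
    // can move without any code change here.
    def resolveUserService(locator: ServiceLocator): Future[Option[URI]] =
      locator.locate("user-service")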

  5. Build, release, run (“Strictly separate build and run stages”)

    Code written in an IDE cannot be deployed directly to environments like testing, staging, or production; deployment takes place in several stages.
    1. Build: The code needs to be packaged in a way that it can easily be shipped to multiple environments, for example as a jar, bundle, or package. Generating the executable jar or bundle from the code is called a build.
    2. Release: Once we have the executable, in a microservice-based architecture we must also have configuration specific to the various environments. A combination of the build generated by the build stage and the configuration is called a release.
    3. Run: Once the release is generated, we can run specific processes to execute the release.

    How does Lagom provide the build, release, and run stages?

    Lagom provides these stages in different ways. You can deploy a Lagom application using either Marathon or Kubernetes, which are container orchestration tools. In a Lagom application, you can create a Docker image using a simple Docker script. To generate the configuration specific to the application, you can use the rp tool. Right now, the rp tool is available only for Scala-based Lagom applications; if you are working with Java, you can create the configuration manually.

    Once you have the Docker image, you can upload it to any Docker repository and fetch it according to the environment. Of course, one image can be used in multiple environments depending on the configuration provided, but it's good to separate your deployments based on environments. On Marathon or Kubernetes, you can paste the configuration with the Docker image URL/address to fetch the required image and start the application with a scale factor depending on the application load or traffic.
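
    For the build stage, Lagom's sbt setup builds on sbt-native-packager, so an image can be produced with `sbt docker:publishLocal`. A hedged build.sbt fragment (image name, registry, and tag are illustrative):

    // build.sbt fragment (sketch): packaging a service as a Docker image.
    lazy val usersImpl = (project in file("users-impl"))
      .enablePlugins(LagomScala)
      .settings(
        dockerBaseImage := "openjdk:8-jre-alpine",        // build: a self-contained image
        dockerRepository := Some("registry.example.com"), // where release images are pushed
        version in Docker := "1.2.3"                      // a release = immutable image tag + config
      )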

  6. Processes (“Execute the app as one or more stateless processes”)

    According to the twelve-factor app principle, the processes should be stateless and share-nothing. Any data that needs to persist must be stored in a stateful backing service, typically a database. The memory space or filesystem of the process should be used as a brief, single-transaction cache.

    How does Lagom support stateless processes?

    Lagom is based on the Akka actor model and also supports CQRS and event sourcing, both of which depend on the current state of the application, so state plays a very important role in a Lagom-based application. The situation is similar in other Scala-based applications as well, since Akka is one of the strongest Scala frameworks for handling concurrency and parallelism.

    A Lagom application can by default use multiple databases for storing state, events, and user data. The use of Akka Persistence makes things better in terms of persisting the current state and recovering it in case of failures and restarts. If your application stores state on its own, then you should take care that the application is able to fetch the state from the database at startup and to store it into the database before a graceful shutdown.
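
    A minimal sketch of a stateless service call, assuming a hypothetical UserRepository backed by a database; the process itself holds nothing between requests:

    import scala.concurrent.Future
    import akka.NotUsed
    import com.lightbend.lagom.scaladsl.api.ServiceCall

    final case class User(id: String, name: String)

    // Hypothetical read-side repository; the real one would wrap Cassandra, JDBC, etc.
    trait UserRepository {
      def findById(id: String): Future[User]
    }

    class UserServiceImpl(repo: UserRepository) {
      // Stateless: every request goes to the backing store, so any node can serve it.
      def getUser(id: String): ServiceCall[NotUsed, User] =
        ServiceCall { _ => repo.findById(id) }
    }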

  7. Port binding (“Export services via port binding”)

    Unlike applications that are executed inside a web server container, the twelve-factor app is completely self-contained and does not rely on runtime injection of a web server into the execution environment to create a web-facing service. The application exports HTTP as a service by binding to a port and listening for requests coming in on that port.

    How does Lagom provide port binding?

    In a Lagom application you implement a microservice, and by default you create an API for the service that can be bound to any port of your choice. This is common for all microservices developed in Lagom. Each service has a host and port assigned that can be accessed by a browser or REST clients.

    In build.sbt you can define the ports as follows:

    // Pin the service to a fixed port for the development environment.
    lazy val usersImpl = (project in file("usersImpl"))
      .enablePlugins(LagomJava)
      .settings(lagomServicePort := 11000)
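
    And a Scala sketch of the service side that gets bound to that port (service name and path are illustrative): the descriptor exports the call over HTTP on the bound port.

    import akka.NotUsed
    import com.lightbend.lagom.scaladsl.api.{Descriptor, Service, ServiceCall}
    import com.lightbend.lagom.scaladsl.api.Service._

    trait UsersService extends Service {
      def getUser(id: String): ServiceCall[NotUsed, String]

      override def descriptor: Descriptor =
        named("users").withCalls(
          // Served over HTTP on whatever port the service is bound to.
          pathCall("/api/users/:id", getUser _)
        )
    }
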
  8. Concurrency (“Scale-out via the process model”)

    In the twelve-factor app, processes are first-class citizens. The processes take strong cues from the Unix process model for running service daemons. Using this model, the developer can architect their app to handle diverse workloads by assigning different types of work to different process types. For example, HTTP requests may be handled by a web process, and long-running background tasks by a worker process.

    The share-nothing, horizontally partitionable nature of twelve-factor app processes means that adding more concurrency is a simple and reliable operation. So the application is easy to scale out.

    How does Lagom provide concurrency?

    Lagom is built on Akka, and handling concurrency in Lagom is a pretty easy task, depending on the use case. Scala and Java are JVM-based languages, and the JVM reserves its resources at application start. To provide concurrency in a Lagom application, a developer can choose between actors and futures; Lagom supports all the Akka features for highly concurrent program development. Using different execution contexts and dispatchers, we can easily separate CPU-bound and IO-bound tasks and execute them in a controlled way.
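
    A small sketch of that separation, assuming a dispatcher named blocking-io-dispatcher has been configured in application.conf:

    import akka.actor.ActorSystem
    import scala.concurrent.{ExecutionContext, Future}

    class ReportTasks(system: ActorSystem) {
      // CPU-bound work can stay on the default dispatcher...
      implicit val defaultEc: ExecutionContext = system.dispatcher
      // ...while blocking IO gets its own pool so it cannot starve the default one.
      private val blockingEc: ExecutionContext =
        system.dispatchers.lookup("blocking-io-dispatcher")

      def computeSummary(data: Seq[Int]): Future[Int] =
        Future(data.sum) // CPU-bound

      def readLargeFile(path: String): Future[String] =
        Future {
          val src = scala.io.Source.fromFile(path)
          try src.mkString finally src.close()
        }(blockingEc) // IO-bound, runs on the dedicated pool
    }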

    The most important point of the twelve-factor app is that the application should be easy to scale out. Lagom provides a pretty user-friendly way to scale out using scale factors during deployment; a Lagom application can be scaled easily by changing the configuration only.

  9. Disposability (“Maximize robustness with fast startup and graceful shutdown”)

    A twelve-factor app's processes can be started or stopped at a moment's notice. It should strive to minimize startup time, as a short startup time provides more agility for the release process and scaling up, and it aids robustness. The processes should shut down gracefully and should also be robust against sudden death in the case of a failure in the underlying hardware.

    How does Lagom support disposability?

    In a Lagom application the startup time is minimal, as the executables are already available, and using Marathon or Kubernetes we can easily start a Lagom application at variable scale; startup of a Lagom application is pretty quick. Similarly, Lagom supports graceful shutdowns. To handle graceful shutdowns in a Lagom application with custom Akka actors, we have to take care of processing the existing messages in the actor mailboxes before the process exits.
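
    A minimal sketch of such a shutdown hook using Akka's CoordinatedShutdown (the task name and cleanup logic are illustrative):

    import akka.Done
    import akka.actor.{ActorSystem, CoordinatedShutdown}
    import scala.concurrent.Future

    def registerDrainTask(system: ActorSystem): Unit =
      CoordinatedShutdown(system).addTask(
        CoordinatedShutdown.PhaseServiceRequestsDone, "drain-in-flight-work"
      ) { () =>
        // Finish queued messages / close connections here before the JVM exits.
        Future.successful(Done)
      }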

  10. Dev/prod parity (“Keep development, staging, and production as similar as possible”)

    As developers, all of us have experienced a substantial gap between development and production environments. The twelve-factor methodology groups these differences into three categories and is designed for continuous deployment, keeping the gap between development and production small.

    1. The time gap: The module/project a developer is currently working on might take days, weeks, or even months to go into production. According to the twelve-factor app, this gap should be as small as possible: a developer may write code and deploy it to production within a few hours or even minutes.

    2. The personnel gap: Usually the developers write code and ops engineers deploy it. To make the personnel gap small, the developers who write the code should be closely involved in deploying and monitoring it.

    3. The tool gap: Developers might be using tools or a stack different from what is used in production. To make the tool gap small, keep development and production as similar as possible.

    How does Lagom provide dev/prod parity?

    This specific rule applies more to the deployment side, and with a Lagom application we can easily set up CI/CD to keep the various environments updated.
    Once set up, Lagom applications are easy to deploy with limited effort, so a developer can deploy the Lagom application whenever required. The CI/CD setup resolves the time-gap issue for the applications.

    Lagom provides the rp tool for easy deployment; it can be set up easily to generate Docker images that can be run from Marathon and Kubernetes.

    As developers, we should make sure that the development and production environments are as similar as possible in terms of tools, stack, resources, and load. This rule applies to the staging and testing environments as well.

  11. Logs (“Treat logs as event streams”)

    Logs play a very important role in monitoring and debugging an application. A twelve-factor app never concerns itself with routing or storage of its output stream and should not attempt to write to or manage log files. Instead, each running process writes its event stream to stdout, and developers can view this stream in the foreground of their terminal to analyze the app's behavior. In staging and production, each process's stream is captured by the execution environment and redirected towards persistent storage for long-term visualization.

    How does Lagom provide logs as event streams?

    Lagom applications do not manage logs by themselves; instead, we can use logging frameworks like Logback or SLF4J for that. Individual Lagom nodes can publish their logs to stdout or an external file, and using tools like Logstash and Filebeat we can aggregate the logs and push them to an ELK cluster for analysis and alerting.
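
    A minimal logback.xml sketch following the twelve-factor recommendation: write everything to stdout and leave routing to the execution environment.

    <!-- logback.xml sketch: the app only writes its event stream to STDOUT;
         capturing and shipping it (e.g. to an ELK cluster) is the environment's job. -->
    <configuration>
      <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
          <pattern>%date %level %logger{36} - %msg%n</pattern>
        </encoder>
      </appender>
      <root level="INFO">
        <appender-ref ref="STDOUT" />
      </root>
    </configuration>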

  12. Admin processes (“Run admin/management tasks as one-off processes”)

    This part is specific to maintenance work. Developers often wish to perform administrative or maintenance tasks for the app, such as running database migrations, opening a console, or running one-time scripts. The twelve-factor app focuses on running such admin/management tasks as one-off processes. If an admin process needs to run periodically, it's good to automate it so that it doesn't need to be executed manually on every server and stays identical everywhere. This is also why the methodology favors languages that ship with a REPL.

    How does Lagom provide admin processes?

    Once the Lagom application is ready to deploy, you can use any language to run such admin/maintenance processes on the server. The scripts can be part of the codebase itself and managed inside the single repo.
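
    For illustration, a hypothetical one-off task written in Scala that reuses the services' own configuration (the config key matches the earlier application.conf sketch and is made up):

    import com.typesafe.config.ConfigFactory

    object CleanupExpiredSessions extends App {
      // Loads the same application.conf (plus env overrides) the services use.
      private val config = ConfigFactory.load()
      private val dbUrl  = config.getString("user-service.db-url")
      println(s"Running one-off session cleanup against $dbUrl")
      // ... connect and run the maintenance query here ...
    }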

    That's all for this blog; I hope it was helpful in understanding the twelve-factor app approach in terms of Lagom application development and deployment.

References:
Twelve-factor app principles & Lagom Framework

This article was first published on the Knoldus blog.

