Suitable Tech stack for Reactive Fintech


The FinTech industry is growing rapidly and becoming ever more vital, and it has become an appealing alternative to traditional banking.

With the rise of the FinTech industry there is an ongoing debate about which technologies fit best. The technologies that win out are those that not only suit the business needs but are also easy to maintain, easy to work with, and straightforward to scale.

Below is a list of technologies that are a good fit for building a FinTech application.


Scala

Scala is a strongly, statically typed language that combines object-oriented and functional programming, and code written in it is small and precise. It was first released in 2004 through the work of Prof. Martin Odersky and was one of the earliest languages besides Java to run on the JVM. Although it runs on top of the JVM, its syntax is more flexible, concise, and elegant than Java's, and Scala code tends to be less buggy and to require less maintenance than equivalent code in Java or other JVM-based languages.

The features of the Scala language include:

  • Scala is statically typed (yet feels dynamic thanks to type inference) and supports multiple paradigms: object-oriented, functional, and imperative.
  • Scala offers many functional programming features, such as immutability, pattern matching, type inference, case classes for defining domain entities, higher-order functions, currying, and lazy evaluation.
  • With its immutable collections and concurrency support, it is heavily used in data-intensive applications.
  • Its syntax is elegant, concise, and readable, and it ships with a rich set of collections (both immutable and mutable) backed by rich APIs and methods.
  • Code written in Scala is typically 40-50 percent shorter (in some cases even more) than the equivalent Java.
  • Scala interoperates extremely well with the thousands of Java libraries that have been developed over the years.
  • It is used for server-side applications (including microservices) and big data applications, and can also run in the browser via Scala.js.
  • The latest release, Scala 3, adds more power to the language. It is simpler, more stable, and more predictable; it brings many new features and also reduces compile times, saving resources and, with them, money.
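Several of the features above, such as case classes, pattern matching, and higher-order functions over immutable collections, can be shown in a few lines. The small payment domain below is a hypothetical illustration, not code from any particular product:

```scala
// Hypothetical payment domain illustrating case classes, pattern
// matching, and higher-order functions from the feature list above.

// Case class: an immutable domain entity with equality and copy for free
final case class Transaction(id: String, amountCents: Long, currency: String)

// Pattern matching with a guard classifies a transaction
def classify(tx: Transaction): String = tx match {
  case Transaction(_, amount, "USD") if amount > 1000000 => "large-usd"
  case Transaction(_, _, "USD")                          => "usd"
  case _                                                 => "other"
}

// Higher-order functions: filter/map over an immutable collection
def totalUsdCents(txs: List[Transaction]): Long =
  txs.filter(_.currency == "USD").map(_.amountCents).sum

val txs = List(
  Transaction("t1", 2000000, "USD"),
  Transaction("t2", 500, "USD"),
  Transaction("t3", 999, "EUR")
)
```

The entire domain model fits in a handful of lines, with immutability and exhaustiveness checking coming from the language rather than from discipline.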

With all of these features, demand for Scala is growing rapidly in the FinTech industry. Many companies are migrating their source code to Scala and adopting it as their default programming language for building new products.

Apache Kafka

Apache Kafka is an open-source platform used for high-performance data pipelines, streaming analytics, data integration, and mission-critical applications. Kafka was initially developed at LinkedIn and open-sourced in 2011.

Kafka is based on the publish-subscribe pattern: a system can have multiple publishers (producers) and multiple subscribers (consumers). Producers publish messages to a topic, and consumers subscribe to and consume messages from that topic. Kafka differs from traditional message queues in that consuming a message does not remove it from the log, so multiple independent applications can read the same data stream, each progressing at its own rate.

The features of Apache Kafka include:

  • Kafka is capable of running on a distributed clustered environment that allows data to be distributed across multiple servers, making it scalable beyond what would fit on a single server.
  • Kafka delivers messages of any type and size at network-limited throughput, with latencies as low as 2 ms.
  • Kafka is capable of storing streams of data safely in a distributed, durable, and fault-tolerant cluster.
  • Kafka supports mission-critical use cases with the guaranteed ordering of messages, no message loss, and efficient exactly-once processing.
  • Kafka’s out-of-the-box Connect interface integrates with hundreds of event sources and event sinks including Postgres, JMS, Elasticsearch, AWS S3, and more.
  • Kafka is a natural fit for microservice architectures, providing the asynchronous communication between services that modern microservice designs require.
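The idea that many consumers read the same log independently, each keeping its own offset, can be sketched in plain Scala. This is a toy model of the concept only, not the real Kafka client API, and every name below is made up:

```scala
// Toy model of Kafka's log plus per-consumer offsets: the topic is an
// append-only log, and each consumer tracks its own read position, so
// consumers progress independently at their own rate.
// (Illustrative sketch only; not the real Kafka client API.)
import scala.collection.mutable.ArrayBuffer

class TopicLog[A] {
  private val log = ArrayBuffer.empty[A]

  def publish(msg: A): Unit = log += msg

  // Each call creates an independent consumer with its own offset
  def newConsumer(): () => Option[A] = {
    var offset = 0
    () =>
      if (offset < log.length) { val m = log(offset); offset += 1; Some(m) }
      else None
  }
}

val topic = new TopicLog[String]
topic.publish("payment-created")
topic.publish("payment-settled")

val fast = topic.newConsumer() // two consumers, two independent offsets
val slow = topic.newConsumer()
```

Because reading never removes a message, a slow analytics consumer and a fast fraud-detection consumer can share the same stream without interfering with each other.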

Kafka is used by more than 80 percent of Fortune 100 companies, around 70 percent of top banking institutions, and more than 80 percent of the telecom industry, which makes it a de facto standard for message queues and data streaming pipelines.


Akka

Akka is an open-source toolkit for building highly concurrent, distributed, and fault-tolerant applications on the JVM.

Akka brings the Actor Model to the JVM. An actor in Akka is simply an object that receives messages and takes the appropriate action. Actors communicate with each other only via messages, and each actor can maintain its own state reliably without explicit synchronization or worries about race conditions.
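The core idea, private state that changes only by processing one message at a time, can be sketched without any Akka dependency. The following is a simplified, single-threaded toy model; real Akka actors add scheduling, supervision, and distribution on top of it:

```scala
// Toy single-threaded model of an actor: private state, a mailbox, and
// behavior that handles one message at a time. No locks are needed
// because only the message-processing loop ever touches the state.
// (Conceptual sketch only; not the Akka API.)
import scala.collection.mutable

sealed trait Command
final case class Deposit(amountCents: Long)  extends Command
final case class Withdraw(amountCents: Long) extends Command

class AccountActor {
  private var balanceCents: Long = 0 // state private to the actor
  private val mailbox = mutable.Queue.empty[Command]

  // "Sending" a message just enqueues it; nothing runs concurrently
  def tell(cmd: Command): Unit = mailbox.enqueue(cmd)

  // Process all queued messages sequentially and return the balance
  def drain(): Long = {
    while (mailbox.nonEmpty) mailbox.dequeue() match {
      case Deposit(a)                       => balanceCents += a
      case Withdraw(a) if a <= balanceCents => balanceCents -= a
      case Withdraw(_)                      => () // reject overdraft
    }
    balanceCents
  }
}

val account = new AccountActor
account.tell(Deposit(1000))
account.tell(Withdraw(300))
account.tell(Withdraw(5000)) // rejected: would overdraw
val balance = account.drain()
```

In Akka proper, `tell` is asynchronous and the runtime guarantees the one-message-at-a-time invariant across threads and machines, which is what makes the no-locks programming model safe at scale.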

Akka features include:

  • Akka is capable of handling up to 50 million messages per second on a single machine and approximately 2.5 million actors in just 1 GB of heap space.
  • Akka actors can help build systems that scale up, using the resources of a server more efficiently, and out, using multiple servers.
  • Akka is built on the principles of the Reactive Manifesto, allowing you to write systems that self-heal and stay responsive in the face of failures. Its supervision strategy provides a straightforward mechanism for fault-tolerant behavior.
  • Akka can easily be distributed across multiple clusters and data centers, providing eventual consistency without code modification.
  • Akka actors are fully asynchronous and non-blocking by their nature and provide a great platform for building microservices.
  • Akka is written in Scala but exposes its full set of APIs and interfaces in both Scala and Java.

Akka is gaining a lot of traction in the FinTech industry these days because it is lightweight, flexible, and highly fault-tolerant.
Several other libraries are built on top of the core Akka module, including Akka Persistence, Akka Streams, Akka Cluster, and Akka HTTP.

Play Framework

Play Framework is an open-source web application framework that follows the Model View Controller architecture pattern. It is used to build highly scalable, lightning-fast applications with an ease unparalleled on the JVM.
Play is a stateless, asynchronous, and non-blocking framework that uses an underlying fork-join thread pool to do work-stealing for network operations, and can leverage Akka for user-level operations.

Features of Play:

  • Play framework is built on Akka, providing predictable and minimal resource consumption (CPU, memory, threads) for highly scalable and distributed applications.
  • Play is completely stateless which enables horizontal scaling, ideal for serving many incoming requests without having to share resources (such as a session) between them.
  • The Play framework handles incoming requests asynchronously out of the box, making it faster and more responsive than traditional Java-based web frameworks.
  • Play just works out of the box but is also highly configurable, so it does not force any specific pattern on engineers.
  • Play supports dynamic recompilation of source code, so there is no need to restart the server; simply wait a few seconds to see the effect. This makes Play developer-friendly and saves time.
  • Play uses CPUs efficiently; in fact, many teams find that Play works so well that they need fewer servers to run the same application.
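The asynchronous style Play encourages can be sketched with plain `scala.concurrent` Futures. In a real Play controller this logic would live inside `Action.async` and return a `Future[Result]`; the handler and lookup below are hypothetical and use only the standard library:

```scala
// Sketch of Play's asynchronous style using stdlib Futures only: the
// handler returns a Future immediately instead of blocking the request
// thread. In real Play this would be Action.async { ... } returning a
// Future[Result]; the names here are hypothetical.
import scala.concurrent.{Await, ExecutionContext, Future}
import scala.concurrent.duration._
import ExecutionContext.Implicits.global

// Pretend lookup against a backing store (hypothetical)
def findBalance(accountId: String): Future[Long] =
  Future { 4200L } // stands in for a non-blocking DB or service call

// "Controller": composes async work without ever blocking the caller
def balanceResponse(accountId: String): Future[String] =
  findBalance(accountId).map(cents => s"""{"balance":$cents}""")

// Only a test harness would block like this; Play itself never does
val body = Await.result(balanceResponse("acct-1"), 5.seconds)
```

Because the request thread is released as soon as the `Future` is returned, a small thread pool can keep serving new requests while slow I/O completes elsewhere.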


Lagom

Lagom is an opinionated, open-source framework for building systems of Reactive microservices. Lagom builds on Akka and Play, proven technologies that are in production in some of the most demanding applications today. Lagom's tools and APIs simplify the development and deployment of a system of microservices.

Lagom's integrated development environment lets you focus on solving business problems instead of wiring services together. A single command builds the project, starts the supporting components and your microservices, and brings up the Lagom infrastructure.

Lagom has several advantages:

  • Lagom offers a guided, Reactive approach to building RPC-style, backend microservices and implementing Event Sourcing and persistence strategies that are designed to be deployed at scale.
  • Lagom framework uses the Play Framework, an Akka message-driven runtime, Kafka for decoupling services, Event Sourcing, and CQRS patterns, and support for monitoring and scaling microservices in the container environment.
  • Lagom offers an especially seamless experience for communication between microservices. Service location, communication protocols, and other issues are handled by Lagom transparently, maximizing convenience and productivity.
  • Lagom provides a straightforward way to scale out using scale factors at deployment time; a Lagom application can be scaled simply by changing its configuration.
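The Event Sourcing pattern Lagom builds on can be sketched in plain Scala: state is never stored directly but is rebuilt by folding the event log. The account domain below is a hypothetical illustration of the pattern, not Lagom's persistence API:

```scala
// Minimal Event Sourcing sketch: current state is never stored
// directly; it is reconstructed by folding the event journal.
// Lagom's persistence API is built around this same idea.
// (All names below are hypothetical.)
sealed trait AccountEvent
final case class Deposited(amountCents: Long) extends AccountEvent
final case class Withdrawn(amountCents: Long) extends AccountEvent

final case class AccountState(balanceCents: Long) {
  // Applying an event yields the next state; states are immutable
  def applyEvent(ev: AccountEvent): AccountState = ev match {
    case Deposited(a) => copy(balanceCents = balanceCents + a)
    case Withdrawn(a) => copy(balanceCents = balanceCents - a)
  }
}

// Replaying the journal reconstructs the state deterministically
def replay(events: Seq[AccountEvent]): AccountState =
  events.foldLeft(AccountState(0))(_.applyEvent(_))

val journal    = Seq(Deposited(1000), Withdrawn(250), Deposited(50))
val finalState = replay(journal)
```

Because the journal is append-only, the same events can also feed read-side projections, which is how the CQRS half of Lagom's persistence story works.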


Apache Spark

Apache Spark is a big data processing engine that can quickly perform tasks on very large data sets and can run in distributed clustered environments. Spark is based on the MapReduce model, but it differs from classic MapReduce frameworks such as Hadoop in that computations are kept in memory and evaluated lazily, making it up to two to three times faster than the traditional MapReduce approach while using fewer resources. Apache Spark set a world record by sorting 100 TB of data in just 23 minutes, where the previous record stood at 72 minutes.

Some of the features of Spark are:

  • Spark's data model (resilient distributed datasets) provides resiliency in distributed clustered environments, making applications fault-tolerant.
  • Spark performs much faster by caching data in memory across multiple parallel operations, reducing disk I/O to a minimum.
  • Spark can load data from multiple sources, such as CSV files, JSON files, relational databases, Parquet files, and Hive tables.
  • Spark comes as a unified stack capable of batch processing, stream processing, graph processing, and machine learning.
  • The graph processing module (Spark GraphX) supports computation and analytics on graph-structured datasets.
  • The machine learning module (Spark ML) makes building machine learning models on large datasets easy and scalable; for example, it can model market behavior, apply ML algorithms, and produce accurate predictions about the market.
  • Spark SQL mixes SQL queries with Spark programs: it allows querying structured datasets and executing SQL alongside complex analytic algorithms, tightly integrated with the rest of Spark.
  • Spark provides high-level APIs in Java, Python, Scala, and R, letting developers program in the language they are most comfortable with.
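Spark's lazy evaluation, where transformations only describe work and an action triggers it, is similar in spirit to Scala's lazy collection views. The sketch below uses a plain Scala view as a toy stand-in for a distributed dataset; it illustrates the evaluation model only and is not the Spark API:

```scala
// Toy illustration of Spark-style laziness using a Scala view:
// "transformations" (map/filter) only describe work, and nothing runs
// until an "action" (take/sum) forces evaluation. Spark's RDD/Dataset
// transformations are lazy in the same spirit, just distributed.
var evaluations = 0 // counts how many elements have actually been mapped

val data = (1 to 1000).view // like loading a dataset lazily

val pipeline = data
  .map { x => evaluations += 1; x * 2 } // transformation: not yet run
  .filter(_ % 4 == 0)                   // transformation: not yet run

val before = evaluations // still 0: nothing has been evaluated

val result = pipeline.take(5).sum // "action": forces evaluation
```

Only as many source elements are touched as the action needs, which is the same property that lets Spark plan and prune work across a whole cluster before executing anything.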


Kubernetes

Kubernetes is an open-source container orchestration engine for automating the deployment, scaling, and management of containerized applications. Since Docker came into the picture, deployment strategy has changed dramatically, shifting from heavyweight JAR-based deployments to lightweight container-based ones.
In large systems there are hundreds or thousands of services, and each has its own scaling requirements depending on traffic. Managing such a large set of containers calls for a proper orchestration strategy, and this is where Kubernetes comes in, smoothly automating the deployment, scaling, and management of containers.

Features of Kubernetes include:

  • Kubernetes progressively rolls out changes to your application or its configuration, while monitoring application health to ensure it doesn’t kill all your instances at the same time. If something goes wrong, Kubernetes will roll back the change for you.
  • Automatically places containers based on their resource requirements and other constraints, while not sacrificing availability. Mix critical and best-effort workloads in order to drive up utilization and save even more resources.
  • Scale application up and down with a simple command, with a UI, or automatically based on CPU usage.
  • Restarts containers that fail, replaces and reschedules containers when nodes die, kills containers that don’t respond to user-defined health checks, and doesn’t advertise them to clients until they are ready to serve.
  • Deploy and update secrets and application configuration without rebuilding the image and without exposing secrets in the configuration.
  • Kubernetes gives Pods their own IP addresses and a single DNS name for a set of Pods and can load-balance across them. No need to modify the application to use an unfamiliar service discovery mechanism.
  • Automatically mount the desired storage system, whether from local storage, a public cloud provider such as GCP or AWS, or a network storage system such as NFS, iSCSI, Gluster, Ceph, Cinder, or Flocker.
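Several of these features come together in a Deployment manifest. The sketch below is a minimal, hypothetical example (the image name, port, and probe path are made up) showing replicas for scaling, resource requests for scheduling, and a readiness probe that gates traffic:

```yaml
# Minimal Deployment sketch (hypothetical image/port/path) showing
# replicas for scaling, resource requests used when placing containers,
# and a readiness probe so traffic only reaches ready Pods.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: payments-service
spec:
  replicas: 3                  # scale up/down by changing this value
  selector:
    matchLabels:
      app: payments-service
  template:
    metadata:
      labels:
        app: payments-service
    spec:
      containers:
        - name: payments-service
          image: example.com/payments-service:1.0.0   # hypothetical
          ports:
            - containerPort: 9000
          resources:
            requests:          # informs bin-packing onto nodes
              cpu: "250m"
              memory: "256Mi"
          readinessProbe:      # no traffic until this succeeds
            httpGet:
              path: /health
              port: 9000
```

Scaling then becomes a one-line configuration change (or `kubectl scale deployment payments-service --replicas=5`), and Kubernetes rolls changes out progressively while monitoring health.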

Also published on Medium.

Written by 

Harshit Daga is a Sr. Software Consultant with more than 4 years of experience. He is passionate about Scala development and has worked across the Scala ecosystem. He is a quick learner, curious about new technologies, responsible, and a good team player. He has a good understanding of building reactive applications and has worked with various Lightbend technologies such as Scala, Akka, the Play framework, and Lagom.