Use Case for Message-driven Reactive Systems

Reading Time: 5 minutes

Reactive programming is an asynchronous programming paradigm concerned with data streams and the propagation of change.

Reactive architecture applies reactive programming at the level of software architecture. Systems built this way, known as reactive systems, aim to be responsive, resilient, elastic, and message-driven.

In this blog, we will discuss message-driven architecture with an example: washing dishes.

Let’s assume you have a stack of dishes that need to be cleaned. You use a pipelined process of transformations: rinsing each dish, scrubbing it in soapy water, rinsing it again to remove the soap, and putting the dish into the dishwasher. This process is purely synchronous (all being handled by one processor/person) and sequential (it cannot be reordered), as you can only do one task at a time.
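The synchronous, sequential pipeline can be sketched in code. Python is used here purely as an illustration; the dish names and transformation functions are made up for the example:

```python
def rinse(dish):        # pre-rinse: remove loose food
    return f"rinsed({dish})"

def scrub(dish):        # scrub in soapy water
    return f"scrubbed({dish})"

def rinse_again(dish):  # rinse off the soap
    return f"re-rinsed({dish})"

dishes = ["plate", "bowl", "cup"]
dishwasher = []

# One worker, one dish at a time, every step in order.
for dish in dishes:
    dishwasher.append(rinse_again(scrub(rinse(dish))))

print(dishwasher)
```

Nothing can overlap here: the second dish is not touched until the first has gone through every stage.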

Or you could loosen the sequential guarantees and batch the dishes through each transformation in the pipeline: rinse all of the dishes first, scrub all of them next, rinse them all again, and then place them into the dishwasher as a group. In doing so, you may have increased performance marginally by increasing the locality of the data (each dish) to the place of execution (the scrubbing sponge, the dishwasher, etc.), but performance is still bound by the fact that you are a single processor doing all of the work.
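The batched variant applies one transformation to the whole stack before moving to the next stage. A minimal sketch, reusing the same made-up transformations:

```python
# Same pipeline, but stage by stage over the whole batch:
# rinse everything, then scrub everything, and so on. Still one worker.
dishes = ["plate", "bowl", "cup"]

rinsed = [f"rinsed({d})" for d in dishes]
scrubbed = [f"scrubbed({d})" for d in rinsed]
rerinsed = [f"re-rinsed({d})" for d in scrubbed]
dishwasher = list(rerinsed)   # loaded as a group at the end

print(dishwasher)
```

The end state is the same as the sequential version; only the order of the intermediate steps has changed.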

Imagine you have a good friend who also loves dishwashing. Knowing how much they enjoy washing dishes, you invite them over to your house to help out one evening. You’ve now created a thread pool – multiple threads of execution which may or may not contribute to the work. When your friend arrives, you show them the stack of dishes and ask them to start washing. Your friend goes to the sink and starts working dishes through the pipelined process. You have now spawned asynchronous work. If you stand behind them waiting for them to finish and do nothing else, you are a blocked thread.
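Spawning the work and then blocking on it looks roughly like this sketch (the `wash_stack` helper and the timings are invented for the example):

```python
import threading
import time

def wash_stack(dishes, dishwasher):
    for dish in dishes:
        time.sleep(0.01)  # stand-in for rinse/scrub/rinse work
        dishwasher.append(f"clean({dish})")

dishwasher = []
friend = threading.Thread(target=wash_stack,
                          args=(["plate", "bowl", "cup"], dishwasher))
friend.start()   # asynchronous work has been spawned

# Standing behind your friend and waiting: join() blocks this
# thread until the friend's thread finishes. We do no useful
# work in the meantime.
friend.join()
print(dishwasher)
```

`start()` is the moment the work becomes asynchronous; `join()` is the moment you become a blocked thread.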

Instead of standing around, you could go have a lemonade, and now you are non-blocking but also not productive to the task at hand. And your friend is likely becoming quite irritated with you. Moreover, unless they are a considerably more efficient washer of dishes than you are (a faster processor, for example), the work is not getting done much faster.

At this point, you join your friend in performing the work. Your friend is responsible for grabbing a dish from the stack, rinsing it and scrubbing it. You take the dish from them at that point, rinse it again and put it into the dishwasher. You are now non-blocking and productive to the task, but by staging the work this way, we have shared resources that affect our ability to do our work optimally. As the thread of execution responsible for handling work delegated by your friend, you have to wait for each dish, which could take an indeterminate amount of time to be scrubbed depending on how dirty it is (the essence of CPU-intensive work). This is the essence of concurrency, typically over shared mutable state.
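This staged hand-off is a classic producer–consumer arrangement: one thread scrubs, the other waits on the shared point between them for each dish. A sketch, with the random sleep standing in for dishes of unpredictable dirtiness:

```python
import threading
import queue
import random
import time

handoff = queue.Queue()   # the shared point between the two workers
dishwasher = []
DONE = object()           # sentinel: no more dishes are coming

def friend(dishes):
    for dish in dishes:
        time.sleep(random.uniform(0, 0.01))  # scrubbing time varies per dish
        handoff.put(f"scrubbed({dish})")
    handoff.put(DONE)

def you():
    while True:
        dish = handoff.get()  # you wait here until a scrubbed dish arrives
        if dish is DONE:
            break
        dishwasher.append(f"re-rinsed({dish})")

t1 = threading.Thread(target=friend, args=(["plate", "bowl", "cup"],))
t2 = threading.Thread(target=you)
t1.start(); t2.start()
t1.join(); t2.join()
print(dishwasher)
```

The `handoff.get()` call is where the concurrency bites: your throughput is tied to however long each individual scrub takes.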

The way to ameliorate concurrency and contention is to increase footprint. If you had another sink, you could take a stack of dishes and do your work independently of your friend, and that would be more efficient. You are forking the work by grabbing a stack of dishes, and you will have a join when we need to reassemble the dishes into the dishwashing machine. However, like parallel collections, the fork and join phases are still concurrent – we must divide the data, the dishes, between ourselves to be processed, and we must join the data (again, the dishes) in the transformed collection (the dishwasher). It could still take longer than doing it sequentially if the time to fork and join the work is too high.
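The fork/join shape can be sketched with a small thread pool: divide the dishes, wash each stack independently, then join the results into the one dishwasher. The stack split and the `wash` helper are made up for the example:

```python
from concurrent.futures import ThreadPoolExecutor

def wash(stack):
    # Each worker has their own "sink": no shared state while washing.
    return [f"clean({dish})" for dish in stack]

dishes = ["plate", "bowl", "cup", "pan"]
mid = len(dishes) // 2
stacks = [dishes[:mid], dishes[mid:]]    # fork: divide the dishes

with ThreadPoolExecutor(max_workers=2) as pool:
    results = list(pool.map(wash, stacks))

dishwasher = results[0] + results[1]     # join: reassemble into one dishwasher
print(dishwasher)
```

The fork (splitting the list) and the join (concatenating into one dishwasher) are the remaining concurrent points, which is exactly where the overhead lives.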

We need our work to be as parallel as possible in order to maximize our processing efficiency. To do this, we need to increase our footprint even more. It would be ideal if we could somehow broker the dishes to both your friend and you without trying to figure out who is doing what, or for each of us to steal work from a common queue. And if you have two dishwashers, you don’t have contention on a single join point. This increase in footprint does mean additional cost, but the decrease in concurrency and increase in parallelism means that your scalability becomes linear as you add more processors/sinks/dishwashers. The value of this increased scalability and efficiency may well justify the increased cost of commodity hardware to the business.
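Pulling work from a common queue, with each worker loading their own dishwasher, might look like this sketch (two workers, two dishwashers, one shared queue of dirty dishes):

```python
import threading
import queue

work = queue.Queue()   # the common stack of dirty dishes
for dish in ["plate", "bowl", "cup", "pan", "glass", "fork"]:
    work.put(dish)

def worker(dishwasher):
    # Each worker "steals" the next dish from the shared queue and
    # loads it into their *own* dishwasher: no contended join point.
    while True:
        try:
            dish = work.get_nowait()
        except queue.Empty:
            return
        dishwasher.append(f"clean({dish})")

dw1, dw2 = [], []
t1 = threading.Thread(target=worker, args=(dw1,))
t2 = threading.Thread(target=worker, args=(dw2,))
t1.start(); t2.start()
t1.join(); t2.join()
print(len(dw1) + len(dw2))  # all six dishes end up clean, split between the two
```

Neither worker cares which dishes they get; the queue brokers the work, and adding a third worker with a third dishwasher would scale the same way.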


In the end, the ultimate goal is to create asynchronous, non-blocking and parallelized execution with minimal points of concurrency. By brokering the work, you are treating each dish (or batch of dishes) as a message, where you do not care which pipeline performs the transformation (washing of dishes).

In this way, message-driven architectures are the essence of Reactive applications. I hope you liked the blog and now understand the case for message-driven architecture. Stay tuned!! 🙂


Must Read -> Message-Driven Architecture Lightbend Blog

Written by 

Charmy is a Software Consultant with more than 1.5 years of experience. She is familiar with object-oriented programming paradigms and with technologies such as Scala, Lagom, Java, Apache Solr, Apache Spark, Apache Kafka, and Apigee. She is always eager to learn new concepts in order to expand her horizons. Her hobbies include playing the guitar and sketching.