Dispatchers are a core part of the Akka framework: they are directly responsible for the performance, throughput and scalability of an actor system.
Akka supports dispatchers for both event-driven lightweight Actors and thread-based Actors. For thread-based Actors, each dispatcher is bound to a dedicated OS thread.
The default dispatcher is a single event-based dispatcher shared by all Actors that are created without an explicit dispatcher.
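The snippet that originally illustrated this is missing; as a sketch against the Akka 1.x API (where, to my recollection, the shared default is exposed on the Dispatchers object), it would look roughly like:

```scala
import akka.dispatch.Dispatchers

// The single shared event-based dispatcher that every Actor uses
// unless it is explicitly given another one (Akka 1.x API).
val defaultDispatcher = Dispatchers.globalExecutorBasedEventDrivenDispatcher
```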
In many cases it becomes necessary to group Actors together on a dedicated dispatcher; we can then override the defaults and define our own dispatcher.
Setting the Dispatcher
Normally we set the dispatcher in the Actor itself.
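The original snippet is gone; a minimal sketch in the Akka 1.x style (the dispatcher name "my-dispatcher" is an assumption) might be:

```scala
import akka.actor.Actor
import akka.dispatch.Dispatchers

class MyActor extends Actor {
  // Assign a dedicated dispatcher from inside the Actor itself
  self.dispatcher = Dispatchers.newExecutorBasedEventDrivenDispatcher("my-dispatcher").build

  def receive = {
    case msg => println("got " + msg)
  }
}
```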
Alternatively, we can set it on the ActorRef before the Actor is started.
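Again as a hedged sketch of the Akka 1.x API (MyActor is assumed to be defined as above):

```scala
import akka.actor.Actor
import akka.dispatch.Dispatchers

val ref = Actor.actorOf[MyActor]
// Set the dispatcher on the ActorRef before starting the Actor
ref.dispatcher = Dispatchers.newExecutorBasedEventDrivenDispatcher("my-dispatcher").build
ref.start()
```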
There are different kinds of dispatchers:
- Thread-based
- Executor-based event-driven
- Priority event-based
- Executor-based event-driven work-stealing
The thread-based dispatcher binds a dedicated OS thread to each Actor. Messages are posted to a LinkedBlockingQueue, which feeds them to the Actor one by one. It has the worst performance and scalability of the dispatchers, and it cannot be shared among Actors, although Actors never block waiting for a thread in this case.
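Since this dispatcher is per-Actor, it is created with a reference to the Actor it binds. A sketch against the Akka 1.x API:

```scala
import akka.actor.Actor
import akka.dispatch.Dispatchers

class ThreadBoundActor extends Actor {
  // A thread-based dispatcher cannot be shared: it is created
  // per Actor and pins it to a dedicated OS thread.
  self.dispatcher = Dispatchers.newThreadBasedDispatcher(self)

  def receive = {
    case msg => println("processed " + msg)
  }
}
```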
The ExecutorBasedEventDrivenDispatcher binds a set of Actors to a thread pool backed by a BlockingQueue. This dispatcher must be shared among Actors, and it is highly configurable: we can specify things like the type of queue, the maximum number of items, and the rejection policy.
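The configuration uses a builder; the method names below follow my recollection of the Akka 1.x docs, and the specific pool sizes and capacity are arbitrary illustration values:

```scala
import akka.dispatch.Dispatchers
import java.util.concurrent.ThreadPoolExecutor.CallerRunsPolicy

// Builder-style configuration: queue type and capacity, pool sizes,
// keep-alive time, and the rejection policy applied when the queue is full.
val dispatcher = Dispatchers.newExecutorBasedEventDrivenDispatcher("pooled-dispatcher")
  .withNewThreadPoolWithLinkedBlockingQueueWithCapacity(100)
  .setCorePoolSize(16)
  .setMaxPoolSize(128)
  .setKeepAliveTimeInMillis(60000)
  .setRejectionPolicy(new CallerRunsPolicy)
  .build
```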
The priority event-based dispatcher is meant for handling messages that have priorities assigned to them. This is done with the PriorityExecutorBasedEventDrivenDispatcher, which takes a PriorityGenerator as a constructor argument.
Let’s look at an example where a PriorityExecutorBasedEventDrivenDispatcher handles a group of messages fired at an actor.
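The original listing is missing; a sketch in the style of the Akka 1.x documentation (the symbol messages and priority values are illustrative) could look like this, with lower generator values meaning higher priority:

```scala
import akka.actor.Actor
import akka.dispatch.{PriorityGenerator, PriorityExecutorBasedEventDrivenDispatcher}

// Map each message to a priority; lower value = served earlier
val priorityGenerator = PriorityGenerator {
  case 'highpriority => 0
  case 'lowpriority  => 100
  case _             => 50
}

val priorityDispatcher =
  new PriorityExecutorBasedEventDrivenDispatcher("priority-dispatcher", priorityGenerator)

class PriorityActor extends Actor {
  self.dispatcher = priorityDispatcher
  def receive = {
    case msg => println("received: " + msg)
  }
}

val ref = Actor.actorOf[PriorityActor].start()
ref ! 'lowpriority
ref ! 'lowpriority
ref ! 'highpriority   // jumps ahead of the queued low-priority messages
```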
If we execute the code, the high-priority messages are served first even though the low-priority messages were fired before them. This is visible in the output when the example is run with sbt on the command line.
The ExecutorBasedEventDrivenWorkStealingDispatcher is one of my favorite dispatchers. It redistributes work to actors that use the same dispatcher and that do not currently have any messages in their mailbox. It is a great way to increase the performance of the system.
The usual way to use it is to create an Actor companion object to hold the dispatcher and then set it explicitly in the Actor.
Let’s look at a code example that uses this dispatcher.
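The original listing is missing; a hedged sketch of the companion-object pattern just described, against the Akka 1.x API, would be:

```scala
import akka.actor.Actor
import akka.dispatch.Dispatchers

object WorkStealingActor {
  // One dispatcher instance held in the companion object and
  // shared by every instance of the Actor below
  val dispatcher =
    Dispatchers.newExecutorBasedEventDrivenWorkStealingDispatcher("pooled-dispatcher").build
}

class WorkStealingActor extends Actor {
  // Idle instances can steal messages from busy siblings
  self.dispatcher = WorkStealingActor.dispatcher
  def receive = {
    case msg => println(self.uuid + " processed " + msg)
  }
}
```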
Now let’s have a look at the code listing for ExecutorDispatcherApplicationExample. Its constituents are: a Scala object, ExecutorDispatcherApplicationExample; two pairs of Scala class and companion object (the object is there to share the dispatcher); and finally three case classes, which correspond to the messages passed between Actors. Functionally, ProcessorUsingExecutorBasedDispatcher is the master actor, as it spawns workers, and ProcessorUsingExecutorBasedDispatcherWorker is a worker.
The application ExecutorDispatcherApplicationExample starts the ProcessorUsingExecutorBasedDispatcher Actor and passes a SimpleMessage to it.
The ProcessorUsingExecutorBasedDispatcher Actor uses the ExecutorBasedDispatcher, and it does so via a companion object. Notice that the dispatcher held by the object is assigned to self.
The receive block in ProcessorUsingExecutorBasedDispatcher, “case SimpleMessage(msg) =>”, starts the workers and a Router over them. Finally, it fires “parallelCounter” messages at the router.
The worker Actor, ProcessorUsingExecutorBasedDispatcherWorker, also uses the ExecutorBasedDispatcher. Its receive block, “case SimpleRequest(msg) =>”, replies to ProcessorUsingExecutorBasedDispatcher after sleeping for a while.
The ProcessorUsingExecutorBasedDispatcher Actor receives the responses and continues to do so until all responses have arrived from the worker Actors. Finally it calculates the total time elapsed for the parallel execution.
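The full listing is missing, so the walkthrough above can only be read as prose. As a rough reconstruction of the described flow against the Akka 1.x API (the worker count, sleep time, and "parallelCounter" value are assumptions; the routing helper follows the Akka 1.x akka.routing module):

```scala
import akka.actor.Actor
import akka.dispatch.Dispatchers
import akka.routing.{Routing, CyclicIterator}

case class SimpleMessage(msg: String)
case class SimpleRequest(msg: String)
case class SimpleReply(msg: String)

object ProcessorUsingExecutorBasedDispatcher {
  // Shared dispatcher held in the companion object
  val dispatcher = Dispatchers.newExecutorBasedEventDrivenDispatcher("shared-dispatcher").build
}

class ProcessorUsingExecutorBasedDispatcherWorker extends Actor {
  self.dispatcher = ProcessorUsingExecutorBasedDispatcher.dispatcher
  def receive = {
    case SimpleRequest(msg) =>
      Thread.sleep(100)             // simulate some work
      self.reply(SimpleReply(msg))
  }
}

class ProcessorUsingExecutorBasedDispatcher extends Actor {
  self.dispatcher = ProcessorUsingExecutorBasedDispatcher.dispatcher
  val parallelCounter = 10
  var pending = 0
  var start = 0L

  def receive = {
    case SimpleMessage(msg) =>
      start = System.currentTimeMillis
      pending = parallelCounter
      // Spawn workers and put a load-balancing router in front of them
      val workers = (1 to 5).map(_ =>
        Actor.actorOf[ProcessorUsingExecutorBasedDispatcherWorker].start()).toList
      val router = Routing.loadBalancerActor(new CyclicIterator(workers))
      (1 to parallelCounter).foreach(i => router ! SimpleRequest(msg + i))
    case SimpleReply(_) =>
      pending -= 1
      if (pending == 0)             // all workers have replied
        println("elapsed: " + (System.currentTimeMillis - start) + " ms")
  }
}
```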
We can also code this using the work-stealing dispatcher.
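The complete work-stealing listing is missing; under the assumption that the rest of the master/worker code stays the same, the only change would be the shared dispatcher in the companion object, roughly:

```scala
import akka.dispatch.Dispatchers

// Swap the executor-based dispatcher for a work-stealing one;
// the master and worker Actors are otherwise unchanged.
object ProcessorUsingWorkStealingDispatcher {
  val dispatcher = Dispatchers
    .newExecutorBasedEventDrivenWorkStealingDispatcher("work-stealing-dispatcher")
    .build
}
```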
That is all about dispatchers: they are a powerful way of managing the performance, throughput and scalability of a system. All the code examples are in my GitHub repository, which is an sbt project. The command “sbt run” will ask which program to run; just enter the number and see how it executes.