As a niche consulting and development organization, we end up in a lot of enterprises that would like to modernize and build web-scale products but are not ready to look beyond Spring. It is a strongly debated topic, and there are still reasons for going the Spring way, but those reasons grow fewer as time goes on.
For starters, this is what Eugene had to say:
Spring was great for Java in its day, but you can achieve far more with traits and the “Cake Pattern” instead of IoC/DI (in Scala you do not need the Spring framework). Akka concurrency + parallelism is better than Spring. Even if you have Spring employing Akka, your microservice would perform better with less bloat.
and on the same thread, Vlad mentioned
Now Akka vs Spring. It is 2017; Spring should not exist. It was born out of incompetence, misunderstanding and misery, and belongs to the Java world of the past.
Anyway, let us attempt an objective analysis of the parameters that matter most:
I) Reactive
- Akka: fully resilient, elastic and responsive and message-driven; the model for the Reactive Manifesto
- Spring: as of Spring 5, advertised as “reactive”, but new asynchronous capabilities do not equal reactive; traditional pre-5 Spring is highly synchronous and blocking, the state of the art in building monoliths
II) Modeling
- Akka: think in terms of actors representing things, mapping naturally to domain-driven design
- Akka presents your cloud as a virtual supercomputer, fitting billions of actors, across many nodes
- Spring: think in terms of objects, operating in an MVC world, limited to a single node
III) Application State
- Akka: persistent actors, distributed across a cluster, contain in-memory state and the ability to make real-time decisions without contention; share events and keep state private
- Spring: the state is contained in the database and manipulated as CRUD; contention between Spring nodes over state in a central database can be a big issue
IV) Resilience
- Akka: actors are resilient and have failure strategies, clustered nodes are resilient, actors will be moved to healthy nodes; Akka separates failure logic and makes it a first class citizen
- Spring: try/catch spaghetti
V) Persistence
- Akka: persistent actors uniformly shard across the cluster, enabling singleton, non-replicated behavior, backed by the immutable event log
- Akka: persistence store backed by many plugins
- Spring: limited to CRUD, against your choice of database
VI) Elasticity
- Akka: seamless horizontal and vertical scaling
- Spring: roll your own solution to scale with additional stateless nodes, no ability to be elastic with the current running footprint
VII) Standards
- Akka: modules default to event sourcing, CQRS, domain driven design, and eventual consistency, utilizing a reactive programming model
- Spring: none of the above is a natural fit; they would need to be built into the endpoints. In other words, someone will be in the framework business once you incorporate all of this into Spring
VIII) Integrations
- Akka: reactive systems paradigm using asynchronous, non-blocking service to service communication, fully decoupled
- Akka: Alpakka library for asynchronous integrations
- Spring: real-time service-to-service call-outs; brittle, with chained failures
IX) Learning Curve
- Akka: medium/high, mostly associated with the move to a reactive programming model
- Spring: medium/high for Spring 5, which is a significant change over previous versions
To be fair to Spring, let us give it points where it deserves them.
Developer availability
This is where Spring wins hands down. There are tons of developers doing imperative Java, and if you want to be building legacy code today, then that is a choice you are free to make.
Thanks for sharing your opinion in public forums, where others can share theirs as well. Indeed, it is one of the important topics being discussed during digital transformation.
Unfortunately, I don’t share the same opinion, because I don’t think the comparison is apples to apples. The real comparison I would make is in the Spring vs Lagom space, not directly Spring vs Akka. IMO Akka and actors are becoming more and more fundamental building blocks rather than something you leverage directly. For example, frameworks like Lagom or Play leverage Akka under the hood and provide higher-order APIs for developers to build applications quickly.
Before going too far, let me give a little background about my journey, which taught me a different experience.
I am from a typical enterprise IT development background. Though I officially operate as part of the architecture group, I consider my core trade to be “engineering”.
Around 2011/12 (arguably at the peak of the SOA hype cycle), we went through a selection process to pick a few frameworks for our application development needs.
In a nutshell, we ended up picking:
1. Spring framework ecosystem (mainly core Spring + Spring WS) for web services development.
2. At the same time, we were also in need of a scalable distributed framework to perform complex business logic in a highly parallel way. We ended up picking Akka, even though it was very new at that time, because we really liked its simple architecture. We really liked the actor programming model, which allowed us to break down the business logic into nice smaller components and enabled communication between them via messages. At that point, even Akka clustering was in the alpha/beta stage; only Akka remoting was available if you needed clustering capability, so we ended up leveraging Akka remoting.
Both ended up serving us very well for more than 5 years. We saved arguably millions of dollars in licensing fees by leveraging commodity hardware and open source. In those days, Akka was marketed as a distributed application development framework; nobody in the industry was even close to calling these things microservices or containers. Internally we were debating that if Typesafe released an HTTP gateway for Akka, we could try using Akka-based components for web services as well. At that time Spray was very early and sat outside of Typesafe. (Later Typesafe acquired Spray and it became Akka HTTP.) We also considered Vert.x, which had a similar event-loop-based architecture; remember, it wasn’t described as “non-blocking” at that point, and the contrast was drawn with the “thread per request” model.
Though we didn’t use Akka/Spray for web services development, we were closely watching the Akka space. I even ran a small Google+ community for Akka (https://plus.google.com/communities/102964963992429601710). We were also among the early adopters of the Reactive Manifesto campaign.
Now let me share my opinion on the areas compared in the blog.
Reactive :
It’s true that Akka integrated the reactive programming model into its core before Spring did; Spring didn’t bundle it into the core framework until Spring 5. But in all honesty, the original idea of reactive programming came from three main companies, which together developed the specification (Typesafe, Pivotal and Netflix), and each had its own reference implementation. For Typesafe, it became part of Akka; for Pivotal, part of Project Reactor; for Netflix, part of RxJava. Needless to say, Pivotal developed Project Reactor to test the theory and the response from the Spring community, so that it could be made part of the Spring framework later if necessary. So from the very start Spring had a handle on the “reactive” spectrum.
Your assessment that “Spring 5, being advertised as ‘reactive’, but new asynchronous capabilities do not equal reactive” really surprised me. As far as I know, any framework that adheres to the Reactive Streams specification should qualify to be called “fully reactive”. Technically you can even build a fully reactive application using standard Java APIs, without any of these frameworks, as the JDK itself ships the Reactive Streams interfaces (java.util.concurrent.Flow) along with a reference publisher implementation. Unfortunately, it’s too low-level for a typical developer to operate on.
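To make that point concrete, here is a minimal sketch of a framework-free reactive pipeline on the JDK’s built-in Flow API (Java 9+). The class and method names (`FlowDemo`, `collect`) are my own illustration, not from any framework:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Flow;
import java.util.concurrent.SubmissionPublisher;

// A tiny reactive pipeline using only java.util.concurrent.Flow:
// a publisher, a back-pressured subscriber, and nothing else.
public class FlowDemo {

    // Publishes the given items and returns what the subscriber received, in order.
    public static List<String> collect(String... items) {
        List<String> received = new ArrayList<>();
        CountDownLatch done = new CountDownLatch(1);

        Flow.Subscriber<String> subscriber = new Flow.Subscriber<>() {
            private Flow.Subscription subscription;

            @Override public void onSubscribe(Flow.Subscription s) {
                subscription = s;
                s.request(1);                 // back-pressure: demand one item at a time
            }
            @Override public void onNext(String item) {
                received.add(item);
                subscription.request(1);      // signal readiness for the next item
            }
            @Override public void onError(Throwable t) { done.countDown(); }
            @Override public void onComplete() { done.countDown(); }
        };

        try (SubmissionPublisher<String> publisher = new SubmissionPublisher<>()) {
            publisher.subscribe(subscriber);
            for (String item : items) publisher.submit(item);
        }                                      // close() triggers onComplete
        try { done.await(); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        return received;
    }
}
```

The subscriber requests one element at a time, which is exactly the demand-driven back-pressure the Reactive Streams specification standardizes; frameworks like Reactor or Akka Streams build richer operators on top of the same contract.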
I do respect the Akka team’s vision in creating the specification and implementing it as part of Akka. But it doesn’t mean that latecomers are not fully reactive.
One natural advantage Akka has is the ease with which it implemented the reactive model, thanks to the actor architecture, where message passing becomes events and actors become components managed by the underlying execution engine, which provides resiliency. But IMO that reason alone is not sufficient to call other frameworks “not fully reactive”. I hope the author didn’t mean to say that anyone not leveraging an actor-based programming model is not “truly reactive”.
Modeling :
IMO actors don’t map directly to a thing in DDD. You can implement DDD concepts using actors in the same way you can using components in Spring. Whether it is an aggregate, value object, repository, or entity, there is no direct one-to-one mapping that makes modeling easier in either framework. Your comment (“Akka presents your cloud as a virtual supercomputer, fitting billions of actors, across many nodes”) reflects elasticity better than modeling. More on this later.
You can build a DDD application comfortably with both frameworks. In fact, I would even argue it is far easier to model and build a complex enterprise application, which is normally a combination of greenfield components, brownfield components and legacy applications, using Spring, as Spring provides a rich ecosystem for all natures of application development (support for EAI, message-driven systems, services, relational as well as NoSQL integration, serverless, REST, etc.).
Application State :
IMO it is far easier, and recommended, to build your services as stateless as possible and leave the state to a data store, because handling state is hard: transactions, durability, serializability and concurrency play a major role, are overlooked most of the time, and you end up paying a big price later. Developers shouldn’t take application state into their own hands without clearly analyzing the price they will pay.
Your comment, “distributed across a cluster, contain in-memory state and the ability to make real-time decisions without contention; share events and keep state private”, doesn’t reflect the true complexity of the entity design developers have to go through on the Akka side. The transactional boundary that comes with the aggregate root in the DDD model is very hard to implement by managing transactions outside the data store (not impossible, but not easy either). Have a look at the PersistenceFactory implementation under the hood and what developers have to do to enable Akka to manage it carefully. Essentially, any update for a given entity ID is forced to go to the corresponding predefined actor’s mailbox, which runs on a given node in a given cluster; that is how serializability is maintained. As I said, IMO that is too much overhead on the application side, which could be handled easily by keeping the state in a data store (relational or NoSQL, depending on the need).
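The routing described here, where every update for a given entity ID is funneled through one single-threaded mailbox, can be sketched with plain JDK executors. This is only an illustration of the principle, not Akka’s actual implementation; `EntityRouter` and `tell` are hypothetical names:

```java
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Hypothetical sketch of per-entity serialization: all updates for one
// entity ID are routed through a single-threaded executor (its "mailbox"),
// so they can never run concurrently and need no locks.
public class EntityRouter {
    private final Map<String, ExecutorService> mailboxes = new ConcurrentHashMap<>();

    // Enqueue an update for the given entity; the returned future completes
    // once the update has been applied in mailbox order.
    public CompletableFuture<Void> tell(String entityId, Runnable update) {
        ExecutorService mailbox = mailboxes.computeIfAbsent(
                entityId, id -> Executors.newSingleThreadExecutor());
        return CompletableFuture.runAsync(update, mailbox);
    }

    public void shutdown() {
        mailboxes.values().forEach(ExecutorService::shutdown);
    }
}
```

Even this toy version shows the trade-off under discussion: serializability comes for free per entity, but the state now lives in process memory, and durability, recovery and cluster placement all become the application’s problem rather than the database’s.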
Resiliency
This is where things get interesting. If you argue that actors are resilient because Akka manages them and rolls them onto a healthy node, that is a valid argument. But IMO that is not the right abstraction level at which to handle resiliency in the way you explained. If my running service has failed or is not active, Akka is not the only supervisor that can handle it. There are alternative solutions that are far more portable than Akka. For example, a container cluster manager that checks the health of a running container and moves it to a healthy node does the exact same job, irrespective of whether it is a Spring application, an Akka application, or for that matter any other containerized application (Go, Java, .NET, etc.).
This is the space where Akka is not very explicit, IMO. I have a feeling that Akka in some ways indirectly competes with the whole container and cluster management ecosystem: two Akka actors running on different nodes under Akka clustering are in some ways comparable to two running containers managed by a cluster manager like Mesos or Kubernetes. I can achieve the resilience you mentioned at a higher level of abstraction, better than Akka. If you study service-mesh-based solutions, they are even better for this kind of cross-component communication and resiliency.
IMO Spring doesn’t lack this capability; it is simply not its focus, and it isn’t even aiming to solve it. Even if it did, I would probably prefer to give the responsibility of moving a failed component to another healthy node to the cluster manager, which can do a better job in a much more portable way.
Persistence
This goes hand in hand with application state. As I mentioned, Akka maintains state outside the data store, so your component/service is no longer stateless (irrespective of whether you manage the state explicitly or Akka does it for you). The fact is that the state is managed outside the data store, where somebody else has to handle the transactions, concurrency, serializability, etc. IMO a NoSQL data store can do a far better job handling these than a framework taking them on explicitly (again, not impossible, but not easy).
Also, I don’t agree that Akka provides many plugins on the persistence side. Try implementing “persistent actors uniformly shard across the cluster, enabling singleton, non-replicated behavior, backed by the immutable event log” on a data store other than Cassandra or a relational database, say a document store like Couchbase, or a graph store.
Elasticity
This goes hand in hand with resiliency. A good container cluster manager can do a far better job at elastic scaling, and that is a much more portable and diversified solution for any containerized workload. Again, the question boils down to the level of abstraction at which you want elasticity: actor level or container (process) level?
Standards
IMO none of “event sourcing, CQRS, domain driven design, and eventual consistency” are standards. They are all programming patterns that you apply based on your need.
You can build a good DDD application with Spring too.
You can build a nice eventually consistent application with Spring too (Spring + Kafka), an event-sourced application with Spring too (Spring + AxonDB), and a nice CQRS application with Spring too (Spring + Axon). Use the right combination of libraries.
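As a concrete illustration that these are patterns rather than framework features, here is a minimal, framework-free sketch of event sourcing with a CQRS-style read model. All names are illustrative; this is not the Axon or Spring API:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Minimal CQRS/event-sourcing sketch: commands append events to a log
// (the write side), and a separate read model is projected from that log.
public class CqrsSketch {
    record Deposited(String account, long amount) {}

    private final List<Deposited> eventLog = new ArrayList<>();   // write side: append-only
    private final Map<String, Long> balances = new HashMap<>();   // read side: projection

    // Command handler: validates, then appends an event; it never mutates
    // the read model directly.
    public void deposit(String account, long amount) {
        if (amount <= 0) throw new IllegalArgumentException("amount must be positive");
        Deposited event = new Deposited(account, amount);
        eventLog.add(event);
        project(event);  // in a real system this runs asynchronously, hence eventual consistency
    }

    private void project(Deposited e) {
        balances.merge(e.account(), e.amount(), Long::sum);
    }

    // Query side reads only the projection, never the event log.
    public long balanceOf(String account) {
        return balances.getOrDefault(account, 0L);
    }

    public List<Deposited> events() { return List.copyOf(eventLog); }
}
```

Swap the in-memory list for Kafka or an event store and the map for a query database, and you have the shape of the Spring + Kafka/Axon combinations mentioned above.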
Integrations :
IMO that is not a fair assessment. Spring provides a far wider spectrum of integration libraries and patterns than Akka, irrespective of greenfield, brownfield, or legacy. If your assessment is purely based on blocking vs non-blocking, then it was true only before Spring 5, but no more. In fact, Alpakka even provides a library to integrate with the Spring ecosystem, so that it can play nicely with the larger Spring world.
Learning Curve:
Learning is a lifelong endeavor every developer has to go through, especially in the software world, where the rate of change is high. So I am fine with the complexity as long as it is deserved.
In summary, I feel the comparison is biased towards Akka and is not apples to apples.
If the author believes the actor-based programming model is the only model that will be successful in the future, then it is a different story. At that point it is not Akka vs Spring; it is the actor model vs other programming models. If that were the case, Erlang should have been the most successful language of the last 15+ years. Irrespective of its commendable success in the highly reliable and scalable telephone-switching space, it was not adopted widely for general-purpose application development.
The other thing IMO the author does not sufficiently highlight is the duality of the Akka ecosystem. As I said earlier, Akka is not a pure-play application development framework like Spring; it serves more as a building block for creating distributed applications (which was its original goal). I believe that is why even Lightbend does not promote Akka as a framework for building microservices or web applications: it built Lagom for microservices and Play for web applications (both utilizing Akka under the hood). As I mentioned at the beginning, Lagom vs Spring is a far better comparison than Akka vs Spring, and Play vs Spring MVC is a far better comparison than Akka vs Spring MVC. In the Lagom vs Spring case, Spring wins hands down on many aspects (which deserves a separate blog of its own; I prepared some material comparing the two early this year, and I may publish a separate post on Medium soon).
Last but not least, “not looking beyond Spring” does not necessarily mean “inertia”. It could very well mean a lack of sufficient advantages in the alternative; it is simply an engineering trade-off. Name one other framework that is more than 15 years old, has adopted all the cloud-native libraries and best practices, and has become the de facto tool for developing microservices in the JVM ecosystem. I am confident that the majority of companies use Spring to develop microservices not because they like sticking with legacy, nor simply because more developers are available, but because it is still one of the very few frameworks that catches up and innovates fast. That is easily observable as long as you are ready to look in the right place.
Finally, my personal opinion is that Akka is not a bad framework. IMO the actor model is just not the high-level abstraction I need to build every enterprise application on. Use it where it is warranted; don’t force it everywhere.
>>Unfortunately, I don’t share the same opinion as you. Because I don’t think the comparison is apple to apple. The real comparison I would make is more on Spring vs Lagom space and not directly on Spring vs Akka.
I understand, but I would suggest Spring vs Akka is more apples to apples than Spring vs Lagom. Lagom vs Spring Boot would probably be more apt.
>>Before going too far, let me give a little background about my journey, which taught me a different experience.
Your journey sounds fantastic and amazing; congratulations on all the successes. For almost 10 years I too was in the Spring world, from when Rod Johnson came out with his “Expert One-on-One” book, pre-Spring, on which a lot of Spring was eventually based. It was a breath of fresh air at the time of bloated J2EE, but that said, see where Spring is today. In 2010 we started on the reactive stack and have never looked back.
Now let me respond to your points in each area.
Reactive :
>>Your assessment of “Spring 5, being advertised as “reactive”, but new asynchronous capabilities do not equal reactive” really surprised me.
By reactive I mean the four tenets of the Reactive Manifesto (https://www.reactivemanifesto.org/): if the framework provides resiliency and elasticity, is based on a message-driven paradigm, and is responsive, then yes, it is reactive.
Modeling :
>>>You can build a DDD application comfortably using both frameworks. In fact, I would even argue its far easier to model and build a complex enterprise application, which is normally a combination of greenfield components + brownfield components + legacy applications using spring, as spring provides a rich ecosystem for all nature of application development (Support for EAI, Message-driven, Services, Relational as well NoSql integration, serverless, REST etc).
I would agree that you can model DDD in any ecosystem, be it Spring, Akka, or the .NET frameworks. The key is which one lends itself easily to doing so. For example, between Scala and Java, you could write imperative and mutable code in Scala, but the language does not lend itself easily to that; it pushes you in the direction of best practices by default.
Application State :
>>Your comment “distributed across a cluster, contain in-memory state and the ability to make real-time decisions without contention; share events and keep state private” doesn’t reflect the true complexity involved in the proper entity design developers have to go thru on the Akka side.
Again, the argument here is that the actor model makes it much easier to do that. With every actor working on a single thread at any given time and doing only one thing, it takes care of a lot of the threading issues and contention that haunt us in other frameworks.
Resiliency
>> This is where things get interesting. If you argue that Actors are resilient, because Akka manages them and roll them into a healthy node etc, then its valid argument. But IMO that’s not the right abstraction level to handle resiliency in the way you had explained. If my running service is failed or not active, Akka is not the only supervisor who can do that.
Absolutely, you can roll your own solutions and plumb various technologies together, but Akka gives it to you under the hood by default, with no work needed.
>> If you study the Service Mesh-based solutions, it’s even better for this kind of cross-component communication and resiliency.
You can easily use service-mesh solutions while running an Akka cluster on Kubernetes; that is at a macro level. At the micro level, Akka gives you all of that within a single JVM. We use service-mesh solutions like Consul and Istio all the time, but at a macro level and for different needs.
>>IMO Spring doesn’t lack this capability because that’s not its focus. It’s not even aiming to solve.
Sure, that is fine
Persistence
>>This goes hand in hand with the application state. As I mentioned, Akka tries to maintain the state outside of data store, where your component/service is no more stateless (irrespective whether you manage the state explicitly or Akka does it for you. The fact is that the state is managed outside of Data Store(where somebody else has to handle the transactions, concurrency, serializability etc). IMO NoSQL DataStore can do a far better job handling these instead of a framework taking it explicitly (again not impossible, but not easy).
The state is safe in an Akka actor because of the actor design: an actor does only one thing at a time.
>>Also, I don’t agree with Akka provides many plugins on the persistence side. Try implementing “persistent actors uniformly shard across the cluster, enabling singleton, non-replicated behavior, backed by the immutable event log” on a data store other than Cassandra and relational, say Document Store like Couchbase or Graph Store.
There are multiple plugins for using Akka persistence with Couchbase and other databases; here are two examples:
https://github.com/akka/akka-persistence-couchbase
https://github.com/Product-Foundry/akka-persistence-couchbase
Elasticity
>>This goes hand in hand with Resiliency. A good container cluster manager can do a far better job on elastic scaling. That is a much more portable and diversified solution for any containerized workload. Again the question boils down to the level of abstraction where you want to have elasticity. Actor level or container(process) level?
Again, yes: as I said, you can plumb together a solution from various technologies and make it work, or you can use something provided by default to make your life easy. Scalability, both vertical and horizontal, is built into the Akka ecosystem.
Standards
>>You can build a good DDD application with Spring too.
Agreed
>>You can build a nice eventually consistent application with spring too (Spring + Kafka). You can build an event sourced based application with Spring too (Spring + AxonDB) etc. You can build a nice CQRS application with Spring too (Spring + Axon). Use the right combination of libraries.
Absolutely agreed; as I said, it is a comparison of what makes things easier and pushes you in the direction of best practices. Lagom comes bundled with all of this together, plus goodies like default circuit breakers, so that you do not have to worry about any of it. It also provides guard rails so that you can develop your microservices with best practices built in for you.
Integrations :
>> If your assessment is purely based on blocking vs non-blocking, then its true only before spring 5. But no more.
You are correct that the Spring 5 effort to integrate with the other reactive libraries is a good move, but the main contention is the style of integration: chained futures are hard to maintain and debug. With the right developers it would probably work fine, but there is too much to think about there.
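For illustration, the chained-futures style in question looks roughly like this in plain JDK `CompletableFuture` code (the service calls are hypothetical stand-ins, not real APIs); every extra step, error handler, or timeout has to be threaded through the chain by hand:

```java
import java.util.concurrent.CompletableFuture;

// Illustration of the "chained futures" style: each step composes on the
// previous one, and error handling has to be woven through the whole chain.
public class ChainedFutures {
    // Stand-ins for remote calls; in real code these would hit a service.
    static CompletableFuture<String> fetchUser(String id) {
        return CompletableFuture.supplyAsync(() -> "user:" + id);
    }
    static CompletableFuture<Integer> fetchScore(String user) {
        return CompletableFuture.supplyAsync(() -> user.length());
    }

    public static String pipeline(String id) {
        return fetchUser(id)
                .thenCompose(ChainedFutures::fetchScore)   // async step depending on an async step
                .thenApply(score -> "score=" + score)      // synchronous transformation
                .exceptionally(t -> "fallback")            // errors only surface at the end of the chain
                .join();
    }
}
```

Two or three stages are manageable; the maintenance complaint arises when chains fan out, join, and branch on errors, at which point stack traces point at executor internals rather than at the business step that failed.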
Learning Curve:
>>At that point is not Akka vs Spring. It’s actor vs another programming model. If that’s the case, Erlang should have been the most successful language for the last 15+ years. Irrespective of its commendable success on highly reliable/scalable telephone switching space, it wasn’t adopted widely for general purpose application development.
That statement is correct as well, because of the complexity of Erlang and it being a purely functional language. Scala got people to look into actors, and then Akka made them mainstream. Today many high-throughput systems use Akka (https://www.lightbend.com/case-studies).
>>Like I mentioned in the beginning, Lagom vs Spring is a far better comparison than Akka vs Spring. Play vs Spring MVC is a far better comparison than Akka vs Spring MVC.
I do not agree with the comparison of Lagom vs Spring; it should be Lagom vs Spring Boot, or some other microservice-focused library, instead.
>>In Lagom Vs Spring case, spring wins hands down on many aspects (which should be a separate blog on its own. I have some material prepared comparing those two early this year. Maybe I shall publish a separate post on Medium regarding this topic. Hopefully soon).
Would love to read.
>>Last but not least, “looking not beyond spring” doesn’t necessarily mean “inertia”. It could very well mean the lack of sufficient advantages the alternate option provide. Simply it’s an engineering trade-off. Name one framework which is more than 15 years old, still adopted all cloud-native libraries or best practices and become defacto tool for developing microservices within the JVM ecosystem.
Well, on that argument, Java is the oldest and one of the most successful languages, but there are others on the JVM, like Scala, that are gathering steam. Being old does not mean being good for the future as well. Did Java eventually feel the need to introduce lambdas? Sure it did. Would Spring build a reactive-first approach? Sure, and Spring 5 is a start. But how many developers understand the tenets of
Domain-Driven Design
Distributed Systems Design
Event-Sourcing, CQRS
Scalability, Resilience, Consistency models
Delivery Guarantees
Microservice systems
The Actor model
CAP theorem (and more)
SOLID design principles, hexagonal and onion architecture
CRDTs, the Saga pattern
Asynchronous, non-blocking designs
These are what define the architecture of the future, and Lagom and Akka are right there, spot on.
I will leave you with one excerpt from “Hackers and Painters” that shows the analogy we are talking about: we can carry on in the old ways and let the competition beat us, or we can do the right thing.
“So if Lisp makes you a better programmer, like he says, why wouldn’t you want to use it? If a painter were offered a brush that would make him a better painter, it seems to me that he would want to use it in all his paintings, wouldn’t he? I’m not trying to make fun of Eric Raymond here. On the whole, his advice is good. What he says about Lisp is pretty much the conventional wisdom. But there is a contradiction in the conventional wisdom: Lisp will make you a better programmer, and yet you won’t use it.
Why not? Programming languages are just tools, after all. If Lisp really does yield better programs, you should use it. And if it doesn’t, then who needs it?
This is not just a theoretical question. Software is a very competitive business, prone to natural monopolies. A company that gets software written faster and better will, all other things being equal, put its competitors out of business. And when you’re starting a startup, you feel this very keenly. Startups tend to be an all or nothing proposition. You either get rich, or you get nothing. In a startup, if you bet on the wrong technology, your competitors will crush you.”
Details here -> http://www.paulgraham.com/avg.html
https://www.quora.com/How-does-AKKA-compare-to-Spring-Cloud
“Spring 5 is a start but how many developers understand the tenants of
Domain-Driven Design
Distributed Systems Design
Event-Sourcing, CQRS
Scalability, Resilience, Consistency models
Delivery Guarantees
Microservice systems
The Actor model
CAP theorem (and more)
SOLID design principles, hexagonal and onion architecture
CRDTs, the Saga pattern
Asynchronous, non-blocking designs”
Today? Enough to join teams and mentor them in the cloud approach. Scala is a niche language for Spark, perhaps Gatling and stress tests, and some echoes of the past. Don’t forget that the HAL has now moved outside the services, either to AWS because it is cost-effective, or to OpenShift, decoupling infrastructure from devs to devops, provided an internal cloud is a requirement. You don’t have devops? Go with Zuul + Hystrix + Ribbon, and it will be faster and easier than going for Scala/Akka, keeping in mind that support won’t be so easily skipped.
Just look at how many projects are being migrated onto Java microservices rather than Scala. Even LinkedIn decided to limit its Scala implementations and keep the focus on Java and Python components.
Simply put, Scala devs are not sufficiently available on the market, and the hype that it will take over Java microservices is no longer tenable.
Hi Artur, thanks for your response, though if you notice, the discussion is not Scala versus Java; both have their good reasons and both are popular. We are debating Spring versus Akka, and Akka provides implementations for both Java and Scala: https://akka.io/docs/
What will be the role of these frameworks (Spring, Lagom/Akka) when we are developing on a Kubernetes cluster with a service mesh on an Envoy proxy like Istio, where all the microservices’ cross-cutting concerns are handled by the mesh?
You would still have to write your microservices, right 😉
Even a hardware salesman can see the difference.
I came across this post back in November and finally have a bit of free time to respond. I felt compelled to, because the arguments Vasu makes in favor of Spring 5 and in opposition to Akka were quite astounding to me (not in a good way), since he states that he has used Akka and was an early signer of the Reactive Manifesto.
It seems to me, as an “outsider” looking in (I say outsider because I am not a developer; I sell hardware, but I used to sell software, so I know “a little”), that the only good reason to use Spring 5 (with WebFlux and CompletableFutures) instead of Lagom, Play, and the underlying Akka actor model, if you want to build reactive microservices, is that you are familiar with Spring and not familiar with Akka and the actor model. That seems to be the primary argument Vasu is making, as he cites the Akka learning curve while also claiming that pretty much anything you can do with Akka you can do with Spring 5. Vikas provides good responses to Vasu’s claims, and provides a link to the OpenCredo presentation slide deck.
The OpenCredo slide deck is instructive, and the presentation from which the deck derives is even more instructive – https://www.youtube.com/watch?v=mQI2C5VxneU
IMO, set aside all the arguments Vasu makes about the Spring ecosystem being much richer than Akka/Lagom, thus making Spring 5 “easier” for developers, and about how, by leveraging libraries, Spring 5 “can do everything Akka can do” (to paraphrase Vasu), as well as the counterpoints Vikas makes about Akka’s libraries and ecosystem. Instead, look at the underlying model of each framework.
IMO, the decision of which framework to use should come down to three criteria – resiliency enabling responsiveness, scale/elasticity and concurrency. Do you want those things at their fullest?
Do you want your unit of failure to be a microservice or a component of that microservice, where the component (actor) can fail in isolation and the microservice keeps running, or keeps “responding”? When your entire microservice is the unit of failure, once it fails it is not responsive, and it stays unresponsive until your cluster manager instantiates a new instance of the microservice.
Do you want your unit of scale to be a microservice or a component of that microservice (an actor)? Since some microservices can be quite large, why would you want to scale the entire microservice when only one or two components of the microservice need to scale?
It’s a matter of granularity. Actors within a microservice can scale independently of other actors that comprise that microservice, and the same with failure and recovery.
Do you want your application to make maximum use of CPU resources (cores), or are you OK with wasting those resources? Webflux utilizes a single-threaded event loop; it is nowhere near as efficient as the actor model as implemented in Akka, whose non-blocking concurrency model leverages all CPU cores at maximum efficiency.
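To make the contrast concrete, here is a minimal, stdlib-only Java sketch (hypothetical code, not Webflux or Akka): on a single-threaded event loop, one blocking task holds up every task queued behind it, while a pool with more threads keeps the quick task responsive.

```java
import java.util.List;
import java.util.concurrent.*;

class EventLoopDemo {
    // Submit a slow (blocking) task and then a fast task, and record
    // the order in which they complete on the given executor.
    static List<String> run(ExecutorService loop) throws Exception {
        List<String> order = new CopyOnWriteArrayList<>();
        loop.execute(() -> { sleep(200); order.add("slow"); }); // blocking work
        loop.execute(() -> order.add("fast"));                  // quick work
        loop.shutdown();
        loop.awaitTermination(5, TimeUnit.SECONDS);
        return order;
    }

    static void sleep(long ms) {
        try { Thread.sleep(ms); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
    }
}
```

With `Executors.newSingleThreadExecutor()` the fast task always finishes last, stuck behind the blocking one; with `Executors.newFixedThreadPool(2)` it finishes first, because a second core picks it up while the slow task blocks.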
Put another way, a microservice made up of actors is a microservice system in and of itself. Each microservice is a distributed system in its own right.
Sorry Vasu, but a framework that supports the Reactive Streams spec (support for asynchronous data streams) does not make that framework “reactive” = responsive to events wrapped in messages. The Reactive Streams spec was not designed for building applications but rather for stream processing. Yes, it can be used for app dev, but at a higher level of abstraction, and the dataflow programming model gives you less control. Webflux, based on Project Reactor, will likely not be as low-level as you need or want.
Asynchronous messaging as the only means of communication between the components of a microservice is what makes a framework reactive. Yeah, the term reactive has many meanings. But I am talking about the way the Reactive Manifesto describes “reactive”. Surely Vasu knows what that definition is having been an early signer of the Reactive Manifesto?
With Spring 5, microservices communicate with each other via asynchronous messaging (with the help of a message bus or broker like MQ or Kafka). With Akka, microservices communicate with each other via asynchronous messaging (natively) AND the actors (components) that make up the microservice communicate with each other via asynchronous messaging (vs. an event loop).
Lorenzo Nicora at OpenCredo puts it this way in his presentation: Pivotal has chosen to define “reactive” in the macro sense, or with a top-down approach, where microservices communicate with each other via asynchronous messaging, and with isolation at the microservice level.
Pivotal has chosen that definition, while claiming incorrectly that it meets the definition laid out in the Reactive Manifesto, because that is all you can do with Spring 5: objects in Spring interact by calling methods synchronously, and turning that communication into asynchronous calls via CompletableFutures does not change the unit of failure or scale…it is still the entire microservice.
The correct definition of reactive as described in the Reactive Manifesto encompasses a micro, or bottom-up, approach, where asynchronous message passing as the means of communication occurs within the microservice = between the components that make up the microservice.
The difference is profound.
Message-handling code is inherently thread safe since actors process just one message at a time, and they only consume a thread when they receive a message and then “do something” in reaction to it. Since there are no worries about locks and synchronization, all cores in a CPU can be utilized safely and at all times. Think about that, compared to a single-threaded event-loop model.
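As a rough illustration of why one-message-at-a-time processing removes the need for locks, here is a minimal, stdlib-only Java sketch of an actor-style mailbox (hypothetical code, not Akka's actual API or implementation): the handler is only ever run by one thread at a time, so it can mutate the actor's private state freely, even though `tell` may be called from many threads at once.

```java
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.function.Consumer;

// Minimal "actor" sketch: a mailbox drained by at most one thread at a time,
// so the message handler can mutate private state without any locking.
final class MiniActor<M> {
    private final ConcurrentLinkedQueue<M> mailbox = new ConcurrentLinkedQueue<>();
    private final AtomicBoolean scheduled = new AtomicBoolean(false);
    private final ExecutorService pool;
    private final Consumer<M> handler;

    MiniActor(ExecutorService pool, Consumer<M> handler) {
        this.pool = pool;
        this.handler = handler;
    }

    // Fire-and-forget send; safe to call from any thread.
    void tell(M msg) {
        mailbox.add(msg);
        if (scheduled.compareAndSet(false, true)) {
            pool.execute(this::drain);
        }
    }

    private void drain() {
        M msg;
        while ((msg = mailbox.poll()) != null) {
            handler.accept(msg);   // one message at a time: no locks needed
        }
        scheduled.set(false);
        // Re-check in case a message arrived just after poll() returned null.
        if (!mailbox.isEmpty() && scheduled.compareAndSet(false, true)) {
            pool.execute(this::drain);
        }
    }
}
```

The `scheduled` flag guarantees at most one drain task is active, which is the whole trick: the pool can have many threads (keeping all cores busy across many actors), yet each actor's handler is effectively single-threaded.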
Lorenzo also speaks to the difficulties of CompletableFutures with respect to timeouts: they generate random bugs that are hard to identify and troubleshoot, since if the Future is never completed you have to work out where the exception happened, and furthermore there is no proper way to separate error handling from failure handling.
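A small Java 9+ sketch of that last point (a hypothetical example of mine, not code from Lorenzo's talk): with `orTimeout` plus `exceptionally`, one catch-all callback receives every kind of Throwable, so an infrastructure failure like a timeout, a bug, and a domain error all land in the same place unless you inspect exception types by hand.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;

class FutureTimeoutDemo {
    // Stand-in for a remote call that takes too long (hypothetical).
    static CompletableFuture<String> slowCall() {
        return CompletableFuture.supplyAsync(() -> {
            try { Thread.sleep(500); } catch (InterruptedException e) { throw new RuntimeException(e); }
            return "result";
        });
    }

    static String callWithTimeout() {
        return slowCall()
                .orTimeout(100, TimeUnit.MILLISECONDS) // Java 9+: completes exceptionally on timeout
                .exceptionally(t -> {
                    // Catch-all: a timeout (failure), a NullPointerException (bug)
                    // and a validation problem (error) are indistinguishable here
                    // unless you branch on the Throwable's type yourself.
                    return "fallback: " + t.getClass().getSimpleName();
                })
                .join();
    }
}
```

Calling `callWithTimeout()` returns the fallback string rather than the result, and nothing in the types forces you to treat the timeout differently from any other exception.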
When using actors, failure is handled by the default supervision strategy inherent in Akka: the supervisor of an actor is notified when the actor throws an exception, with the result that the microservice remains responsive. Errors, meanwhile, are handled by the message-processing logic, with an appropriate response sent back to the sending actor.
The bottom line: Akka enables simpler, more testable concurrent code and requires far less discipline than plain Java 8 concurrent code.
The comparison boils down to the underlying models used by each respective framework, notwithstanding all the “good stuff” that both Pivotal and Lightbend have added to their frameworks.
The underlying model of Spring is MVC, where objects communicate with each other in process, in memory, based on a synchronous request/response programming model with shared state, and with blocking RPC calls if you need to go over the wire. In order to address the new realities of multi-core CPUs and distributed computing, Spring must be “adapted”.
The underlying model of Akka is the actor model, which was conceived as a concurrency model. It achieves high concurrency via asynchronous, non-blocking message passing as the only means of communication between actors which never share state. But wait! Isn’t message passing the essence of distributed communication? It is. So with the actor model you inherently get concurrency AND distribution in one programming model. No adaptation required.
The actor model just happens to be a great fit for the microservices paradigm, and the folks at Lightbend were wise enough to leverage that model as the basis for their framework, which is meant to address the new realities of multi-core CPUs and distributed computing. Spring was developed as an alternative to J2EE back when single-core CPUs were the only option and apps ran on one machine (for the most part).
“The actor model has its theoretical roots in concurrency modeling and message passing concepts.” – referenced from the Diploma Thesis by Benjamin Erb titled “Concurrent Programming for Scalable Web Architectures” written in 2012 and freely available under a Creative Commons license here: http://berb.github.io/diploma-thesis/community/index.html
In Section 5.5, Event-driven Concurrency, Benjamin states that event-driven programming via an event loop is not a concurrency model per se, but rather enables concurrent programming.
“Events and event handlers are often confused with messages and actors, so it is important to point out that both concepts yield similar implementation artifacts. However, they do not have the exactly same conceptual idea in common. Actor-based systems implement the actor model with all of its characteristics and constraints. Event-driven systems merely use events and event handling as building blocks and get rid of call stacks.
“The actor model represents an entirely different approach that isolates mutability of state. Actors are separate, single-threaded entities that communicate via immutable, asynchronous and guaranteed messaging. Thus, actors encapsulate state and provide a programming model that embraces message-based, distributed computing.”
If you look at the differences between the two frameworks on the basis of their underlying models, then the difference is so easy to understand that even a caveman, er, salesman can do it.