Project Loom was introduced to make concurrent programming easy to write and debug, and to meet present-day multi-threading needs. We already have two programming models, synchronous and asynchronous, and both have their advantages and disadvantages. In this article I will explain why we still need Loom and how it improves on both synchronous and asynchronous programming.
Synchronous calls
As we all know, synchronous calls are blocking calls. When a request comes in, a thread is assigned to it, and that thread stays assigned until all the operations of the request are finished. This type of code is easy to write, understand, and debug, but it is costly because every call blocks.
The thread is blocked until the current statement has fully executed, even if that statement does not need the CPU (for example, while waiting on I/O). Since threads are a limited resource, using them in large numbers only slows the application down instead of making it faster, as other processes also need threads to perform their tasks. Scaling such an application is also difficult.
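The thread-per-request style can be sketched as follows. This is a minimal illustration, not code from any real framework; the class and method names (`SyncServer`, `handleRequest`) are made up for the example. The `Thread.sleep` stands in for a blocking I/O call, during which the worker thread sits idle but unavailable:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class SyncServer {
    static String handleRequest(int id) throws InterruptedException {
        // Simulate blocking I/O: the thread does no useful work here,
        // yet it stays occupied for the whole wait.
        Thread.sleep(100);
        return "response-" + id;
    }

    public static void main(String[] args) throws Exception {
        // A fixed pool of OS threads: only 4 requests can be in flight at once;
        // the other submissions queue up behind the blocked threads.
        ExecutorService pool = Executors.newFixedThreadPool(4);
        for (int i = 0; i < 8; i++) {
            int id = i;
            pool.submit(() -> {
                try {
                    System.out.println(handleRequest(id));
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```

With 8 requests and 4 threads, the second batch of 4 cannot even start until the first batch's blocking calls return, which is exactly the scaling limit described above.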
Asynchronous calls
Asynchronous calls overcome most of the problems of synchronous calls, such as scaling, which is an important factor for any application today. But they have their own disadvantage: asynchronous code is difficult to write and debug.
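As a small illustration of why asynchronous code is harder to follow, here is a sketch using Java's standard `CompletableFuture`. The values (`"user-42"` and so on) are placeholders for real I/O calls. No thread blocks while waiting, but the logic is scattered across callbacks, and a stack trace taken inside `thenApply` will not show where the chain was built:

```java
import java.util.concurrent.CompletableFuture;

public class AsyncExample {
    public static void main(String[] args) {
        // Each stage runs when the previous one completes; no thread waits,
        // but the control flow is split across callbacks.
        CompletableFuture<String> result = CompletableFuture
                .supplyAsync(() -> "user-42")       // stand-in for fetching a user id
                .thenApply(id -> id + ":profile")   // stand-in for loading the profile
                .exceptionally(ex -> "fallback");   // error handling lives in a callback too

        System.out.println(result.join()); // prints "user-42:profile"
    }
}
```

Compare this with the synchronous version, where the same logic would be three ordinary statements and a try/catch.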
What does Project Loom give us?
Project Loom tries to solve the problems of both asynchronous and synchronous calls. With Loom it becomes easy to write and debug code that is scalable, non-blocking, and lightweight: we write code that reads like synchronous code and is easy to understand, yet does not block an OS thread. Let's have a small introduction to Java threads and the changes Loom brings to them.
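Here is what that looks like in practice, assuming a JDK where Loom's virtual threads are available (they were finalized in JDK 21). The code reads like a plain blocking call, which is the whole point:

```java
public class LoomHello {
    public static void main(String[] args) throws InterruptedException {
        // Thread.ofVirtual() is the standard API since JDK 21.
        Thread vt = Thread.ofVirtual().start(() -> {
            // Looks like ordinary sequential, blocking code...
            System.out.println("running in " + Thread.currentThread());
        });
        vt.join(); // ...and debugging works the same way as with normal threads
    }
}
```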
Threads in Java
Threads in Java have traditionally been wrappers around actual OS threads. This comes with disadvantages: they are heavyweight, and context switching is done by the OS, not by the JVM. With Loom we get virtual threads, which are managed by the JVM. Now, let's talk about virtual threads.
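For contrast, this is the classic platform thread. It works on any JDK, but each instance maps one-to-one to an OS thread and reserves a large fixed stack (often around 1 MB by default, though the exact size is platform- and flag-dependent), which is why you cannot create millions of them:

```java
public class PlatformThreadExample {
    public static void main(String[] args) throws InterruptedException {
        // This java.lang.Thread is backed 1:1 by an OS thread.
        // Creation, scheduling, and context switching all go through the kernel.
        Thread t = new Thread(() -> System.out.println("hello from an OS-backed thread"));
        t.start();
        t.join();
    }
}
```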
What do virtual threads give us?
A virtual thread is managed by the JVM rather than by the OS. Virtual threads are multiplexed over the heavyweight kernel threads, which means that many virtual threads can run on top of a single kernel thread.
From the OS's perspective, you are spawning only a few threads, but in terms of the programming language or the JVM, you are using many.
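The multiplexing can be demonstrated directly, again assuming JDK 21+. The sketch below submits 100,000 tasks, each on its own virtual thread; the JVM runs them all over a small pool of carrier OS threads (by default roughly one per CPU core). Doing the same with 100,000 platform threads would exhaust memory:

```java
import java.time.Duration;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ManyVirtualThreads {
    public static void main(String[] args) {
        // One fresh virtual thread per task; the JVM multiplexes them
        // over a handful of carrier (OS) threads.
        try (ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor()) {
            for (int i = 0; i < 100_000; i++) {
                executor.submit(() -> {
                    try {
                        // sleep() parks the virtual thread and frees its carrier
                        // for another virtual thread, instead of blocking the OS thread
                        Thread.sleep(Duration.ofMillis(10));
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                });
            }
        } // close() waits for all submitted tasks to finish
        System.out.println("all 100,000 tasks done");
    }
}
```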
Context switching is also faster for virtual threads. The JVM manages the stack itself and knows which task is going to need more memory. A virtual thread's stack only has to hold the frames of the Java program, so its size is automatically reduced (it starts small and grows on demand), whereas an OS thread's stack is assigned a reasonably large fixed size so that it can support the different processes of the system.
One caveat: a virtual thread should never block inside native code, because if the virtual thread blocks there, it cannot be unmounted from its carrier, and we end up actually blocking the OS thread.
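This situation is called pinning. As a hedged sketch: on JDK 21, blocking while holding a monitor (a `synchronized` block) pins the virtual thread the same way native code does, and the JVM can report it via the `jdk.tracePinnedThreads` system property. (Note that JEP 491, delivered in JDK 24, removes the `synchronized` case; pinning in native code remains.)

```java
public class PinningDemo {
    static final Object LOCK = new Object();

    public static void main(String[] args) throws InterruptedException {
        // Run on JDK 21 with -Djdk.tracePinnedThreads=full to see a pinning report.
        Thread vt = Thread.ofVirtual().start(() -> {
            synchronized (LOCK) {       // monitor held: the virtual thread is pinned
                try {
                    Thread.sleep(100);  // on JDK 21 this blocks the carrier OS thread too
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        });
        vt.join();
    }
}
```

The fix, where you control the code, is to use `java.util.concurrent.locks.ReentrantLock` instead of `synchronized` around blocking calls.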
That’s all for this small introduction to Loom. If you have any queries or want to know more about it, you can add a comment below. I am happy to answer them. 🙂