“What Is Project Loom About?”


1. Overview

In this article, we'll take a quick look at Project Loom. In essence, the primary goal of Project Loom is to support a high-throughput, lightweight concurrency model in Java.

2. Project Loom

Project Loom is an attempt by the OpenJDK community to introduce a lightweight concurrency construct to Java. The prototypes for Loom so far have introduced a change in the JVM as well as the Java library.

Although there is no scheduled release for Loom yet, we can access the recent prototypes on Project Loom’s wiki.

3. Java’s Concurrency Model

Presently, Thread represents the core abstraction of concurrency in Java. This abstraction, along with other concurrency APIs, makes it easy to write concurrent applications.
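For instance, spawning a unit of concurrent work today is only a few lines over a kernel thread (the class and method names here are just illustrative):

```java
public class HelloThread {
    static String runOnThread() throws InterruptedException {
        StringBuilder out = new StringBuilder();
        // Each java.lang.Thread is backed by an OS kernel thread
        Thread worker = new Thread(() -> out.append("running concurrently"));
        worker.start();
        worker.join(); // wait for the kernel thread to finish
        return out.toString();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runOnThread());
    }
}
```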

However, since Java uses OS kernel threads for the implementation, it falls short of today's concurrency requirements. There are two major problems in particular:

  1. Creating a thread for every user, transaction, or session is often not feasible
  2. Context switches between OS threads are expensive

A possible solution to such problems is the use of asynchronous concurrent APIs, such as CompletableFuture and RxJava, provided that such APIs don't block the kernel thread.

On the other hand, such APIs are harder to debug and to integrate with legacy APIs. So we need a lightweight concurrency construct that is independent of kernel threads.
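To make the asynchronous style concrete, here is a small sketch using CompletableFuture from the standard library; the stages and names are illustrative, and no thread blocks between the stages:

```java
import java.util.concurrent.CompletableFuture;

public class AsyncPipeline {
    // Compose three stages asynchronously; each stage runs when
    // the previous one completes, without blocking a thread in between
    static CompletableFuture<String> fetchAndFormat() {
        return CompletableFuture
                .supplyAsync(() -> 21)            // stage 1: "fetch" a value
                .thenApply(n -> n * 2)            // stage 2: transform it
                .thenApply(n -> "result=" + n);   // stage 3: format it
    }

    public static void main(String[] args) {
        System.out.println(fetchAndFormat().join()); // prints result=42
    }
}
```

The price of this style is visible even here: the logic is split across callbacks, which is what makes debugging and stack traces harder than with plain sequential code.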

4. Tasks and Schedulers

Any implementation of a thread, either lightweight or heavyweight, depends on two constructs:

  1. Task (also known as a continuation) – A sequence of instructions that can suspend itself for some blocking operation
  2. Scheduler – For assigning the continuation to the CPU and reassigning the CPU from a paused continuation

Since Java relies on the OS implementation of continuations, which includes the native call stack along with Java's call stack, it results in a heavy footprint.

Similarly, the use of the OS scheduler is not optimal for Java applications in particular. For example, consider two threads that communicate frequently, such as a producer and its consumer. It would be better to schedule both these threads on the same CPU. But since the scheduler is agnostic to the thread requesting the CPU, this is impossible to guarantee.

Project Loom proposes to solve this through user-mode threads which rely on Java runtime implementation of continuations and schedulers instead of the OS implementation.

5. Fibers

In recent OpenJDK prototypes, a new class named Fiber has been introduced to the library alongside the Thread class.

There are two main differences:

  1. Fiber would wrap any task in an internal user-mode continuation. This would allow the task to suspend and resume in the Java runtime instead of the kernel
  2. A pluggable user-mode scheduler (ForkJoinPool, for example) would be used
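In the prototype builds, starting such a task looked roughly like the snippet below. Note that this is a sketch of a prototype-only API: Fiber is not part of the mainline JDK, and the API has continued to evolve.

```java
// Prototype-only API sketch -- Fiber is not in the mainline JDK
Fiber<?> fiber = Fiber.schedule(() -> {
    // runs on a user-mode continuation,
    // scheduled by a pluggable scheduler such as a ForkJoinPool
    System.out.println("inside a fiber");
});
```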

6. Continuations

A continuation (or co-routine) is a sequence of instructions that can yield and be resumed by the caller at a later stage.

Every continuation has an entry point and a yield point. The yield point is where it was suspended. Whenever the caller resumes the continuation, the control returns to the last yield point.
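While the Continuation class itself lives in the Loom prototypes, the entry-point/yield-point contract can be mimicked in plain Java with an explicit state machine. This is only a sketch of the idea, not the Loom API:

```java
import java.util.function.IntSupplier;

// A hand-rolled "continuation": each call resumes right after the
// previous yield point, with state preserved between calls.
class StepCounter implements IntSupplier {
    private int state = 0;      // the minimal "call stack" kept across suspensions

    @Override
    public int getAsInt() {     // entry point on every resume
        return state++;         // yield point: hand a value back to the caller
    }
}
```

Each call to getAsInt() plays the role of resuming the continuation: control returns to just after the last yield, because the field state survives between calls. Loom's continuations generalize this by saving the whole call stack instead of a single field.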

It’s important to realize that this suspend/resume now occurs in the language runtime instead of the OS. Therefore, it prevents the expensive context switch between kernel threads.

Similar to threads, Project Loom aims to support nested fibers. Since fibers rely on continuations internally, it must also support nested continuations. To understand this better, consider a class Continuation that allows nesting:

// Prototype-only API sketch -- Continuation is not in the mainline JDK
ContinuationScope scope = new ContinuationScope("outer");
Continuation cont1 = new Continuation(scope, () -> {
    Continuation cont2 = new Continuation(new ContinuationScope("inner"), () -> {
        Continuation.yield(scope); // suspends cont2 and the enclosing cont1
    });
    cont2.run();
});
cont1.run();

As shown above, the nested continuation can suspend itself or any of the enclosing continuations by passing the corresponding scope. For this reason, they are known as scoped continuations.

Since suspending a continuation also requires it to store the call stack, it's also a goal of Project Loom to add lightweight stack retrieval while resuming the continuation.

7. Scheduler

Earlier, we discussed the shortcomings of the OS scheduler in scheduling related threads on the same CPU.

Although it’s a goal for Project Loom to allow pluggable schedulers with fibers, ForkJoinPool in asynchronous mode will be used as the default scheduler. 

ForkJoinPool uses a work-stealing algorithm: every worker thread maintains its own deque of tasks and executes tasks from its head. An idle thread does not block waiting for a task; instead, it steals tasks from the tail of another thread's deque.

The only difference in asynchronous mode is that worker threads process their locally queued tasks in FIFO order, which suits event-style tasks that are never joined.
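Such a pool can already be constructed with today's standard API; the last argument of this ForkJoinPool constructor selects asynchronous mode:

```java
import java.util.concurrent.ForkJoinPool;

public class AsyncPool {
    static ForkJoinPool newAsyncPool() {
        return new ForkJoinPool(
                Runtime.getRuntime().availableProcessors(),
                ForkJoinPool.defaultForkJoinWorkerThreadFactory,
                null,     // no custom UncaughtExceptionHandler
                true);    // asyncMode: FIFO order for locally queued tasks
    }

    public static void main(String[] args) {
        ForkJoinPool pool = newAsyncPool();
        System.out.println(pool.getAsyncMode()); // prints true
        pool.shutdown();
    }
}
```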

ForkJoinPool adds a task scheduled by another running task to that worker's local queue, so it is likely to execute on the same CPU.
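This local-queue behaviour is what fork() does in a RecursiveTask: the forked subtask is pushed onto the current worker's own deque, so it tends to run on the same CPU unless another worker steals it. A minimal sketch:

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

// Sums the range [from, to] by splitting it in half; fork() pushes the
// left half onto the current worker's local deque.
class SumTask extends RecursiveTask<Long> {
    private final long from, to;

    SumTask(long from, long to) { this.from = from; this.to = to; }

    @Override
    protected Long compute() {
        if (to - from <= 1_000) {          // small enough: compute directly
            long sum = 0;
            for (long i = from; i <= to; i++) sum += i;
            return sum;
        }
        long mid = (from + to) / 2;
        SumTask left = new SumTask(from, mid);
        left.fork();                       // goes to this worker's local queue
        long right = new SumTask(mid + 1, to).compute();
        return right + left.join();
    }
}
```

For example, ForkJoinPool.commonPool().invoke(new SumTask(1, 100_000)) returns the sum of the first hundred thousand integers.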

8. Conclusion

In this article, we discussed the problems in Java’s current concurrency model and the changes proposed by Project Loom.

In doing so, we also defined tasks and schedulers and looked at how Fibers and ForkJoinPool could provide an alternative to Java using kernel threads.

I hope this blog gave you a quick overview of Project Loom. If you have any doubts, feel free to contact me at navdeep.parash@knoldus.com.

Thank you for sticking around to the end. If you liked this blog, please show your appreciation with a thumbs up and share it, and feel free to suggest improvements.
