
Java’s Project Loom, Virtual Threads and Structured Concurrency with Ron Pressler


In this podcast, Ron Pressler, technical lead for Project Loom at Oracle, sat down with InfoQ podcast co-host Charles Humble to discuss the project and its forerunner Quasar.  Topics include the differences between concurrency and parallelism; what virtual threads are; current issues with JVM concurrency; the Loom developer experience; pluggable schedulers; structured concurrency; and more.

Key Takeaways

  • In OpenJDK, Java threads are just thin wrappers around OS threads and OS threads are a very precious resource; a modern OS can't support more than a few thousand active threads at a time.
  • Currently Java developers are faced with a choice between writing code that's harmonious with the design of the platform but is not scalable, or writing code that makes efficient use of existing hardware with asynchronous programming and fighting the platform.  Project Loom is supposed to break that dilemma through virtual threads.
  • Virtual threads are cheap enough to have a single thread per task, and eliminate many of the common issues with writing concurrent code in Java.  There aren't any new APIs you have to learn, but you do need to unlearn many habits such as using thread pools to deal with resource contention.
  • Debugging is still challenging, and the Project Loom team is working with the various IDE vendors on how to present large numbers of threads.
  • Loom has support for a pluggable scheduler to allow developers to experiment with different scheduling algorithms; the default scheduler uses fork/join, which is based on a work-stealing algorithm.  In the future (post the first GA release), the team may look at adding explicit tail-call optimization.

Transcript

Introductions [02:20]

Charles Humble: Hello, and welcome to the InfoQ Podcast. I am Charles Humble. I am a freelance technology consultant and communicator, managing editor for cloud native consultancy firm Container Solutions and a co-host of this show. This week, I'm joined by Ron Pressler, who is the technical lead for Project Loom at Oracle. Ron, welcome to the InfoQ Podcast.

Ron Pressler: Hello and thank you for having me.

Charles Humble: I first came across your work around about 2016 via project Quasar, which was trying to add fibers or lightweight threads to Java using bytecode manipulation tricks and was something of a forerunner to Loom. Can you talk a bit about that project for us please?

Ron Pressler: Yeah, so back at the time I was working at a start-up that is no longer in business, and we had a database, a spatial database intended mostly for online games and location-based services. And that database was so fast that applications actually slowed it down. The only way to use it efficiently was with asynchronous APIs. But when we tried to show it to various game companies, they said there was no way they were going to use asynchronous APIs; it was too complicated, and they didn't want any of that. And that is how I started to get interested in the idea of user-mode threads. I took inspiration from Erlang and Go. I found a library that had the basics of what I needed in terms of bytecode manipulation; I took it, forked it, and built Quasar on top of it. So Quasar was actually born in order to make it easier to use our very fast database.

Charles Humble: And then how did you get from that to winding up running project Loom at Oracle?

Ron Pressler: I think it was around 2015 that I was invited to give a talk at JVMLS, the JVM Language Summit that takes place every year, well, not during the pandemic, at the Oracle offices in Santa Clara. It started way back when it was still Sun Microsystems. And I presented Quasar. I guess the Java team at Oracle was very interested in the idea; I think they had at least considered some aspects of it before. A year after that, Brian Goetz contacted me and asked if I wanted to come to Oracle and work on that. I wasn't ready at the time. But a year after that, in 2017, I told Brian that I was ready to join the team. And that's how I got to Oracle.

What are the differences between concurrency and parallelism? [02:51]

Charles Humble: Project Loom is mainly concerned with concurrency on the JVM. And I think that some of our listeners might be confused by the differences between concurrency and parallelism. Can you help us out? Can you give us a sort of definition of the two and what the differences are?

Ron Pressler: The way I define it and in fact that is also the way that the ACM recommends people teach it, is that concurrency is the problem of scheduling multiple largely independent tasks onto a usually smaller set of computational resources. So, we have a large set of tasks that might interact with one another, but otherwise are largely independent and they're all competing for resources. The canonical example is of course, a server. Parallelism on the other hand is a completely different algorithmic problem. Parallelism is when we have one job to do, say, invert a matrix, and we just want to do it faster. And the way we want to do it faster is by employing multiple processing units. So, we break the job down into multiple cooperating tasks and they all work together to accomplish that one task. So parallelism is about cooperating on a single thing, and concurrency is about different things competing for resources. So in Java, parallelism is perhaps best served by parallel streams. And of course, project Loom tries to address the problem with concurrency.
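As a concrete illustration of the distinction, here is a minimal Java sketch of parallelism in Ron's sense: one job, summing squares, split across cores with a parallel stream. The workload is an arbitrary stand-in.

    import java.util.stream.LongStream;

    public class ParallelismDemo {
        public static void main(String[] args) {
            // Parallelism: one job (summing squares), broken into cooperating
            // subtasks that run on multiple cores at once.
            long sum = LongStream.rangeClosed(1, 1_000_000)
                    .parallel()              // cooperate on a single computation
                    .map(i -> i * i)
                    .sum();
            System.out.println(sum);
        }
    }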

Why is the way concurrency currently works on the JVM problematic? [04:18]

Charles Humble: So, why is the way concurrency currently works on the JVM problematic?

Ron Pressler: Ah, so the problem is this. When we write a server, what we'd like to have is a high throughput. That means we'd like to have a high number of requests handled per unit of time, say, per second. So in fact, if we write any server and we want to do things concurrently, classically what we've done in Java since Java 1.0, is to accept a request and service it, start to finish, on a single thread. Once we're finished, that thread might be returned to some pool, but during the processing of a transaction, we hold onto a thread.

Ron Pressler: And the problem with that is that the number of things, the number of transactions we can process at any one time, is then limited by the number of threads. And while the number of open sockets that a modern server can support could be around one million and maybe even more, you cannot have one million threads. So if you could have say 2,000 threads or 4,000 or 5,000 threads, that is the maximum number of requests that you can handle concurrently, even though your server could hypothetically support a million requests at the same time, because you can have up to a million open sockets.
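A rough sketch of the classic thread-per-request style Ron describes; the handler body is elided, and the pool size of 200 is an arbitrary illustration of the thread cap.

    import java.io.IOException;
    import java.net.ServerSocket;
    import java.net.Socket;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class ThreadPerRequestServer {
        public static void main(String[] args) throws IOException {
            // The pool size caps concurrent requests well below the number
            // of sockets the OS could keep open.
            ExecutorService pool = Executors.newFixedThreadPool(200);
            try (ServerSocket server = new ServerSocket(8080)) {
                while (true) {
                    Socket socket = server.accept();
                    pool.submit(() -> handle(socket)); // one thread held per in-flight request
                }
            }
        }

        static void handle(Socket socket) {
            try (socket) {
                // read request, call services, write response, all on this thread
            } catch (IOException e) {
                // log and drop
            }
        }
    }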

Why can we have such a small number of threads? [05:31]

Charles Humble: And why can we have such a small number of threads?

Ron Pressler: The answer to that is that in OpenJDK, Java threads are just thin wrappers around OS threads, and OS threads are a very precious resource. A modern OS can't support more than a few thousand active threads at a time, for various reasons. So if that is the style of programming you want to write, you're limited by the number of threads. What people do, because of that limitation, is say, okay, we're not going to service the entire request from start to finish on the same thread; rather, we're going to do some processing, return the thread to the pool to be used by other transactions, and then, let's say, dispatch an outgoing request to a microservice. When the answer comes back, we'll continue it, possibly on another thread. That style of programming is called asynchronous programming, and there are various libraries, commonly called reactive, that are meant to make it easier.

This solves the throughput issue. So with reactive programming, you can get to the maximum throughput that your hardware supports, but then the way you write the program is not harmonious with the design of the platform, because everything about the Java platform is built around threads. If you have an exception, the troubleshooting context you get is the thread stack. And once you use those reactive frameworks, the thread stack you get doesn't really have any valuable information, any useful information, in it, because the same task really moves from one thread to another. You can't step through those requests or those transactions in the debugger, because the debugger steps through operations on a single thread, while with asynchronous programming your task actually moves from one thread to another. And you can't get actionable profiles using profilers like JFR, because they group operations by threads; they don't know about these asynchronous transactions.

You're fighting the platform left and right. So the way I like presenting it is to say that today, Java developers are faced with a choice. They can write code that's harmonious with the design of the platform, but it's not scalable, and then waste money on hardware, because they'll need more servers just because of the thread limitation. Or they can make efficient use of their existing hardware with asynchronous programming, but then they're going to waste more money on development and maintenance, because developing, maintaining and observing the program is so much harder. Project Loom is supposed to break that dilemma: let you write code that is simple, that is harmonious with the platform, and let you use the same tools you use today, like debuggers and profilers, to get the same kind of actionable information. Stack traces will be meaningful. And at the same time, you'll have the same throughput as you do with asynchronous programming.

Can you briefly describe what fibers or virtual threads actually are? [08:26]

Charles Humble: Can you briefly describe what fibers or virtual threads actually are? What do they do? What's going on at an OS level?

Ron Pressler: From the perspective of the Java programmer, as they sit down to write their code and run it, virtual threads are just threads. They're Java threads. The semantics are exactly the same in terms of everything, but under the covers, unlike today's threads, which we've started calling platform threads, they do not map one-to-one to OS threads. So a virtual thread is not a wrapper around an OS thread. Rather, it is a Java runtime construct that the OS doesn't know about.

Under the covers, the runtime, which is the libraries and the VM, maps a great many of those virtual threads, millions even, onto a very small set of OS threads. So from the OS perspective, your program might be running, I don't know, eight or 32 threads, but from your perspective, you'll be running a million threads, and those threads will be virtual. The name virtual was supposed to evoke an association with virtual memory, where you have some abstract, cheap, seemingly unlimited resource, like virtual memory or virtual threads, that is then automatically and cleverly mapped to some more limited physical resource, like RAM or OS threads.

Surely though, if I have more threads, I just have more of the problems associated with threads, right?

Charles Humble: Surely though, if I have more threads, I just have more of the problems associated with threads, right?

Ron Pressler: Ah, that's a great question. So, first of all, we need to understand that when we miniaturize something, or, as we prefer to say, right-size something, that difference in scale alone can make a qualitative difference. Even though technically you could think of a smartphone as a miniaturized mainframe, the experience is quite different. The problems are different, just because of the form factor, and because a smartphone normally has just one user, it doesn't have all the problems that a mainframe has, like managing batches from multiple users. And the same is true of threads. In large part, many of the problems that we've had managing threads stem from the fact that they were costly.

And because they were costly, we had to pool them and share them. And that has been the source of many issues in working with them. I want to give just two examples. The first is thread locals. Once you share threads in a pool, you have to be very, very careful if you use thread locals, and you probably do, because the libraries you use, say some logging library, use thread locals. A thread local is associated not with a task, but with a thread. So if the thread is shared among many tasks and you're not careful, your thread locals can leak from one task to another. And that can in fact be a security problem: you could leak secrets from one user to another.
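A minimal sketch of that leak, using a single-thread pool so the sharing is deterministic; the CURRENT_USER name is hypothetical.

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class ThreadLocalLeak {
        // The library associates state with the *thread*, not the task.
        static final ThreadLocal<String> CURRENT_USER = new ThreadLocal<>();

        public static void main(String[] args) {
            ExecutorService pool = Executors.newFixedThreadPool(1); // one shared thread
            pool.submit(() -> CURRENT_USER.set("alice"));           // task 1 forgets to clean up
            pool.submit(() ->
                    // Task 2 runs on the same pooled thread and sees Alice's data:
                    System.out.println("leaked: " + CURRENT_USER.get()));
            pool.shutdown();
        }
    }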

Charles Humble: You also have issues, I think, around interruption, right?

Ron Pressler: If you submit a task to a thread pool through an executor service, you get back a future, and then, say, you decide to cancel the task. If it's already been started on some thread, what that's going to try to do is interrupt the thread that is executing the task. But because threads are shared, by the time the thread is interrupted, it's possible that it is executing a different task altogether. So that means that everywhere we deal with interruption, we have to consider the possibility that we've received an interrupt aimed not at us but at some other task. Everywhere we deal with interrupts, we need to handle spurious interruptions.
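A sketch of the defensive pattern this forces on pooled code: the worker checks its own cancellation flag rather than trusting the interrupt. All names here are hypothetical.

    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;
    import java.util.concurrent.atomic.AtomicBoolean;

    public class SpuriousInterruptDemo {
        public static void main(String[] args) throws InterruptedException {
            BlockingQueue<String> queue = new LinkedBlockingQueue<>();
            AtomicBoolean cancelled = new AtomicBoolean(false); // the task's own cancel flag

            Thread worker = new Thread(() -> {
                while (true) {
                    try {
                        System.out.println("processed " + queue.take());
                    } catch (InterruptedException e) {
                        // On a shared pooled thread this interrupt may have been aimed
                        // at a task that ran here earlier, so check our own flag first.
                        if (cancelled.get()) {
                            Thread.currentThread().interrupt(); // genuine: restore and exit
                            return;
                        }
                        // Spurious: the interrupt targeted some other task; keep going.
                    }
                }
            });
            worker.start();

            queue.offer("job-1");
            Thread.sleep(100);
            worker.interrupt();       // simulate a stray interrupt: the worker ignores it
            queue.offer("job-2");     // still processed
            Thread.sleep(100);
            cancelled.set(true);
            worker.interrupt();       // genuine cancellation: the worker exits
        }
    }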

So these problems are entirely a result of tasks not being mapped one-to-one to threads, of the fact that we need to share our threads because they are so costly. With virtual threads, they're cheap enough to just have a single thread per task, and we don't have those two particular problems. And I will say that we won't have many other problems as well, because once the thread captures the notion of a task, working with threads becomes much simpler. That is the right size for threads. So even though you'll have more threads, I believe that will make working with threads much, much easier than having fewer threads.

What is the Loom developer experience like? [12:29]

Charles Humble: In general terms, what's the developer experience like? If I want to, not that many people do, but if I want to manually create either a platform thread or a virtual thread, how do I do that?

Ron Pressler: We have a new construct for that. The reason I'm hesitating is that the API changes very quickly, but there's a builder class called Thread.Builder, I think. It's changed a little bit in the past few weeks, but when you create a thread, you can choose whether you want a platform thread or a virtual thread. But it's interesting that you said that you don't do it commonly these days, and that is very true. In fact, that is one of the reasons we've decided to expose virtual threads as java.lang.Thread. We were a bit concerned that the Thread API has accumulated a lot of baggage over the last 25 years, and it has, and we thought that doubling down on it and using it for a new thing would get new developers to look at the API and see all the baggage it has.
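As Ron says, the API was changing quickly when this was recorded; for reference, the builder shape that eventually shipped in JDK 21 looks roughly like this:

    public class CreateThreads {
        public static void main(String[] args) throws InterruptedException {
            // A platform thread: a thin wrapper around an OS thread.
            Thread platform = Thread.ofPlatform().start(() ->
                    System.out.println("platform: " + Thread.currentThread()));

            // A virtual thread: a runtime construct the OS knows nothing about.
            Thread virtual = Thread.ofVirtual().start(() ->
                    System.out.println("virtual: " + Thread.currentThread()));

            platform.join();
            virtual.join();
        }
    }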

And to be honest, I found some methods on Thread quite recently that I hadn't even known existed. But it turns out that is not a problem, because of exactly what you said. Since Java 5.0, we've encouraged people not to use the Thread API directly for most things. So Thread.currentThread(), for example, is very commonly used, but very few people actually do new Thread() or thread.start(). Rather, they use the executors in the java.util.concurrent Executors class. And you would do the same for virtual threads, only you'll be using different executors.

For platform threads, you will use one of the thread pool executors, and the reason you need to do that is that those threads are very costly, so they have to be pooled. With virtual threads, you should never, ever pool them. If you find yourself pooling virtual threads, you're doing something wrong. I mean, it might behave correctly, but you're sort of missing the point. But we do have new executors in the Executors class in Project Loom that will spawn a new thread for every new task you submit. If you configure that executor to spawn virtual threads, you just need to replace your existing executor with it. I think it's called newVirtualThreadExecutor(). And then, instead of having pooled threads, every task will get its own virtual thread.
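Hedging again on names, since the API was in flux at the time: the executor Ron mentions eventually shipped in JDK 21 as Executors.newVirtualThreadPerTaskExecutor(). A minimal sketch:

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class VirtualExecutorDemo {
        public static void main(String[] args) {
            // No pooling: a fresh virtual thread is created for every task.
            try (ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor()) {
                for (int i = 0; i < 10_000; i++) {
                    int id = i;
                    executor.submit(() ->
                            System.out.println("task " + id + " on " + Thread.currentThread()));
                }
            } // close() waits for the submitted tasks to finish
        }
    }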

How do you limit resource contention with Loom? [14:44]

Charles Humble: It's interesting, though, that you mention thread pooling. It's been a while, to be honest, since I last worked on a really big system in Java, but one of the things we did on that system was to have thread pools of different sizes that would limit the number of requests that could hit a particular service or the number of requests that could hit a database, basically as a way of limiting contention. So, you might allow 10,000 requests into the container, and then, let's say, 50 for a service and 10 for a database, or whatever. Is there an equivalent to that in Loom? Is there a way of handling that?

Ron Pressler: Yeah, so it's actually kind of funny, because people started doing what you just said, limiting the size of the pool in order to limit concurrent access to some resource, as a sort of side effect; that was never the intention. So you'd say, okay, in one thread pool I just have 10 threads, and those will be accessing the database; in another pool I'll have 50 threads, and those will be accessing some microservices, et cetera. But when you work with virtual threads, you need to go back to the original construct designed especially for this purpose, and that is the semaphore. So what you'll do, instead of having multiple thread pools, is that when you wrap the call to the database, you'll obtain a semaphore.

Your database access API will have a semaphore, say, initialized to 10. And if one of the virtual threads wants to access it, it will grab the semaphore, access the database and then release the semaphore. And if no permits are available, it'll just block and wait. And that will have the same effect as having multiple thread pools, only much more fine grained. So just wrap all access to your limited resources with their own semaphores, and you don't even need to worry about moving tasks from one thread pool to another. Things will just take care of themselves. And this is actually how we're supposed to write concurrent code; it's just that we haven't been doing the right thing because threads have been so costly. Now we need to go back and rethink how to program when threads aren't costly. So, it's kind of funny, I say, that in terms of Project Loom, you don't really need to learn anything new.
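A minimal sketch of the semaphore pattern Ron describes; runQuery is a hypothetical stand-in for real database access.

    import java.util.concurrent.Semaphore;

    public class DatabaseGate {
        // Replaces the 10-thread "database pool": at most 10 concurrent queries.
        private static final Semaphore DB_PERMITS = new Semaphore(10);

        static String queryDatabase(String sql) throws InterruptedException {
            DB_PERMITS.acquire();     // blocks (cheaply, on a virtual thread) if 10 are in flight
            try {
                return runQuery(sql);
            } finally {
                DB_PERMITS.release();
            }
        }

        // Hypothetical helper standing in for real JDBC access.
        private static String runQuery(String sql) {
            return "result of " + sql;
        }
    }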

In fact, the project is close to 100,000 lines of code, about half of which is tests, and all it does is basically add two things to the Java libraries; it doesn't change the language at all. It adds this new sort of constructor for a thread, and it adds a query method on Thread called isVirtual(), so you can ask if a thread is virtual or not, and that's it. So there aren't any new APIs you have to learn, but you do need to unlearn many habits that over the years have become second nature, habits we've adopted for the sole reason that threads are expensive.

You have to start thinking of threads as free. To give you an example, suppose you are handling a request, and the code you're writing is thread-per-request, synchronous, as we normally do. And then you say, okay, to handle this request I have to contact 20 microservices, and I want to contact them in parallel, so I don't start one request only after another has terminated. What you do is just spawn 20 more threads. Of course, you don't do it manually; you use these executors, as you do today. But you don't need to worry about starting new threads. If you want to do two tiny things in parallel, you just spawn new threads for them.
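A sketch of that fan-out, using the per-task executor name that later shipped in JDK 21; callService is a hypothetical stand-in for an outgoing call.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;

    public class FanOut {
        static List<String> handleRequest(List<String> serviceUrls) throws Exception {
            // Virtual threads are cheap enough to spawn one per outgoing call.
            try (ExecutorService ex = Executors.newVirtualThreadPerTaskExecutor()) {
                List<Future<String>> futures = new ArrayList<>();
                for (String url : serviceUrls) {
                    futures.add(ex.submit(() -> callService(url))); // one virtual thread per call
                }
                List<String> results = new ArrayList<>();
                for (Future<String> f : futures) {
                    results.add(f.get()); // get() just parks this virtual thread
                }
                return results;
            }
        }

        // Hypothetical stand-in for a real outgoing HTTP call.
        private static String callService(String url) {
            return "response from " + url;
        }
    }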

What is the Loom debugging experience like? [18:03]

Charles Humble: You mentioned how challenging sometimes debugging can be when you're working with some of the reactive frameworks. And I was interested to know what the Loom debugging experience is like. I imagine if I have hundreds of virtual threads, that would cause some considerable challenges in terms of how I might present that in an IDE, in a debugging context.

Ron Pressler: That's another great question. So, first, we've been working together with IDE vendors, IntelliJ IDEA, Eclipse, NetBeans, and even Microsoft VS Code, to make sure that we have the right user experience. Now, first of all, stepping through and inspecting the stack traces should all work exactly as it does today. But you're right that there is a problem with how you present all the threads to the user. If you have only 10 threads, it's one thing. If you have a million threads, it's a whole other thing, because even if you could show a million threads in a list in an IDE, it's doubtful that would be helpful in any way. So the first thing to remember is: what is the current status quo?

Today the list of threads doesn't really help you, because they're shared anyway. So if you're writing asynchronous code, the thread list doesn't give you much information, and you also can't step through. With Loom, you will at least be able to step through code. As to presenting the threads, this is, in part, an open question that the IDE vendors will have to think about over time. One thing they might do today, and when I say today I mean while they're experimenting with Project Loom, is decide to display only the threads on which you've received a debugger event.

Only threads where you had a breakpoint hit. So you won't see all the threads in the system, just the ones that at one point or another have been of interest to you. Down the line, we are interested in presenting many threads in a structured way, and this might lead us to another subject, which is structured concurrency. Even though everything I've said so far is perfectly sufficient to use Loom, we wanted to make the experience even better.

And if you have a million threads, it's nicer to give the users a very clear mechanism of let's say herding them. And that mechanism is based on the idea called structured concurrency. It's not our idea, it's an old idea that's been resurrected and popularized recently. It's now spreading like wildfire. Everybody wants it. And we like it too.

And the idea there, even though it's better to see code examples, is that those executors I mentioned, the ones that spawn a new thread every time you submit a task, can be used inside try-with-resources blocks. And the try-with-resources block does not exit as long as there are any live threads in the executor. So conceptually, the idea is that every time your execution splits into multiple concurrent paths, you don't exit your current block until all those paths have joined. Maybe I could give an example of something that's not structured.

If you have a method that spawns a thread and returns, and the thread keeps running in the background, that is not structured. Structured means that if you spawn something, you have to wait for it and join it. The word structured here is similar to its use in structured programming, and the idea is that the block structure of your code mirrors the runtime behavior of the program. So just as structured programming gives you that for sequential control flow, structured concurrency does the same for concurrency. You can see, in the way that your code blocks are organized, where threads start and where they end. And how does this help you with debuggers and profilers? Because it expresses a logical relationship between the various threads. You know that the child threads are doing some work on behalf of their parents, and the parents are waiting for that work. And at any point in time, if you choose to use structured concurrency, your threads will have a sort of tree structure.

Every thread works on behalf of a parent. And in the future, we hope that debuggers and profilers will present you with a tree view, so that even when you have a great many threads, you'll at least be able to tell from that view what they're doing. But one thing we already have today in Project Loom is that mechanism for thread dumps. So you can get a thread dump from a running JVM today. We didn't want to just give you stack traces for a million threads, because that wouldn't be useful, so we have a mechanism that will give you stack traces in JSON format, containing sufficient information to reconstruct that tree structure and display it as a tree.
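The try-with-resources executors Ron describes here were an early-access shape; the idea later shipped, as a preview API, in the form of StructuredTaskScope. A sketch based on that later preview API (requires --enable-preview; fetchUser and fetchOrder are hypothetical):

    import java.util.concurrent.StructuredTaskScope;

    public class StructuredFetch {
        record Page(String user, String order) {}

        static Page fetchPage() throws Exception {
            // The block does not exit until both child threads have joined,
            // so the code's structure mirrors the runtime thread tree.
            try (var scope = new StructuredTaskScope.ShutdownOnFailure()) {
                var user  = scope.fork(() -> fetchUser());   // child virtual thread
                var order = scope.fork(() -> fetchOrder());  // child virtual thread
                scope.join().throwIfFailed();                // wait for both, propagate failures
                return new Page(user.get(), order.get());
            }
        }

        // Hypothetical stand-ins for real service calls.
        private static String fetchUser()  { return "alice"; }
        private static String fetchOrder() { return "order-42"; }
    }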

Will CompletableFuture continue to work? [22:42]

Charles Humble: It isn't quite the same thing as structured concurrency, I realize, but since we're in this sort of area: will CompletableFuture, which was introduced in Java 8, continue to work for people who are making use of it?

Ron Pressler: Yes, everything continues to work as before. It's just that with CompletableFuture, what you wanted to do was chain various operations without ever calling get, because get would block your thread, and threads are precious. With virtual threads, you can just call future.get(); that becomes a virtually free operation. So there is less need to write all that asynchronous code that chains the various stages of a CompletableFuture. You just call get, and programming becomes much simpler. But if you prefer writing in the asynchronous CompletableFuture style, then of course you can use it as before.
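A minimal sketch of the difference, using the per-task executor name that later shipped in JDK 21: on a virtual thread, blocking on the future is cheap, so the chained style becomes optional.

    import java.util.concurrent.CompletableFuture;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class BlockingGetDemo {
        public static void main(String[] args) {
            try (ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor()) {
                executor.submit(() -> {
                    CompletableFuture<String> cf =
                            CompletableFuture.supplyAsync(() -> "response");
                    // On a platform thread you'd chain thenApply/thenCompose to avoid
                    // blocking; on a virtual thread, just block: join() parks cheaply.
                    System.out.println(cf.join());
                });
            } // close() waits for the submitted task to complete
        }
    }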

What do you envisage Loom’s pluggable scheduler being used for? [23:29]

Charles Humble: Something that I was really intrigued by, although in truth I suspect I'd cause all kinds of problems with it, is that Loom has the ability to have a pluggable scheduler. So the default scheduler is based on fork/join, which is based on work-stealing, but you can override it, which is really cool, I think. But I was curious as to what you envisage that being used for?

Ron Pressler: Right. So, I agree it's cool, but we are a bit afraid that it might become a sort of attractive nuisance, because whenever you have something that's too cool, we've found that many people are attracted to trying the most complicated thing first. In 99% of situations, you should just use the default scheduler. One use case where we've envisaged custom schedulers being useful is UI threads. UI threads normally have a limitation that you can change UI components only on a UI event thread. And with pluggable schedulers you can say: whenever the virtual thread has some processing to do, I want that processing to take place on the particular OS thread that serves the UI. It doesn't currently work, however, because every virtual thread still has its own identity, and the current code in AWT, Swing and JavaFX checks that whenever you make a UI change, you're doing it on the thread whose identity is the UI event thread.

But we hope that the UI people will change that and allow it. There are, however, other uses for custom schedulers. Some operations that need to access the GPU can only happen on specific OS threads, and some people have asked for a way to specify that all the processing operations of certain virtual threads happen on specific OS threads. People also want to experiment with different kinds of scheduling, say, something that involves pinning certain OS threads to specific cores. We want to give people the opportunity to try those techniques, but most people shouldn't touch custom schedulers.

What have been some of the biggest challenges with the project? [25:42]

Charles Humble: What have been some of the biggest challenges with the project? I'm imagining that you must be interacting in a pretty complex way with the various garbage collectors, for instance.

Ron Pressler: Yes. So the project is implemented like many other projects in OpenJDK: it works at many different levels. In Project Loom it's actually quite a few. We're touching the VM, which I'll get to in a moment; we're touching the libraries; we're not touching the language, which is maybe one difference from other projects; but we are touching tooling, which maybe other projects don't. So we have at least these three levels, and each of them has its own challenges. At the VM level, yes, interacting with the GCs was the most challenging and interesting part of the project. Much of the performance of Project Loom depends on a precise interaction with the GC. Just to give you a taste of why that is: the GC needs to know about every reference in the system, and about the things that contain references to objects.

Those can be other objects, and they can be the stack, your thread stack; you can have pointers or references to objects on your stack. But references to objects from the stack and references from other objects are handled completely differently by the GC. The stacks are known as GC roots; the GC starts with them and treats them specially, and the assumption is that you won't have too many stacks. Of course, that assumption is broken with Project Loom, because you might have a million stacks. So the stacks of virtual threads are not GC roots. They're like ordinary heap objects; they're kind of like arrays, but their structure is still that of a stack. We had to make some interesting changes in the GCs. At the library level, we basically had to touch everywhere we do IO, and a lot of it has actually been delivered already. Again, just to give you a sense: up until JDK 13 or so, I think, we had two completely distinct mechanisms for doing IO.

We had NIO, which is new IO, and the old, legacy IO, java.net.Socket for example, and they took completely different code paths. The legacy IO was implemented in C, and rather than try to bake Loom into all those places, we've replaced all that C code with Java code and re-implemented all the legacy IO in terms of NIO. So we just had to change NIO to work with Loom, and now legacy IO works on top of it. At the tooling level, I think, some of the greatest challenges have actually been there, and that has to do with debugger support. That wasn't easy. It wasn't easy at all. Again, to give you a sense of some of the problems:

If you were to implement things in the naive way, you would observe the same piece of code running at the same time in two threads: once in the virtual thread, and once in the OS thread that's underneath it, which is also a Java thread. And that didn't work well in the debugger - you'd hit breakpoints and step through two threads at a time, which was very confusing and not at all the experience we wanted. Figuring out a way to define the relationship between the virtual thread and the OS thread that carries it, what we call the carrier thread, was something interesting that we only figured out a few months ago. So every layer had its own challenges.

How can our listeners help out? [29:10]

Charles Humble: All right, thank you. We're reaching the end of our time here. How can our listeners help out? Are you looking for feedback from the Java community?

Ron Pressler: The best way is to download the Loom early access builds on jdk.java.net and start playing with them. There is a wiki, which I'll ask you to link somewhere, that gives you a getting-started guide. Because you download an early access build, you don't have to build anything yourself; you basically get a JDK that has Loom in it, and you can start writing code and using Loom. The only thing you need to consider is that the API might change, and some of it frequently. And then what we would ask you to do is use it for some things that are similar to what you do in the real world, and see if it answers your needs.

Whether it does or it doesn't, we'd like to hear about it. So we'd like feedback of the sort: "I've tried to use Loom in, I don't know, this scenario that's similar to a web application accessing a database, and it's helped me a lot, except for one thing that wasn't very easy, and this is what it was." Or feedback of the sort: "I tried to do this. I think this is something that Loom should address, but it wasn't convenient, or it didn't work at all." That is the kind of feedback we're looking for.

What might we expect in the future? [30:23]

Charles Humble: Thank you. I'll be sure to link to the wiki in the show notes for this episode. Just to finish off, what might we expect in the future? There were some interesting ideas you were exploring in the early days of Loom around, for instance, tail-call optimization, and you also don't expose continuations anywhere, although you're using them under the hood. Might that change?

Ron Pressler: Continuations are very much used; in fact, they are at the heart of Project Loom. The VM doesn't know much about virtual threads at all; it just knows about continuations. We've decided not to expose continuations directly as a public API as a first step, and there are three reasons for that. The first is that we want people to do the simpler thing first: continuations are very powerful, but they can be very, very complicated, while virtual threads are just threads. So we want people to use virtual threads first, and only then maybe consider the more complicated thing. The second reason is that the killer app for continuations is user-mode threads, so 99% of the utility that continuations give you, you get from virtual threads alone, and we want to focus on delivering the biggest benefit first. And the third reason is perhaps the most important one.

And that is that using continuations directly, in an unsupervised manner, can break some very important things. You might observe that within a single method the thread identity, your current thread, changes, and that might break program logic. It also breaks some compiler optimizations, so you might get miscompilations: the JIT compiler might compile your method incorrectly if you're not very, very careful when using continuations. So we can't just expose them as is. Custom schedulers, though, see something very close to continuations, and in the future we will certainly consider exposing a more limited kind of continuation that is confined to a single thread. So with custom virtual thread schedulers on one hand, which maintain thread identity, and thread-confined continuations on the other, we've got everything covered, but that might take a while. You also asked about tail-call optimization.

First, I should say that what we're talking about there is explicit tail-calls. We're not talking about every call in tail position in your program automatically being optimized, but about places where you specifically ask for it in the source language. So, if you're writing Java code, I don't know what that's going to look like, but you have to say: this is a tail-call and I want it optimized. This is something we certainly want to do, but when it came to priorities, we decided that virtual threads could deliver more benefits sooner. We haven't started working on tail-calls; we'll start only once we deliver virtual threads.

Charles Humble: That's great. Ron, thank you so much, indeed, for sitting down with me to talk about project Loom, it's been a fascinating conversation and thank you to all of our listeners for joining us this week on the InfoQ podcast.
