




Why Continuations are Coming to Java



Ron Pressler discusses and compares the various techniques of dealing with concurrency and IO in both pure functional (monads, affine types) and imperative programming languages (threads, continuations, monads, async/await), and shows why delimited continuations are a great fit for the imperative style.


Ron Pressler is the technical lead for Project Loom, which aims to add delimited continuations, fibers and tail-calls to the JVM.

About the conference

Software is changing the world. QCon empowers software development by facilitating the spread of knowledge and innovation in the developer community. A practitioner-driven conference, QCon is designed for technical team leads, architects, engineering directors, and project managers who influence innovation in their teams.


Pressler: Hi, my name is Ron Pressler, and I work at Oracle here in London as part of the Java platform group. That's the group that develops OpenJDK, designs the Java language, the JVM and the core libraries, and I serve as a technical lead for Project Loom. That is the project that's intended to add continuations and fibers to the JDK. This talk is not part of the Java track, it's part of the programming languages track, so I've been asked to make it more theoretical.

We're going to start with quite a bit of theory; things will get more concrete later on. This gives me an opportunity to focus on the aspects of the project that I don't normally get to talk about, but if you are more interested in the more down-to-earth aspects of the project, you can search for other Loom talks on YouTube. As I said, I work for Oracle, so I'm required to show you the slide that basically says that everything I'm about to tell you is subject to change.

Views of Computation

Our views of computation have changed based on what we use computation for. Traditionally, we used to think of computation as being deterministic. The circles we see here can be thought of as program states, and on the left we have deterministic computation. We don't know in which state we start the program, you can think of that as different command-line arguments, but once we do start, there's only one outgoing edge for each state. Once we start, we know where we're going to end up. This view sees computation as a sort of function: you have an input and you get an output.

In the last few decades, we've been using computation more and more for interactive systems, and such systems are non-deterministic: at any point in the program, the next state could possibly be one of several. One of the reasons for that is interaction, or IO, which is mainly what we are going to be talking about. For example, if you want to read input from the user or read something from a socket, you don't know what it's going to be, so you don't know what the next program state is going to be. Other sources of nondeterminism are concurrency and threading. If you let the kernel scheduler decide which thread to schedule on the CPU, you don't know as a programmer which one it's going to be, so the next program state can be one of many possible ones.

We're going to talk about how different languages do IO. I am going to start with the paradigm that is perhaps most closely associated with the deterministic view of programming, and that is pure functional programming. In this paradigm, a language term, say a subroutine application, behaves similarly to a function application in math: for any given input value you necessarily get one, always the same, output. The question is, how do we describe non-deterministic programs? We said nondeterminism is something that's forced on us by virtue of being interactive in such a world.

Instead of saying, "The next thing I'm going to do is either this or that," we can imagine that there is an additional parameter whose value we don't know, and that additional parameter really determines our next steps. Things are sequential and deterministic, but in this way we can model nondeterminism. If you think about it, I can toss a coin and from my perspective the result is going to be completely non-deterministic, but it may well be that the universe is very much deterministic, and the reason I don't know the result of the coin toss is that I don't have all the parameters, but the universe does. I can think of the outcome of the coin toss as actually a deterministic function of some value that I don't know. We transform the picture on the left into the picture on the right by adding more parameters.
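To make the hidden-parameter idea concrete, here is a minimal Java sketch (the names are mine, not from the talk): the "state of the universe" is modeled as a seed, and the coin toss becomes a plain deterministic function of it.

```java
import java.util.Random;

public class HiddenParameter {
    // A coin toss looks non-deterministic to us, but if we model the
    // "state of the universe" as an extra parameter (here, a seed),
    // the outcome becomes a plain deterministic function of it.
    static boolean toss(long worldState) {
        return new Random(worldState).nextBoolean();
    }

    public static void main(String[] args) {
        // Same hidden parameter, same outcome: deterministic.
        System.out.println(toss(42L) == toss(42L)); // always true
    }
}
```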

The language perhaps most associated with this style is Haskell. What you see here is not Haskell, it's a language that I made up for the purpose of these slides, but it's very similar to Haskell, and we're going to try to express nondeterminism in this way. We have some IO subroutines, get line and put string line, and they're going to take this hidden, mysterious parameter called world that is not known to us but represents the state of the universe; these subroutines are going to consume it, give me a new value of world, and perhaps do something else. Get line is also going to return a string, what it reads from the console, and put string line is going to output that string.

In that language the main program just takes the state of the world and returns a new state of the world. I've used a construct that does not exist in Haskell just so I can reuse a w variable: I get the world, I pass it to get line, get a new world and the resulting string, and then I pass the world to put string line with the string, output it to the console, and I'm done. To see what's going on here more closely, without reusing the same variable: at each of those steps I get a new value of the world. We start with w0, we use that, we get w1 back, we use that, and we get w2.

The problem is that this is insufficient; in fact, it won't work to represent IO, for the following reason. Every time we read a line from the user we don't know what the next state is going to be, we don't know what the result is going to be, but from what I've presented, I'm allowed to pass the same value of the world, w0, to get line twice. According to this model, I would need to get the same value back in both cases, but this is not the case, because the user is allowed to enter whatever string they want. We can solve this by adding something called linear types, which are represented here with the upside-down exclamation mark.

If I say that the world value is a linear type, then the compiler is going to ensure that I use each of those values exactly once. If I pass w0 to get line, get w1 back, and try to pass w0 again, the compiler is going to give me an error saying that w0 has already been consumed. If we make sure that each of these mysterious world parameters is used exactly once, we can actually represent nondeterminism in this fashion. This is how the result is going to look.

There are some languages that do that already; you may have heard of Cogent, and maybe Idris is playing with that a bit, but this is not what Haskell does. There is another solution. If you remember, at the beginning I showed you that even in the deterministic picture of the world there is one point of allowed nondeterminism, and that is the beginning of the program. We're allowed to begin the program in one of various states, but once we start we are deterministic. What we can do is move the nondeterminism outside the program and say, okay, the transitions are deterministic until the point where we want to ask for input. What we're going to do is basically end the program, return back to the runtime, ask the runtime to read some input for us, and then restart the program at a new entry point depending on what the user has entered. This is exactly what Haskell does, and this is actual Haskell now. This code doesn't show you much; it actually looks quite imperative, but this is just syntactic sugar for the following.

The main function returns a value of something called an IO type, and that value is constructed with a function called bind IO. Bind IO constructs a value of the IO type from a pair of two things: first, the operation we want the runtime to perform, and second, the new program entry point. What we do here is basically say, "Okay, this program is done, I return to the runtime this pair," and then the runtime is going to perform the input and start the program again, passing that input string into the new entry point.

There is actually a bit more going on here to make this kind of programming convenient and nice to use. These IO values can be combined in a nice pattern with these two functions, return IO and bind IO, and that's called a monad, so IO is a monad. It is quite problematic even in a language like Haskell, but we're not going to talk too much about the problems with monads; suffice it to say that what the monad does is allow us to combine various operations on the IO type in a nice way, and that is what enables us to use this nice do notation with nice syntax.
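As a toy illustration of the same pattern in Java, here is a hypothetical analogue of return IO and bind IO: an IO value is just a description of an effect, combined with bind, and nothing happens until a "runtime" runs it. All the names here are invented for illustration, not from the talk.

```java
import java.util.function.Function;
import java.util.function.Supplier;

// Toy analogue of Haskell's IO monad: an IO<A> is a description of an
// effect; nothing actually happens until the "runtime" calls run().
class IO<A> {
    private final Supplier<A> effect;
    private IO(Supplier<A> effect) { this.effect = effect; }

    // Analogue of return IO :: a -> IO a
    static <A> IO<A> pure(A value) { return new IO<>(() -> value); }

    // Analogue of bind IO :: IO a -> (a -> IO b) -> IO b
    <B> IO<B> bind(Function<A, IO<B>> next) {
        return new IO<>(() -> next.apply(effect.get()).run());
    }

    // Only the runtime is supposed to call this.
    A run() { return effect.get(); }
}
```

For example, `IO.pure("hello").bind(s -> IO.pure(s + "!"))` builds a description of a two-step program; only `run()` executes it.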

If you're not using Haskell but a classical imperative language like Java or C or Python, in those languages the meaning of an operation such as a subroutine application is not the same as a math function. If you're interested in programming language theory you may know this; it's something called a predicate transformer. It doesn't really matter; what matters here is that the outcome of this operation does not need to be deterministic, unlike functions. This is why in Java you can write code that looks like this: we just read a line and we are non-deterministic, we don't know what the result is going to be, and then we print it right back out to the console.
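The slide itself isn't reproduced in the transcript, but the kind of Java code being described is plausibly something like this sketch (written against a supplied reader so it can be driven from a fixed string as well as from the console):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.StringReader;

public class Echo {
    // Read one line (we don't know what it will be; that's the
    // nondeterminism) and print it right back out.
    static String echo(BufferedReader in) throws IOException {
        String line = in.readLine();
        System.out.println(line);
        return line;
    }

    // Helper so the same logic can be driven from a fixed string.
    static String echoString(String input) throws IOException {
        return echo(new BufferedReader(new StringReader(input)));
    }
}
```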

In languages like this, dealing with nondeterminism is quite easy; it's built into the semantics of the language. That is why it's surprising to find the following: in the Java core libraries we see this strange class, CompletableFuture, and I've translated the signatures of its methods into Haskell-like notation. If you look at them, those signatures are actually identical to those of return IO and bind IO. That is because this class actually implements a monad, and there are various uses for monads, but this class is used to do IO. The question is why. We know why we need it in Haskell, because Haskell is deterministic and functional, but Java isn't. Why do we need to use a monad in Java to do IO?
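The slide isn't shown, but two real CompletableFuture methods with exactly these monadic shapes are completedFuture (playing the role of return IO) and thenCompose (playing the role of bind IO):

```java
import java.util.concurrent.CompletableFuture;

public class FutureMonad {
    // completedFuture plays the role of return IO:
    //   A -> CompletableFuture<A>
    // thenCompose plays the role of bind IO:
    //   CompletableFuture<A> x (A -> CompletableFuture<B>) -> CompletableFuture<B>
    static int demo() {
        CompletableFuture<Integer> result =
            CompletableFuture.completedFuture(21)
                .thenCompose(n -> CompletableFuture.completedFuture(n * 2));
        return result.join(); // 42
    }
}
```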

There is something else going on here; it's not just nondeterminism and it's not just the theory. The parts of the code that perform computation actually run on the CPU, but when we want to do IO we need to move off the CPU and use different circuitry in the computer. We basically need to release the CPU to do other stuff, and we call that blocking. That's what read line does: it blocks, it stops using the CPU, it asks the operating system to do IO for us using other circuitry in the machine, and when it's done we ask the operating system to bring us back onto the CPU.

This is done using processes or threads, and we've all learned back in kindergarten that blocking threads is bad, because threads are big and heavyweight and expensive and you can only have so many of them, and blocking is slow, so we shouldn't do that. It's not because of the semantics of the language that we use that strange monad, but because we just don't want to block. We are using a similar mechanism, taken from a completely different paradigm, to solve a different problem.

Let's look at this Java method that does some important computation and uses microservices. Every time I call compute, I actually go over the network: I pass the operation I want my financial computing service to perform, I block, and once it's done I get the result and, based on that, do the next thing. If we want to use CompletableFuture, this code would look like that. This is called asynchronous programming, sometimes also referred to as reactive programming, but really it's just a style that's meant to allow us not to block.
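The two styles being contrasted can be sketched roughly as follows; compute here is a hypothetical stand-in for the remote financial service, mocked locally so the sketch is self-contained:

```java
import java.util.concurrent.CompletableFuture;

public class Calc {
    // Hypothetical remote computation, mocked locally for illustration.
    static double compute(String op, double arg) { return arg * 2; }

    // Synchronous, blocking style: simple, but each call parks a thread.
    static double calcBlocking() {
        double a = compute("first", 1.0);
        double b = compute("second", a);
        return a + b;
    }

    // Asynchronous style with CompletableFuture: no thread blocks,
    // but the logic is inverted into a chain of callbacks.
    static CompletableFuture<Double> calcAsync() {
        return CompletableFuture.supplyAsync(() -> compute("first", 1.0))
            .thenCompose(a ->
                CompletableFuture.supplyAsync(() -> compute("second", a))
                    .thenApply(b -> a + b));
    }
}
```

Both versions compute the same value; the difference is entirely in how the sequencing is expressed.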

Reactive Programming’s Problems

Whatever your opinions are about this style, just last night there was a very nice talk by someone who's an expert in reactive programming, and he has some strong opinions on why this style is actually problematic. Let's look at some of the problems.

First of all, suppose the original method was not that simple, and we actually want to do some control flow: we want to branch on the result of the first compute and then loop over the second call. We simply cannot use the existing Java mechanisms if we use CompletableFuture. We have to use different ones that are specially built, sort of a DSL; CompletableFuture has its own DSL for doing control flow, but we can't use the constructs that are built into the language. This already shows some kind of mismatch.
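For example, an ordinary while loop cannot span asynchronous steps, so looping has to be re-encoded in CompletableFuture's own vocabulary, typically as recursion over thenCompose. A sketch:

```java
import java.util.concurrent.CompletableFuture;

public class AsyncLoop {
    // An ordinary while loop can't contain an "await" on a
    // CompletableFuture, so the loop has to be re-encoded as
    // recursion over thenCompose: a small DSL of its own.
    static CompletableFuture<Integer> loopUntil(int current, int limit) {
        if (current >= limit) {
            return CompletableFuture.completedFuture(current);
        }
        return CompletableFuture.supplyAsync(() -> current + 1) // one async "step"
            .thenCompose(next -> loopUntil(next, limit));
    }
}
```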

Another problem is, what happens if you have an exception? We're used to exceptions giving us some context of where the problem happened, but when we use something like CompletableFuture, or any of the reactive frameworks like RxJava etc., each of these steps can be executed on a completely different thread, and it will likely be a thread other than the one that called calcImportantFinance, so the stack trace we're going to get with the exception is not going to tell us the context we were actually running in. This makes debugging this code very hard. Not only that, it makes profiling this code hard, because profiling works by sampling stack traces. If the stack traces no longer give us the actual context of a computation, they're basically useless.

Perhaps the biggest problem is the return type. As in the case of IO in Haskell, the return type here is this strange class that wraps a double, and we can't get the double inside it unless we block, which is exactly what we don't want to do. Everyone who calls this also has to work with CompletableFuture in an asynchronous style, and this is viral. It means our entire call stack must be either synchronous and blocking, or asynchronous. It is nearly impossible to interoperate between the two styles, so we get two separate worlds.

What do other languages do about that? C# introduced something called async/await. The Task class in C# is similar to CompletableFuture in Java, but now you can add this async annotation and you can prefix calls to async methods with the await keyword, and this way you get nice imperative-looking code. This basically solves the first problem: now we can use exceptions, now we can use control flow like if and while, etc. It doesn't really solve the second problem, though, of the exception stack trace, because each one of those lines will still be executed on a different thread, and the exception is going to give us a strange stack trace.

This is actually something they've fixed in .NET Core, I think 2.1, released last year. They artificially generate a stack trace that does capture the actual context, but it would still be a very different stack trace from the one you'd get if you were writing ordinary blocking code. It doesn't solve the biggest problem, which is that you still have two separate worlds that can't interoperate properly: the async world and the synchronous, blocking world.

The imperative languages that we know not only handle input and nondeterminism, they have this built-in notion of threads or processes and blocking, and it is a good abstraction from a programming perspective. The only reason we sometimes want to avoid it is that the implementation of threads by the kernel is too heavyweight. Why would we want to abandon something that's not only a good abstraction, but a core abstraction of this paradigm, just because of an inadequate implementation?

The obvious solution, instead of changing how we program, is to just change the implementation. That is what we're trying to do as part of Project Loom. To replace the implementation, first we need to understand what it does. What is a thread or a process? If we think about it and try to decomplect it, as Rich Hickey might say, we see that there are actually two different capabilities here. First, we need the ability to stop the code that's running on the CPU and say, "I'm not using the CPU anymore," and later on the ability to resume it. That's one capability. The other capability is some mechanism to schedule those pieces of code that want to run on the CPU and say, "Now you're ready, because what you were waiting for, like an IO operation, has completed. Now you can run on core number three." We have names for these two capabilities. The first one is called a continuation: the ability to suspend a running computation and later on resume it. The second is just a scheduler.


As part of Project Loom, we've implemented continuations in the JVM, exposed as this class here. This is very similar to the one actually in the prototype; it's not quite the same, and it changes a bit. What we have here is a continuation with a certain body, which is the code that will be able to suspend itself, and then we run it. When we run it, we start executing it, and it can either run to completion or yield; in either case, run will return. There is no concurrency here, everything is running on the same thread, but when you call run, the body of the continuation is going to run either to completion or until the next time it calls yield. If it calls yield and we ask if it's done, it's going to say no; if it's terminated, it's going to say yes. This scope thing allows us to nest different continuations inside one another, to suspend multiple continuations, and to jump back over multiple callers, similar to how we throw exceptions way up the stack.

To be more precise, if you're interested in the theory of continuations in the academic literature, the kind of continuations this class implements are called one-shot multi-prompt delimited continuations, and I'm going to explain each of those terms. The delimited part means that the piece of code that we are suspending and resuming is not the entire program, but just the code that's inside the body; the body delimits the code that we want to suspend, and that's why they're delimited. Multi-prompt refers exactly to the different scopes; in the literature, scopes are called prompts. We can nest different continuations and jump back to suspend however many of them we want. One-shot means that this continuation is mutable, in the sense that every time you run it, it's going to run until the next yield point and then its state changes; when you run it again, it's going to run from the first yield point to the second, and we can never go back in time.

Continuations are a very low-level construct; application developers are not expected to use them directly, but to use higher-level constructs built on top of them. But if you were to use them directly, this is how. You have a continuation whose body loops forever, prints out some stuff, and occasionally yields, and then, to use that continuation, you just loop as long as it's not done. In this case it's never going to be done; you run it, and every time you call run, it's not going to loop forever, it's just going to execute the next iteration until the yield. The important thing here is that the call to yield does not actually need to be inside this outermost top-level block. It can be inside some method deep in the stack: you can call foo, and foo calls bar, and bar calls yield. Part of the continuation's state is its stack; the continuation maintains its own stack and its own program point, a program counter, of where it is in the program.
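For concreteness, here is roughly what the example just described looks like against the early Loom prototype API. The class names and signatures below are from pre-release builds, changed later, and do not compile on any released JDK; treat this as an illustrative sketch only.

```java
// Early Loom prototype API: illustrative only, not in any released JDK.
var scope = new ContinuationScope("demo");
var cont = new Continuation(scope, () -> {
    int i = 0;
    while (true) {
        System.out.println("iteration " + i++);
        Continuation.yield(scope);   // suspend; the call to run() returns
    }
});
for (int k = 0; k < 3 && !cont.isDone(); k++) {
    cont.run();  // resumes the body from where it last yielded
}
```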

People who may be familiar with continuations in other languages like Scheme, or with some experimental implementations, might be a bit horrified to see that the return types of the run and yield methods are void. Usually, you want to pass some information from run to yield. Writing continuations that are able to pass information from run to yield on top of this is actually very easy, but one of the reasons we've done it this way is that normally continuations yield cooperatively, or voluntarily. Because the return type is void, if a continuation gets into an infinite loop, we can ask it to preempt itself and forcefully remove it from the CPU. This being Java, we like monitoring, and as I said the stack is part of the continuation, so we also have some methods to inspect the continuation's stack.

This class implements one-shot multi-prompt delimited continuations, but we could decide to add the ability to clone them. If we clone a continuation, we can capture its state at a certain point in time, and when we run the clone we get something called re-entrant or multi-shot continuations. This is how simple it is to implement those on top of one-shot continuations: we just clone the continuation every time we run it and return a new copy. You can do some crazy stuff with this; you could write programs that actually go back in time, which most people have no need for, and I'm not sure we're actually going to implement that, but we could.

Another, more interesting thing we can do is make those continuations serializable. This means you can write a piece of computation which blocks, say, reading something from a database, and while you're waiting for the response, you're not just going to be suspended off the CPU, but moved entirely off the machine. When you return and get the result from the database, you could be on a different machine altogether, perhaps closer to where the data is, and this would make access to the data faster.


These are continuations, and I mentioned before that they are a low-level construct. What we want to build on top of them is to rebuild threads; threads are just continuations plus a scheduler. We already have very good schedulers in the JDK in the form of thread pools, like ForkJoinPool, so we combine the two to implement threads in the JDK in user mode, without going to the kernel, and those are called fibers.

To remind you why we want to do that: today, people writing applications have the option of writing simple synchronous blocking code, but then they're relying on the kernel to provide them with threads, and the kernel can only handle so many. The application is going to be very clear and easy to maintain and debug, but it's not going to be scalable. The other option is to write asynchronous code, as I showed you, which is much more complex and very hard to port existing code to, but at least it's scalable.

With fibers, if we can indeed make them much more lightweight than the threads provided by the kernel, we solve this problem. You can have as many of these user-mode lightweight threads as you like, and you can block; blocking becomes essentially free. We can say that it codes like sync but works like async, in the sense that your code looks blocking, but from the perspective of the operating system no kernel thread is actually blocked, and all the IO behind the scenes is asynchronous, even though to your code everything seems synchronous.

This makes writing concurrent applications easier because it helps you match your domain's unit of concurrency, say, the user session or the incoming or outgoing request, directly to the software's unit of concurrency, which is the thread, or in this case the fiber, the lightweight thread. This is something you can't do with heavyweight kernel threads, because you may have 100,000 concurrent users, but you can't have 100,000 concurrent threads. With fibers you can, and this alone makes writing code easier.
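As a historical note: the fibers described in this talk eventually shipped as virtual threads in JDK 21, under a different API than the prototype shown here. A sketch of the one-thread-per-task style on a modern JDK (requires Java 21 or later):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;

public class ManyFibers {
    // Creating tens of thousands of virtual threads is cheap, so one
    // unit of domain concurrency (a session, a request) can map
    // directly to one thread.
    static int runMany(int n) {
        AtomicInteger done = new AtomicInteger();
        try (ExecutorService exec = Executors.newVirtualThreadPerTaskExecutor()) {
            for (int i = 0; i < n; i++) {
                exec.submit(() -> { done.incrementAndGet(); });
            }
        } // close() waits for all submitted tasks to finish
        return done.get();
    }
}
```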

Now that we have the scheduler and we have the continuations, we have fibers, but we have to find all the places in the JDK that potentially block and teach them about this new mechanism. What kinds of things block? Of course, we talked about IO, so we've gone into all the places in the JDK that do IO and taught them about fibers. The other case where you can block is with synchronization constructs like locks, blocking queues, channels, whatever. Those are all classes in java.util.concurrent, and we've done the same there. Let's see an example of how.

All of the classes in java.util.concurrent that block, when they want to block or unblock a thread, all go through this class called LockSupport, and what LockSupport traditionally used to do is just call Unsafe.park. Unsafe.park is a call to the kernel that parks the current thread, and if you want to unblock a different thread, you call Unsafe.unpark and ask the kernel to unblock that thread.

With fibers, we have an option. First, we check whether we are on one of the existing, old heavyweight threads, and if so, we do it the old way. The current abstract thread we're running on is what we call a strand; a strand is either a fiber or a heavyweight thread. If we're on a fiber and want to park, parking is just yielding the fiber's continuation; every fiber has a continuation. If we want to unpark, all we have to do is find that fiber's scheduler and that fiber's continuation, and submit the continuation to the scheduler, where it just appears to be an ordinary Runnable, an ordinary task. The next time that scheduler runs that task, it will, in effect, continue your code from where it last left off. If you love async/await so much, this would be the entire implementation of async/await on top of this; it is strictly stronger than async/await.
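The stable half of this story, LockSupport's park and unpark, has been in the JDK for a long time and can be demonstrated directly with ordinary heavyweight threads:

```java
import java.util.concurrent.locks.LockSupport;

public class ParkDemo {
    // LockSupport is the funnel through which java.util.concurrent
    // blocks: park() suspends the current thread, unpark(t) resumes t.
    static boolean demo() throws InterruptedException {
        Thread sleeper = new Thread(() -> LockSupport.park());
        sleeper.start();
        Thread.sleep(100);           // give it time to park
        LockSupport.unpark(sleeper); // wake it up
        sleeper.join(5000);
        return !sleeper.isAlive();   // true: the thread has finished
    }
}
```

Note that park is specified to be allowed to return spuriously, which is why blocking code always calls it in a loop around a condition; the demo above doesn't depend on that detail.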

Of course, all this would be worthwhile only if we could do better than the operating system, and we need to do better on two counts if we want to support that many lightweight threads, that many fibers. On the left we have some data about the old heavyweight threads in the JDK: they each have about 2 kilobytes of metadata plus, by default, one megabyte of stack. Fibers, in the current prototype, have only 200 to 300 bytes of metadata, and the stack is pay-as-you-go: it can grow and shrink with however much you use. When it comes to task-switching costs for heavyweight threads managed by the kernel, the time to switch tasks is between 1 and 10 microseconds. For fibers, we don't know yet what it will be, and that performance is something we're still very much working on, but we expect it to be much lower than that.

Rethink Threads

Fibers are just a user-mode implementation of threads, so we thought we could use the same Thread API, maybe with a new constructor flag saying, when you create a new thread in Java, whether you want it to be managed by the kernel or by the JDK. But in Java we try to think long term: Java is already big, it's been around for over two decades, and it will probably still be big two decades hence.

When we touch concurrency and threading in Java, there is an opportunity to rethink things. The Java architects said, "Yes, maybe we'll use the same thread API, but let's not assume that that's what we're going to do. Try to think: if there were no baggage of threads, no existing API, how would you design them?" At first, I thought there's not much to do; after all, what kind of operations do you want from a thread? You want to start it, you want to wait for it to finish and join it, maybe you want to ask it to interrupt whatever it is doing.

Shortly after we had this conversation with the architects, I read a very interesting blog post by Nathaniel J. Smith about something called structured concurrency. He credits the idea to Martin Sústrik, and this was one of those rare occasions when you read a blog post and you say, "That is absolutely right. That is the way to do things."

What is the main idea of structured concurrency? The central trick is that instead of creating threads in a fire-and-forget way, where you create a thread and that thread runs for as long as it wants and you have no control over it, threads are confined to a well-known lifetime that extends over a given code block, and I'll show you an example. In the current prototype, for lack of a better name, we just call it FiberScope; the name will probably change. We have this block that defines a fiber scope, and all the fibers that are created inside the scope are guaranteed to have terminated by the time we exit the scope.

How does that work? When we try to exit that scope, Java's try-with-resources is going to automatically call the close method on the FiberScope, and that's going to block until all the fibers that were created inside the block have terminated. This gives us some nice advantages. For example, when you create a thread and just forget about it, there could be exceptions thrown by that thread that no one is ever going to handle. This way, because you know that the thread is going to be born there and die before the end of the block, you can catch its exceptions and handle them. You can create a great many fibers, you can even nest these scopes to create a tree of fibers, and you can cancel all of them by just cancelling the scope.

Some nice stuff you can do with it: you can write a method that gets three tasks and wants to start them all in parallel, but as a race, where you only want to wait for the first one that finishes. FiberScope gives us termination queues; for each of the tasks we spawn a new fiber, which is essentially free, just like creating objects in Java, and we assign them to the termination queue. Then we block on that queue and take the result of the first fiber that has terminated, and because we cannot leave the scope until all the others have terminated, as soon as we have a result, in the finally block we cancel all the remaining fibers, and then we can leave the scope.

We can do the exact same thing with just a little change. Instead of saying that the fiber scope is cancellable, we can give it a deadline and say that we are only willing to wait until that deadline for any of the threads. If one of the threads finishes by that deadline, then we're good; if not, the fiber scope is going to automatically cancel all of them. This allows us to write interesting things in a very nice way.
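FiberScope never shipped under that name, but the same "first result wins, with a deadline, cancel the rest" pattern exists in the long-standing ExecutorService.invokeAny, which this sketch uses instead:

```java
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class Race {
    // "Start several tasks, keep the first result, cancel the rest,
    // give up after a deadline", sketched with plain ExecutorService
    // rather than the prototype FiberScope API from the talk.
    static String firstOf(List<Callable<String>> tasks) throws Exception {
        ExecutorService exec = Executors.newFixedThreadPool(tasks.size());
        try {
            // The timeout overload is the deadline variant.
            return exec.invokeAny(tasks, 5, TimeUnit.SECONDS);
        } finally {
            exec.shutdownNow(); // cancel the tasks that lost the race
        }
    }

    static String demo() throws Exception {
        Callable<String> fast = () -> "fast";
        Callable<String> slow = () -> { Thread.sleep(2_000); return "slow"; };
        return firstOf(List.of(fast, slow));
    }
}
```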


Fibers are by far the biggest use case that we currently envision for continuations. We expect people to use fibers, not continuations directly, but there are other things that you can do with continuations, perhaps not as useful as fibers, but interesting nonetheless. One of them is generators, which you may be familiar with from Python.

This is how you'd implement generators using continuations; there is a bug here, but generally this is how you'd do it, just to show you how easy it is. What this gives us is the following: imagine that we have an infinite array or an infinite collection of the Fibonacci sequence, and we want to iterate over it until we get to a number that's greater than 10,000. We could use streams for that, but in this way we get an Iterable, something that looks like a collection. Notice how we generate it: we generate it in an imperative way. First, we yield zero; once we yield, we go back to the caller, and that zero is going to go into num. Then we start an infinite loop and keep yielding numbers one after another. Every time we yield a number, we leave that piece of code and go back, and so we can write iterators in a very nice way.
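The generator class from the slide isn't in the transcript, but the effect it produces can be seen by hand-writing the same Fibonacci iterator in plain Java: the state that a continuation would keep implicitly (where we are between yields) has to be kept explicitly in fields.

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

public class Fib {
    // Without continuations, the generator's "yield" state has to be
    // kept by hand in an Iterator; with a generator, the same logic is
    // a straight-line loop that simply yields each number.
    static Iterator<Integer> fibonacci() {
        return new Iterator<>() {
            int a = 0, b = 1;
            public boolean hasNext() { return true; } // infinite sequence
            public Integer next() {
                int result = a;
                int next = a + b;
                a = b;
                b = next;
                return result;
            }
        };
    }

    // Collect Fibonacci numbers up to and including limit.
    static List<Integer> below(int limit) {
        List<Integer> out = new ArrayList<>();
        Iterator<Integer> fib = fibonacci();
        for (int n = fib.next(); n <= limit; n = fib.next()) {
            out.add(n);
        }
        return out;
    }
}
```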

The last thing I want to show you is how these continuations compose. In fact, they compose much better than monads, because they can nest and they have these scopes. This example doesn't really make sense, but this time we want to generate the stream of prime numbers combined with some user input: we want to find the next prime number, then read a line from the console and concatenate it with the number we found, so we're doing a blocking operation inside the generator. What happens if we run it inside a fiber?

Right now we have two nested continuations: we have the continuation for the generator, and every time we yield from the generator, it's going to go back into the for loop. But every time we go to the console and read a line from the user, that is going to suspend the fiber's continuation, the enclosing scope. In this way we can compose different kinds of effects, as they are sometimes called, in a transparent way.

This project is still under heavy development; you can find the wiki page here. We don't have early access binaries yet, but you can build the code yourself, and right now we're mostly interested in input on the structured concurrency API. Any help you can provide with that would be appreciated, and that is all I had.

Questions And Answers

Participant 1: Thank you. How does that differ from or overlap with co-routines? Looks quite similar to me.

Pressler: Co-routines are another name for the same thing. The nomenclature here in the world of continuations is a problem. At least in the past, co-routines were sometimes a different name for one-shot delimited continuations. It's just that the name co-routines is now mostly associated with usages of continuations that have some syntactic expression, something like async/await, where at the syntax level you say this piece of code is a continuation. We don't do that here; any method is allowed to yield a continuation.

There's another distinction between kinds of continuations, symmetric and asymmetric, and co-routines at one time used to refer to symmetric continuations. These are asymmetric continuations, but basically they're different names for the same basic idea.

Participant 2: We often see continuations and tail calls talked about together, for the CPS transform. Are there any plans to add tail calls, the wonderful missing piece of the puzzle, into Java, plans to add them within my lifetime? I know there have been plans forever.

Pressler: We don't actually need tail calls to implement these continuations. However, the goal of Project Loom is to add continuations, fibers, and tail-call elimination. That is one of the goals of the project; it's just that we haven't started yet, and we're probably not going to start until continuations are released, at least. That's going to be a next step. I just don't like talking about it, because some people start expecting it, but yes, that is one of the other goals of the project.

Participant 3: Are you aware of any projects in Python that would add continuations?

Pressler: Python has generators, which are a form of continuations, but beyond that I'm not that familiar with Python.

Participant 4: Do you envisage Project Loom being released in one big bang, in one release, or are you going to be releasing the different goals in different releases of Java?

Pressler: I've been asked if we would be willing to release continuations before we release fibers. The answer to that is absolutely not, because continuations are a very low-level construct, and once we release continuations, there could be 105 libraries built on top of them. First, we want people to get used to one. However, it is possible that we will release fibers before we open continuations as a public class. We will likely release something before we do tail calls, for example. It will probably be multiple chunks; I'm not sure what those chunks are, but basic functionality for fibers is probably going to be in the first release.

Participant 5: You mentioned that you have a pay-as-you-go memory footprint. Does that mean that you have to load the executed stack to continue executing, because you can't continue calling forwards? How does that work?

Pressler: I can speak about that now, or you can watch other talks where I discuss the implementation. The continuation stacks are stored in the Java heap inside two Java arrays, and we do copy them back and forth; the question is how to do it fast. Right now, we have a solution that allows us to copy just one frame eagerly and the rest lazily. We can talk about this later, or you can watch another talk where we cover the implementation.

Participant 6: I think introducing continuations in Java opens the gate to memory leaks, another gate, and I'm wondering if there are any patterns that we can stick to to avoid memory leaks. Are fibers one of them?

Pressler: I think you can get memory leaks in the sense that you can have a piece of code that starts executing, holds on to some object and never terminates. We have that problem today already with threads: you can create a thread that at some point either lives forever or sleeps forever while still holding on to some stuff, but this could probably be exacerbated if you can have a million threads instead of just a thousand. Application developers are not supposed to use continuations directly; they're supposed to use fibers. You have the same problem with fibers as with threads in that regard, possibly more.

We don't have a good solution for what happens if this problem is exacerbated, but if you have any ideas, I will be happy to hear them. I can say that when it comes to the garbage collector, this is actually easier, because if you have a thread, all the objects that are referenced by that thread are considered GC roots. GC roots need to be traversed during stop-the-world pauses, and they can actually increase your pause time. Continuations are not GC roots; as long as they don't run, the GC doesn't look at them. It doesn't solve the memory leak problem, but at least they don't introduce an additional burden on the garbage collector.




Recorded at:

Jun 20, 2019