
Java Futures, 2019 Edition



Java Language Architect Brian Goetz gives a tour of some of the features coming to Java next.


Brian Goetz is the Java Language Architect at Oracle, and was the specification lead for JSR-335 (Lambda Expressions for the Java Programming Language). He is the author of the best-selling Java Concurrency in Practice, as well as over 75 articles on Java development.

About the conference

Software is changing the world. QCon empowers software development by facilitating the spread of knowledge and innovation in the developer community. A practitioner-driven conference, QCon is designed for technical team leads, architects, engineering directors, and project managers who influence innovation in their teams.


Goetz: Today, I'm going to talk a little bit about where the Java language and platform is going in the next few years and a little bit about how we've changed the way we're going to get it there.

Let's talk a little bit about where Java has been and where it's going. Java is getting old, in a sense, Java has been around for almost 25 years. If I had $1 for every time Java was declared dead, I could probably retire. All of those predictions, so far, have turned out to be incorrect. Java is still the world's most popular programming platform, and we would like that to continue to be the case for a long time. How do we do that? Well, it's not a big secret, the answer is stay relevant. Stay relevant to the problems people want to solve, stay relevant to the hardware people want to run on, and don't break your promises. That's easy to say, it's a little bit harder to do, but you should see each of those themes going on in the things that I'm talking about in this talk.

Keeping Our Promises

In terms of don't break your promises, that's basically compatibility. From our perspective, the prime directive is stay compatible. It's my belief that Java is successful today because the Java code that you wrote 25 years ago just works. Old binaries still run, old source code still compiles, and we keep our users by keeping our promises. Now, this has a cost, it means that evolution of the language takes longer, it means there are certain things that we can't do or it's going to take longer for us to do. On the other hand, it allows people to adopt new functionality without us pulling the rug out from under them completely.

A perfect example of how we pulled this off was generics. Generics was a big disruptive change in the language and in the ecosystem, but nobody had to pay attention to it at any given time. You could take your old non-generic code and continue to run it, you could migrate it in whole or in part. You could migrate it now, later, or never. You could interoperate between generic code and non-generic code with graceful degradation at the boundaries, and there was no flag day where someone blew the whistle and everybody had to go recompile their code. That is how we can make significant upgrades to the platform without the risk of breaking our promises to the users and leaving people behind.

A similar example of compatibility is what we did with Lambdas. When we did Lambdas in Java 8, all of a sudden, libraries that had been written 20 years ago, that didn't even know that Lambdas were coming, worked with Lambdas on day one, because we played into patterns of programming, like single-method interfaces, that were in common use, rather than saying, "Oh, if you want to use Lambdas, you have to rewrite your libraries to use function types." That would have meant a 10-year adoption curve for Lambdas, rather than having them work with old libraries on day one. It's possible to do this, it takes longer, it's a little bit harder, but we can still get a good result.

First, Do No Harm

The important thing to remember when we're picking language features is that language features are forever, and once we put a feature in the language, we're never going to be able to take it away. Every feature interacts with every other feature, and it constrains every future feature we might want to do. That means we have to pick very carefully and sometimes the consequence of that is, we can't do this feature, or we can't do this feature right now.

Generics were a good example of this, in 1995, we knew the language needed generics. It's not like we didn't know what parametric polymorphism was, but we didn't know the right way to do it. We knew the wrong way to do it, the wrong way to do it was to copy C++, and I think we're all glad that we didn't make that choice in 1995 even though the consequence was we had to wait a lot longer to get generics in the language. I think we got a much better result than we would have if we had just copied C++ in 1995.

The same thing is true with Lambdas. In the 2005 timeframe, some of you may remember there was a vibrant debate in the community with many competing proposals about how to add Lambdas to Java, and I think we're all glad we didn't pick any of those proposals. It took maybe five years longer, but we got a much, much better result. As programmers, we tend to focus on language features because we look at code all day, the reality is the way we think about how to pick language features is, is this making it easier for people to build and maintain reliable programs?

If people write code, we want them to be able to keep their investment in that code for a long time. We don't want there to be arbitrary reasons why they have to throw that code out. The best way to do it is to make it easy to build reliable programs that you can read, you can understand, you can maintain, because the code that you can't understand is the code you're going to throw out. When we select features, we look at it through the lens of is this making it easier for people to build and maintain programs that they understand?

So, We’re Clearly Not Done

It should be clear that no language is ever finished and Java is no exception. I was talking, before the talk, to some folks in the lounge about why do programming language designers always want to add new features to languages, why aren't languages ever good enough? The reason is, well, the problems that we want to solve with programming languages change. The hardware we want to run on changes. Developer expectations change, developer fashions change. If we want to stay relevant to the way people want to program, the language has to evolve. That's ok, and we can continue to evolve, although we have to do it carefully because languages can get full, in the sense of not being able to add new things without breaking things, so we have to pick carefully.

In the Last Year (or So)

Let's talk about what's happened in the last year, or actually a little bit more. We recently switched to a different delivery cadence for Java. It used to be that we had these two to four-year feature-boxed releases, where we'd pick a giant feature like Lambdas, and we'd say, "That's the release driver for Java 8," and then we would take a wild crazy guess that we could get that done in three years, and we'd be wrong, big surprise, and then we'd be late. This wasn't good for us, this wasn't good for our customers. The perception was things were moving slowly, dates were unpredictable, but there were also a bunch of other ill effects that came from that release model.

It often didn't feel worthwhile to do smaller features because they always got stuck behind the big features. The users got frustrated because a feature that might have been done in the first six months of a release cycle had to wait three more years for the release to finish. We made the decision to switch to a six-month time-boxed release cycle. We've been doing this almost two years now, we've been able to deliver releases like clockwork, and it's been fantastic. Not only are we able to deliver things more frequently and be more agile in our planning, but from our perspective internally, our release management overhead has gone to almost zero. It's fantastic.

I get to spend all my time on engineering and almost no time on release management meetings and things like that because features don't board the train until they're ready. If something misses a train, no big deal, six months is not a very long time to wait for the next train. Whereas if the next train is four years away, people would move heaven and earth to try to get a feature in, and that didn't always work out as well as they would have hoped.

This transition from our perspective has been fantastic in terms of our ability to focus and deliver value more frequently. I think it's been a little bit of a challenge for users because they don't quite understand it yet, but it's actually fairly simple. The rate of innovation hasn't really changed, it's the rate of delivery that's changed. You can pick up every release if you want, or every other release, or one every three years. Whatever you want to do, it all works.

New Release Cadence

To illustrate, Java 9, which was released almost two years ago, was three and a half years in the making; there were over 90 JEPs (a JEP is our unit of feature planning). This was, as some of you probably know, somewhat of a disruptive release. The following releases, 10, 11, 12, followed every six months after that. They weren't as big as 9 because we hadn't been working on them for as long, but if you count up the JEPs in a six-month release like Java 10 and multiply it out to the scale of how long Java 8 or 9 took, you see that the rate at which features are being added hasn't really changed; the rate at which features are being delivered has changed.

The most recent version, Java 13, is already in ramp down and should be released in a few months, and we're already working on Java 14. These six-month releases are full feature releases. They're not as big and disruptive as 7, 8, 9 were, but they're not betas, they're real releases.

Preview Features

As we develop the platform faster, there's a risk that certain things will happen too quickly, especially because language features are forever. Once we put a feature in the language, we don't want to change it. We still need a way to make sure that we're putting the right features in the language, so what we decided to do was that for major features that are visible in the programming model, we have a mechanism called preview, where features will go through a round of preview. It's provisional: the feature is done, but we might change the paint color a little bit before we finalize it. These aren't experimental or beta features so much as they're provisional. We're test driving them, and the expected outcome is they'll be promoted to full permanent features in the next version or two.

This way, not only do you have a chance to see what's coming, you also have a chance to try it out. There's full IDE support, tooling support for all the preview features when they're released because they're part of the language specification. They're not just nailed on the side. If you want to use them, you have to turn them on because we don't want people to accidentally run them in production and find that out after the fact.

For example, if you're using the command-line tools, the Java compiler or the JVM launcher, you have to pass --enable-preview in order to use preview features. If you're using an IDE, in its language-level picker you'll see two different choices, one for 12 and one for 12 with preview, so you can pick which language level you want to have.

Current Initiatives

That's process, but you didn't come here to hear about process. You came here to hear about features, so let's talk about features. Our feature pipeline is better than it has ever been. In all the time that I've been involved in Java, I've never seen a feature pipeline this rich and this deep. We have a lot of projects going on; I'm not going to talk about them all, I'm going to talk about a couple of them. The ones that developers are probably most interested in are the features that are closest to the surface of the language. This is Project Amber; we call this right-sizing language ceremony. These are a lot of the features that developers have been asking for for a long time. They might not be deep, but they help with what we do every day.

Some of the other projects, Valhalla and Loom, are much deeper features in the sense that they start in the VM and work their way up through the language and have to do with adapting the programming model to run better on modern hardware, for example, or Project Loom is about fibers and continuations so that we can run millions of concurrent activities on a single JVM. Project Panama is about better interop with native code and native data. I'm not going to talk about all of these today, but I'm going to talk about Amber in some detail and Valhalla in a little bit of detail.

Local Variable Type Inference

I know this talk is called language futures, I'm going to talk about something that's already in the language because I'll bet for a lot of you it's still in your future. How many people here are still on Java 8? This is a feature we added in Java 10, which from your perspective is the future and from my perspective is the infinite past.

This is an interesting feature; it's not actually, from a language design perspective, a very interesting feature, but it was one of the most commonly requested features that we got: why do I have to type out the whole big name of a type? Why can't I just let the compiler figure this out for me? This is something that you might have seen in Scala, or in Kotlin, or numerous other languages, where instead of declaring a local variable with an explicit type, you can tell the compiler, "Go ahead and infer that for me. Compute the type of the initializer and make that the type of the variable."

It's not particularly deep, but it's not quite as shallow as some people think it is. Some people call this syntactic sugar; it is absolutely not syntactic sugar, it's deeper than that. The thing I like about this is, in any of these declarations, there are three things going on: there's a type, there's a variable name, and there's an expression. The most important thing in that line is the variable name, because that's the aspect that involves programmer creativity, where you've actually said what this variable means in my program. By eliding the type, it brings the variable name front and center, where it is more clearly in the user's field of attention, and makes the code easier to read if you have chosen good variable names.

How many people here routinely choose bad variable names? None of us choose bad variable names, if we don't choose bad variable names, we often don't lose a lot of readability. Sometimes we might, in which case, we're not making it illegal to declare an explicit type, we're just saying, "If you think it's more readable, go ahead and leave the type out, otherwise put it in if you think that's more readable." Developer’s choice, some people don't like when we give developers choices because bad developers might make bad choices. What we found is that's true, but it's pretty hard to stop bad developers from making bad choices.

The key thing here is the variable name is often the most useful thing, so let's give that priority. What are the restrictions? This works for local variables. It doesn't work for method return types or parameter types because those are part of the API, but for local variables, which are part of the implementation, it's perfectly ok. There are some weird cases because the Java language actually has some weird types in its type system, like intersection types and capture types. Every once in a while, this will expose a weird type to your program that you didn't realize was there. For example, if you ask for the type of this.getClass(), you might think that's Class<?>; it's actually Class<capture of ?>.

What's a capture type? Well, we've all managed to ignore capture types for a long time, even though they've been there. They're basically existential types that say, whenever I use a wildcard, there's a specific type that I mean at that point in the program, and the capture type describes that. Most of the time we can ignore this, but I bring this up because this is bringing some of the fine points of the type system a little bit more in your face, and sometimes people get surprised by it. More interestingly, if I say List.of(1, 2, "three"), you might think that would be a List<?>, but actually it's a List<? extends Serializable & Comparable<...>> and some more, because the common supertype between those parameters is not just Object or a wildcard. Every once in a while, this may bite you. Just be aware the type system that you've been working with all along is a little more complicated than you thought, and most of the time that's just been hidden from you.
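To make this concrete, here's a small sketch of the feature as it shipped in Java 10; the class, method, and variable names are mine:

```java
import java.util.HashMap;
import java.util.List;

public class VarDemo {
    // 'var' works only for local variables; method signatures keep explicit types.
    static int countTitles() {
        // Before: Map<String, List<String>> byAuthor = new HashMap<>();
        // After: the compiler infers the type from the initializer,
        // and the variable name leads the declaration.
        var byAuthor = new HashMap<String, List<String>>();
        byAuthor.put("Goetz", List.of("Java Concurrency in Practice"));

        // Inference can surface non-denotable types: the element type here is
        // an intersection like Serializable & Comparable<...>, not Object.
        var mixed = List.of(1, 2, "three");

        return byAuthor.get("Goetz").size() + mixed.size();
    }

    public static void main(String[] args) {
        System.out.println(countTitles());
    }
}
```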

At a meta-level, it's interesting, this is one of the most commonly requested features. People would frequently say, "When is Java going to do this? It's crazy that I still have to type out these big long type names when in Scala I don't have to." On the other hand, when we decided finally that we're going to go ahead and do this, there was this vocal backlash of, "Oh, you're just giving in to fashion. Oh, you're going to make code unreadable. You're just enabling bad developers to write bad code," so, you can't win.

Then when we actually released the feature, the reality was completely different, it was fine, no one complained. I think this was all just a lesson about how we fear change, but it will take a little bit of time for good practices and good style to emerge. In aid of that, we've written some documents, a FAQ and a style guide about how most effectively to use this feature, and we plan to keep doing that for other new features.

Like most features that give you a choice, it requires some judgment. That should be a good thing. Most of us, hopefully, have good judgment most of the time, but it does also mean that people are given an opportunity to do the wrong thing and we need to help each other find the right style.

Switch Enhancements

That's from my perspective past, but still future. What else is coming up? In Java 12, which is also in the past, we had a preview feature, which was enhancements to switch. This was re-previewed in 13 with a small change. This feature sedimented out of a bigger feature called pattern matching, which I'll talk about, where we had explored this as part of pattern matching and then realized, well, here's a small chunk of it we can deliver earlier. This is one of the benefits of the more rapid release cadence.

The switch statement in Java is one of the more unfortunate sets of choices in the language. I think we were copying a little bit too literally from the C specification when the language was designed. There are a lot of complaints about switch. Very often, you want to use it as an expression, but it's a statement, so you have to simulate an expression by assigning to a common variable in every arm.

For example, people certainly hate having to break on every arm of a switch, which is irritating, but much worse than irritating is it's error-prone. It's a way to make mistakes that are hard to notice. In the context of pattern matching, we looked at how does the switch statement have to evolve to support pattern matching, and then we identified a couple of things that we could do to make it more generally useful.

As an example of what's wrong with switch, here's a typical switch that's an expression masquerading as a statement. We declare a local variable, and then in each arm of the switch we assign to that local variable, and then, even though we've covered all the cases, we still have to have a default case saying something about the world is broken. There's a lot of repetition here: we're repeating the assignment, we're repeating the breaks, and then we have this annoying boilerplate that we have to write.

People complain about boilerplate. I don't like boilerplate either, but for a very different reason. Most people don't like boilerplate because it's stuff they have to type, and they don't want to type it. I don't like boilerplate because it's a place for bugs to hide. I want to eliminate boilerplate because that eliminates the places where bugs are going to hide.

Here's the same switch rewritten as a switch expression. It looks slightly syntactically different, but the basic concept is the same. This is probably a lot closer to the code you had in your head when you sat down to write the previous code, which is: if it's Monday, Friday, or Sunday, the number of letters is six; otherwise, if it is Tuesday, it's seven; etc. Expressions have to be total, they have to provide a value in all cases, and because we're switching here over an enum, the compiler actually knows that it's total and doesn't make you write the default clause. We've squeezed away a lot of the repetition, but we've also squeezed away a lot of the sources of error. The code is shorter, but it also has fewer places for bugs to hide.
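Here's a sketch of what that switch expression looks like in the arrow form (using java.time.DayOfWeek rather than the talk's slide code; the syntax shown is the form that was eventually finalized in Java 14):

```java
import java.time.DayOfWeek;

public class SwitchDemo {
    // A switch expression over an enum: each arrow arm yields a value, there is
    // no fallthrough, and because all seven constants are covered the compiler
    // does not require a default arm.
    static int letters(DayOfWeek day) {
        return switch (day) {
            case MONDAY, FRIDAY, SUNDAY -> 6;
            case TUESDAY                -> 7;
            case THURSDAY, SATURDAY     -> 8;
            case WEDNESDAY              -> 9;
        };
    }

    public static void main(String[] args) {
        System.out.println(letters(DayOfWeek.MONDAY));    // 6
        System.out.println(letters(DayOfWeek.WEDNESDAY)); // 9
    }
}
```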

There are actually two sub-features going on here at once. One is that you can use switch either as an expression or a statement, and the other is we have a streamlined form of the case label, which uses an arrow instead of a colon, which means there has to be one thing on the right-hand side, either one expression or one statement, depending on whether it's a statement switch or an expression switch. That means, by design, you can't fall through, because fallthrough can only happen if you have more than one thing.

A lot of switches today are actually expressions in disguise. We did some searches over some typical bodies of code, and 80% of the switches actually wanted to be switch expressions anyway. Instead of making people specify something in a very roundabout way, we let them specify what they mean directly. These two improvements are orthogonal. You can use the streamlined case form in either expression or statement switches. They're two independent improvements to switch that work nicely together.

This was originally previewed in 12, we made a small change, re-previewed in 13, presumably it'll be a permanent feature in 14, unless the feedback in 13 tells us that we made some horrible error that we didn't anticipate, which could happen.

Multi-line String Literals

The other preview feature that's in Java 13 is multi-line string literals. String literals have been a common source of complaints in Java, and again, this is a trivial feature. There's not a lot of rocket science here, but this is something that people complain about a lot, where if you have a multi-line snippet of code, a snippet of JSON, or HTML, or SQL, or XML, or whatever, you have to mangle it up manually with backslashes, and quotes, and concatenation. That's boilerplate, and boilerplate is bad because it's a place for bugs to hide. What you'd like to be able to do is take a snippet of text and just paste it into your Java code without having to mangle it up. That's both easier to read, but it's also less error-prone.

This illustrates another aspect of the more rapid cadence, which was we originally were going to do this feature in 12. We had a different feature designed and at the last minute, we withdrew it because we realized we could do better. We withdrew it, stopped, redesigned it, and I think that the version that we have is significantly better than what we were considering in 12.

To give a quick example, this is what people have to write today; it's awful. You can write a multi-line string literal where the delimiter is three double quotes; a lot of languages have made that delimiter choice. The dots aren't actually dots, they're there to illustrate what we call accidental white space, which is when you have a multi-line string, you're likely to indent it with your code, but you don't actually want that indentation. You want some of the indentation, not all of the indentation. The dots illustrate the accidental indentation that the language is going to strip for you, and the white space that hasn't been rendered as dots is the essential indentation, relative to the delimiters, that gets kept. This way, if your IDE re-indents your code, it doesn't actually change the output of your program by adding or removing spaces. The intent is, you should be able to take a multi-line snippet of something, cut and paste it out of another editor, put it between the delimiters, and you're good to go.
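As a sketch of the finalized form (text blocks became standard in Java 15; the JSON snippet and names here are mine):

```java
public class TextBlockDemo {
    // A text block: the content starts after the opening """ and the position
    // of the closing delimiter determines which leading whitespace is
    // "incidental" and gets stripped.
    static String json() {
        return """
            {
                "feature": "text blocks",
                "status": "standard"
            }
            """;
    }

    public static void main(String[] args) {
        // The common indentation (aligned with the closing delimiter) is
        // stripped; the relative indentation of the inner lines is kept.
        System.out.print(json());
    }
}
```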

Pattern Matching

Those are features that have already shipped in some form, either as permanent features or as preview features. Let's look ahead a little bit, see what's on the board. I mentioned earlier a larger feature called pattern matching that we've been working on for a while that we intend to deliver out in phases. The first phase probably will come in 14, I hope, no promises. The basic concept underlying pattern matching is that we write a certain pattern of code all the time where we take some object, and we do some test against it. If the test succeeds, then we conditionally extract some data from it and we use the data. A cast is actually a very simple example of that. You say, "Is this thing an integer?" If so, cast it to integer and then use the integer value.

There's some repetition here, too: we're repeating the type name twice. That's a place for bugs to hide. How many people have ever accidentally cut and pasted some instanceof-and-cast code, changed the instanceof type, and forgotten to change the cast type? That's an easy mistake to make. It's super irritating because the cast actually isn't adding any value here. What are you going to do after you do instanceof Integer? The only thing you could possibly do is cast it to Integer. Not only does the language make you do it explicitly, but it gives you a chance to get it wrong. That's not how we want it to be. We'd like to get rid of that repetition because it's where bugs hide.

Some languages have chosen to address this with flow typing, but I think there's a much better answer hiding in there, which is pattern matching. A pattern basically fuses those three things that I mentioned, a test, a conditional extraction, and binding new variables, into one operation. We can rewrite this instanceof using a pattern by writing it like this: if (obj instanceof Integer intValue). The Integer intValue is a type pattern, and it combines the type test, are you an Integer, with a conditional extraction: if you're an Integer, cast it to Integer and bind the result to the fresh variable intValue, with exactly the scoping that you would expect, where intValue is valid inside the block but not valid outside the block.

It looks a little like a variable declaration; that's not an accident, and if you rewrite existing code that uses instanceof and cast with patterns, basically all the casts go away. That's nice, but this is really just scratching the surface of what pattern matching can do. I have some more examples that show that there's a little bit more depth here.
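A minimal sketch of the instanceof pattern (the syntax was finalized in Java 16; the method and names are mine):

```java
public class InstanceofPatternDemo {
    // A type pattern in instanceof: the test, the cast, and the binding of a
    // fresh variable are fused into one operation.
    static int doubled(Object obj) {
        // Old style:
        //   if (obj instanceof Integer) { int i = (Integer) obj; ... }
        if (obj instanceof Integer intValue) {
            return intValue * 2;  // intValue is in scope only where the test succeeded
        }
        return -1;
    }

    public static void main(String[] args) {
        System.out.println(doubled(21));     // 42
        System.out.println(doubled("nope")); // -1
    }
}
```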

Pattern matching also works nicely when you're doing short-circuiting, because the scoping rules are flow sensitive. For example, if you look at the code for an equals method that your IDE generates, it's generally this horrible mess of if this, return false; else if that, return true; else return this and that and the other thing, but not the other thing on Tuesdays. This is ugly code to read; you have to actually look at it carefully to make sure you understand it. If you were to express this using a pattern match, you could express your equals in a single expression, which is much more straightforward.

If the object is an instance of whatever class, bind it to a variable, and this.size equals that.size, and this.name equals that.name. You look at that, it's much easier to read, it's obvious what's going on, and because the scoping of pattern binding variables is flow sensitive, the compiler will be able to typecheck that. Yes, it works if I join these expressions with &&, but not with ||, because the binding might not be defined when you get to the other side of the ||. It gives us a better way to express a lot of things that we do every day. We can use it in instanceof like I showed, but we can also use it in switch. That's probably going to come in a later phase.
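Here's a hedged sketch of such a single-expression equals, assuming a hypothetical class with size and name fields (requires Java 16 or later for the instanceof pattern):

```java
import java.util.Objects;

public class Widget {
    private final int size;
    private final String name;

    Widget(int size, String name) {
        this.size = size;
        this.name = name;
    }

    // The whole equals method is a single expression: the instanceof test,
    // the cast, and the field comparisons are joined with &&, and the binding
    // 'that' is in scope exactly where the test has succeeded.
    @Override
    public boolean equals(Object o) {
        return o instanceof Widget that
            && this.size == that.size
            && Objects.equals(this.name, that.name);
    }

    @Override
    public int hashCode() {
        return Objects.hash(size, name);
    }

    public static void main(String[] args) {
        System.out.println(new Widget(3, "knob").equals(new Widget(3, "knob"))); // true
    }
}
```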

Here's another example of highly repetitive use of boilerplate code where you're repeatedly testing something against a bunch of types, instanceof integer, instanceof byte, instanceof long, instanceof double, etc., and in each case, you repeat the test, you have to repeat the instanceof a bunch of times, you repeat the assignment a bunch of times. There's a lot of repetition here. We can turn this into a switch statement with patterns where the case labels, instead of being constants, are type patterns. If the target is an integer, cast it to integer, bind it to i, use i in that case arm. If the target is a byte, same thing, etc.

The code got a little bit smaller, that's good. We got rid of some boilerplate, the business logic is starting to get more obvious, so that's all good. Then we can combine it with the expression switch feature that we talked about, and you end up with something like this, which is obviously a lot more concise, but it's also a lot more clear what's going on. Probably, this is the code you had in your head when you sat down to write it in the first place, but you couldn't write it, so you ended up writing this big nasty if-then-else-if instanceof chain with all the cut and paste errors that you could manage. The expression switch feature started out as a sub-part of the pattern matching feature; it sedimented out into its own feature, and the pattern matching will come later.
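A sketch of the combined form, patterns as case labels in an expression switch (this part of the plan shipped later, in Java 21; the example method is mine):

```java
public class TypeSwitchDemo {
    // Each arm tests the type, casts, and binds in one step, and the whole
    // switch is an expression that yields a value.
    static String describe(Object obj) {
        return switch (obj) {
            case Integer i -> "int " + i;
            case Byte b    -> "byte " + b;
            case Long l    -> "long " + l;
            case Double d  -> "double " + d;
            default        -> "something else";
        };
    }

    public static void main(String[] args) {
        System.out.println(describe(42));   // int 42
        System.out.println(describe(4.2));  // double 4.2
    }
}
```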


The pattern matching rabbit hole goes deeper than this. A lot of classes are just dumb aggregates for data. When we write one of these classes, there's a lot of repetition, where you've got a bunch of fields, but then you have to have a constructor, and an equals, and a hashCode, and a toString, and getters, and setters, and all of that. We've all experienced this frustration. The IDE will generate the code for us, but it doesn't help us read the code, it only helps us write the code.

Reading code is actually a lot more important than writing code. The reason I don't like code like this is all this repetition is an opportunity to make mistakes. If all this class is, is a dumb holder for a couple of fields, then it should be clear just by looking at the code that's all that's going on. I shouldn't have to read every line of boilerplate code to say, "Oh, yes. There's no code I had to read here." That's a frustrating experience, to read all this code and say, "Good. I didn't have to read any of that," because you don't get that time back.

People have asked repeatedly for "I want to be able to write a dumb data holder that morally is a class like this, but I don't want to have to write all of this." This is not a particularly deep feature, but I think it will make a lot of people happy. Just like enums were a specialized class where I gave up some flexibility and in turn got some extra features, records are a similar thing, where I say, "Here's a record. It's called Point. It has fields called x and y, and if I don't implement the standard members myself, I'll get sensible defaults for the constructor, and accessors, and equals, hashCode, toString, etc."

That's a nice feature, it's a pleasant reduction in boilerplate, but it's not a feature that's about boilerplate. It's about raising the level of semantics in your program. When you see a record, it's saying, "I am just a carrier for my data. I am not anything more than that," and because the class has committed to this semantic restriction, it allows us to infer all of the methods and other API members that are related to the state. This is a tradeoff we've seen before in the language. With enums, we gave up control over instance creation and in return, we got a lot of functionality for free. With records, we give up control over being able to decouple the API from the representation. We say the API is the representation. The representation is an x and a y, and that's the API: that's my constructor, that's my accessors, that's my equals, and in return, I get all of those things for free.
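A minimal record sketch (records became standard in Java 16; the Point example follows the talk):

```java
public class RecordDemo {
    // The record header declares the state; the constructor, accessors,
    // equals, hashCode, and toString are all derived from it.
    record Point(int x, int y) { }

    public static void main(String[] args) {
        var p = new Point(3, 4);
        System.out.println(p.x());                      // 3
        System.out.println(p.equals(new Point(3, 4)));  // true
        System.out.println(p);                          // Point[x=3, y=4]
    }
}
```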

Sealed Types

We're going to come back to pattern matching in a minute. This is, in some sense, a digression, but you'll see the connection shortly. If you've programmed in functional languages, you'll look at records and say, "Well, that looks like a nominal form of tuple, or what ML would call a product type." Records are one half of what are called algebraic data types: they are the products half. The other half, sum types, is also a very useful thing. A sum type is just a discriminated union; it's a way of saying a shape is a circle or a rectangle and nothing else.

We've seen this before in Java; enums are a form of sum type. A day is Monday, Tuesday, Wednesday, and so on; it's not anything other than those seven days. As we saw earlier in the switch example, when you program with sum types, it gives the compiler the ability to reason about exhaustiveness. If you say a shape is either a circle or a rectangle, and then you do something that covers circles and something that covers rectangles, the compiler should be able to say, "Ok, you've covered everything," without you having to say, "And if it's something else, then I don't know what to do," because that should be impossible. The compiler can figure that out for you.
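That exhaustiveness reasoning already works with enums and the switch expressions that shipped in JDK 14; this small illustrative sketch (the Day enum is hypothetical) needs no default branch because every constant is covered:

```java
enum Day { MON, TUE, WED, THU, FRI, SAT, SUN }

public class ExhaustiveDemo {
    // All seven constants are covered, so no default branch is needed;
    // if one were missing, this switch expression would not compile.
    static boolean isWeekend(Day d) {
        return switch (d) {
            case SAT, SUN -> true;
            case MON, TUE, WED, THU, FRI -> false;
        };
    }

    public static void main(String[] args) {
        System.out.println(isWeekend(Day.SAT)); // true
    }
}
```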

Sealed types and records, sums and products, go very nicely together. Continuing with the shape example, which might not be the best example but is good for fitting on slides: I say a point is an x and a y, and then, "I have a sealed interface Shape, and its subtypes are Circle and Rectangle; a circle is defined by a center and a radius, and a rectangle is defined by two corner points." The compiler can reason about exhaustiveness, and this connects very nicely with pattern matching, again.
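Here is a sketch of that hierarchy in the sealed-types syntax that later shipped in JDK 17; the names Shape, Circle, and Rect follow the slides:

```java
record Point(double x, double y) { }

// A Shape is a Circle or a Rect and nothing else; any other
// "implements Shape" elsewhere is a compile-time error.
sealed interface Shape permits Circle, Rect { }
record Circle(Point center, double radius) implements Shape { }
record Rect(Point lowerLeft, Point upperRight) implements Shape { }

public class SealedDemo {
    public static void main(String[] args) {
        Shape s = new Circle(new Point(0, 0), 1.0);
        System.out.println(s instanceof Circle); // true
    }
}
```

The `permits` clause is what gives the compiler a closed list of cases to check against.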

Pattern Matching, Again

As we get deeper into the phased delivery of pattern matching, we'll have, in addition to the type patterns we've seen before, deconstruction patterns, which will let us take apart a shape into its constituent parts. A deconstruction pattern for Circle looks like this. It looks a little bit like the constructor; that's not an accident. When you say if (shape instanceof Circle(var center, var radius)), you're saying: if the thing is a circle, cast it to Circle, extract its center property and its radius property (I haven't told you how yet), and put them in local variables, so I can just use them.

Where does this deconstruction pattern come from? Well, it comes from the declaration of your class. Just like the constructor, you can think of a deconstruction pattern as an anti-constructor. A constructor takes some state and makes an object; a deconstructor takes an object and explodes it into its state. Records, in addition to getting constructors for free, will get deconstruction patterns for free. For the declaration of Circle that I had on the previous slide, I'm automatically able to say, "Take a circle apart into its center and radius," and I'll get full type checking. I didn't have to say var here; that's just a matter of convenience. I could have written the types out explicitly, but you can use type inference in patterns just as easily. There's no magic here; we're not guessing based on field names. If you want to use a deconstruction pattern on a class, that class has to have a deconstruction pattern. Classes like records will get them for free; other classes will have to write them, the same way we write constructors today.

Here's how it all comes together. If I have a switch over a shape, I can say, "Well, is it a circle?" and extract the circle's contents and say the area of a circle is pi r squared. Is it a rectangle? I extract the corner points, compute delta x and delta y, and there are no other cases, so I don't have to say default, and I'm done. The compiler will check exhaustiveness, and if I haven't covered all the cases, it will fail to compile. There are a lot of things that make sense to model in terms of these sums of products, or sealed types of records. It allows us to pass them around and take them apart very easily without a lot of instanceof and casts. Once you have this feature in the language, you'll wonder how you lived without it.
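Here is roughly how that switch looks in the syntax that eventually shipped as record patterns in JDK 21; the shapes are the illustrative ones from the earlier slides, and the sealed hierarchy is what lets the compiler verify there is no missing case:

```java
record Point(double x, double y) { }

sealed interface Shape permits Circle, Rect { }
record Circle(Point center, double radius) implements Shape { }
record Rect(Point lowerLeft, Point upperRight) implements Shape { }

public class AreaDemo {
    // No default branch: Circle and Rect exhaust the sealed Shape, and
    // the deconstruction patterns bind the components directly, even
    // nested ones like the Points inside the Rect.
    static double area(Shape s) {
        return switch (s) {
            case Circle(var center, var r) -> Math.PI * r * r;
            case Rect(Point(var x0, var y0), Point(var x1, var y1)) ->
                    Math.abs((x1 - x0) * (y1 - y0));
        };
    }

    public static void main(String[] args) {
        System.out.println(area(new Rect(new Point(0, 0), new Point(2, 3)))); // 6.0
    }
}
```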

Project Valhalla

That was Project Amber: mostly language productivity features. Now I want to talk briefly about Project Valhalla, which is a much deeper feature. It has to do with how the VM lays out data in memory. This is a project we've been working on for a long time; you've probably heard me talk about it before. Some of you are wondering, "Gee, Brian, you were talking about this five years ago. You're telling me you're not done?" No, we're not done. We've been working hard on it for a long time, we've made a lot of progress, and we're not done.

This is an example of staying relevant to the hardware. Hardware has changed tremendously in the last 30 years. Thirty years ago, the cost of a memory fetch and the cost of an arithmetic op were about the same: each was a handful of cycles. Now, I can issue four arithmetic ops in a cycle, but a cache miss might cost me 300 cycles to go back to main memory. That's a factor of 1000 by which the relative cost model has drifted in 30 years. It stands to reason that whatever we did for memory layout 30 years ago probably isn't optimal for today's hardware.

If the hardware we're running on has changed, the behavior of the runtime should change to take advantage of today's hardware. Unfortunately, the in-memory layout we have today induces a lot of cache misses, because of object identity. The natural way to implement objects with identity is through pointers; pointers mean indirection, and indirection means cache misses. But not all objects need identity; a lot of objects are just dumb data. They don't need identity, so they shouldn't necessarily have to pay for it.

Data Layout

How do we pay for it? Well, we pay for it in the form of layout. If I have my point class, and I have an array of them, this is what the layout looks like in memory. The array is an object with a bunch of pointers in it, and each element is a pointer that points to a point object, which has a header and some data payload. First of all, on a memory density basis, I'm not doing very well because I have these headers for every point and about 60% of the memory in this diagram is just overhead for object headers and pointers and isn't actual data.

That's limiting how much data I can put in the heap. It also means that if I want to walk down this array of points and do a calculation across it, I'm risking a cache miss every time I traverse one of those arrows. I didn't ask for this layout directly, but it's the layout I got, because of the assumption that these points are objects and we care about object identity. The VM can't really figure out on its own that you don't want object identity, and therefore it's very hard to optimize away.

When confronted with this, sometimes developers will do stuff like this. They'll say, "Forget about this object abstraction. I'm just going to shred things into arrays of primitives," which is a totally fine trick, except now your code is unmaintainable, unreadable, and error-prone. Sometimes we have to do that, but sometimes people do it just out of some bizarre obsessive-compulsive disorder. I look at this as we've given people a bad choice. We've said, "Abstraction or performance, pick one," and this is a chance for people to pick wrong. Given this particular choice between anything and performance, developers almost always pick wrong: they go for performance even when they don't need it.
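The shredding trick looks something like this hypothetical sketch: instead of an array of Point objects, the coordinates live in two flat primitive arrays, which is dense and cache-friendly but gives up the abstraction:

```java
public class ShreddedPoints {
    // Parallel primitive arrays replace Point[]: no per-element object
    // headers or pointers to chase, but the Point abstraction is gone
    // and the index bookkeeping is now the caller's problem.
    static double totalDistanceFromOrigin(double[] xs, double[] ys) {
        double sum = 0;
        for (int i = 0; i < xs.length; i++) {
            sum += Math.sqrt(xs[i] * xs[i] + ys[i] * ys[i]);
        }
        return sum;
    }

    public static void main(String[] args) {
        double[] xs = {3, 0};
        double[] ys = {4, 5};
        System.out.println(totalDistanceFromOrigin(xs, ys)); // 10.0
    }
}
```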

The cure for this is for the programmer to say, "I actually don't care about identity. You're free to lay things out the way you want," because the data layout we want, in most cases, is something like this: one big array with alternating X and Y values. The question is, what code do we write to get this layout? We don't want it to be vastly different from the code we're writing today, so we model this as a modifier on a class. We're calling this inline; we used to call these value classes, but people found that confusing because the word "value" meant too many other things. Inline means you can take the contents of this class and inline it into other classes and into arrays. It's just the data; there's no identity here. It looks like a small change. It's not a small change; it goes all the way down to the metal, which is why it's been taking a while.
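For flavor, the modifier he's describing looked roughly like this in the Valhalla prototypes of the time; this is a sketch of the prototype's `inline` syntax, not valid code on any released JDK:

```java
// Hypothetical Valhalla prototype syntax (will not compile on a
// released JDK). The modifier declares that Point has no identity,
// so a Point[] can be laid out as one flat run of x,y pairs.
inline class Point {
    int x;
    int y;
    Point(int x, int y) { this.x = x; this.y = y; }
}
```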

This is a way for the programmer to say, "This is what I mean," and for the VM to respond with, "I can give you a flat and dense layout that doesn't have any of that object header overhead and pointer indirection," and you can do calculations over this much more quickly. This is one of those tradeoffs that I was talking about where you give something up, you give up identity. What does that mean? It means you're giving up mutability, you're giving up representational polymorphism, but in return, the VM is giving you the ability to have things laid out in memory in a much more hardware-friendly way. You get flat and dense layout, you can fit a lot more data in memory, and you can access it more quickly.

Value Types

You can think of this as faster classes or as programmable primitives; both intuitions work, but inline types are really more like classes in terms of the programming model. They can have methods, fields, interfaces, and type variables; they can have private fields; they can use encapsulation; they can use generics. The motto is: codes like a class, works like an int.

Who cares about this? My claim is everybody cares. If you're writing applications with large data sets, you care about locality and footprint. If you're writing libraries, this is a mechanism by which you can write more efficient libraries: better data structure implementations, numerics, smart pointers, or wrappers like Optional that give you functionality but don't add extra memory footprint or indirection.

If you're a compiler writer, like Charlie Nutter writing the JRuby compiler, you aren't forced to model Ruby's numerics using objects and take a boxing hit and an indirection hit every time you do a numeric calculation. My claim is this benefits everybody: whether you're using the feature directly because you're working with large data sets, or you're working in a language like JRuby and just get faster performance, or you're working in Java and HashMap just got 40% faster, so all of your programs get faster as a result. This is something that benefits the whole ecosystem, whether you use it directly or indirectly.

Project Valhalla

We've been working on this for quite a while. We've gone through multiple rounds of prototypes to investigate various aspects of the problem, and we're homing in on what I think is the first prototype that should be usable for people to actually write data structures with. We expect that to be out in August or so, so if you're interested in trying this out, it's getting to be a good time. Let me give you a quick example of the improvements that are possible here.

Here's a typical textbook complex matrix multiplication. I wrote the Complex class by literally copying out of my math textbook. People look at this code and say it's not very complicated, but then they shudder, because they see new Complex all over the place. They say, "That's going to allocate a lot of stuff," and it does. Then we write the multiplication, again copying from the textbook, and it's straightforward, except for the fact that we're allocating a ton of objects. If we run this, our performance is going to be limited by all of that allocation cost. If I change Complex to be an inline type, all that cost goes away.
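The textbook class he's describing looks something like this sketch: an ordinary identity class in which every arithmetic operation allocates a fresh Complex, which is exactly the cost that making it an inline type would remove:

```java
public class ComplexDemo {
    // Ordinary identity class: each arithmetic op allocates a new box.
    static final class Complex {
        final double re, im;
        Complex(double re, double im) { this.re = re; this.im = im; }

        // Textbook complex multiplication:
        // (a+bi)(c+di) = (ac - bd) + (ad + bc)i
        Complex mul(Complex that) {
            return new Complex(re * that.re - im * that.im,
                               re * that.im + im * that.re);
        }
    }

    public static void main(String[] args) {
        Complex i = new Complex(0, 1);
        Complex sq = i.mul(i);                   // i * i = -1
        System.out.println(sq.re + " " + sq.im); // -1.0 0.0
    }
}
```

A matrix multiply over Complex[][] then does this allocation in its inner loop, which is what dominates the boxed benchmark.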

This is a microbenchmark, so take it with a grain of salt. It was run on a pretty typical modern desktop-class system. When I ran the inline-class version, from a wall-clock-time perspective I saw a speedup of 12x. That's pretty good. A 3x reduction in instructions executed and a 1000x reduction in allocation: that's all pretty cool. Why did that happen? The interesting number is the last one, instructions per cycle. Remember, earlier I said modern CPUs can issue multiple arithmetic ops per cycle, but in reality they rarely do, because they're spending all their time waiting for the memory subsystem to cough up data. In the boxed version, we were retiring about one instruction per cycle. In the inline version, we were retiring almost three times that many.

That's why, even though we executed only 3x fewer instructions, the wall-clock time running the benchmark improved by much more than 3x. Very promising, still a work in progress, but it's starting to bear fruit.

Summing up: our pipeline is awesome. Amber is already delivering features, and there are lots more good features coming. The bigger projects, like Valhalla, Panama, and Loom, are also starting to bear fruit. It's a really exciting time. As you noticed from the title of my talk, this was the mid-2019 edition; I've gone to the Apple versioning scheme for my talks because things change so quickly. Come back next year and gauge how much progress we've made.


Recorded at:

Jul 18, 2019