Facilitating the Spread of Knowledge and Innovation in Professional Software Development


Panel: Startup and VM Futures


Summary

A lot of the techniques and approaches that are used for developing and improving software performance are tried and tested rather than innovative - but where does that leave startups that leverage the VM? What does the future hold?

Bio

Monica Beckwith works at Microsoft. Anil Kumar works as a Performance Architect for Scripting Languages Runtimes at Intel. Gil Tene is CTO and co-founder of Azul Systems. Mark Stoodley is Eclipse OpenJ9 and OMR Project Lead at IBM. Sergey Kuksenko is a Java Performance Engineer at Oracle.

About the conference

Software is changing the world. QCon empowers software development by facilitating the spread of knowledge and innovation in the developer community. A practitioner-driven conference, QCon is designed for technical team leads, architects, engineering directors, and project managers who influence innovation in their teams.

Transcripts

Vidstedt: I'm Mikael Vidstedt. I work for Oracle, and I run the JVM team, and I've been working with JVMs my whole career.

Stoodley: I'm Mark Stoodley, I work for IBM. I am also the project lead for Eclipse OpenJ9, which is another open-source Java virtual machine. I've also spent my whole career working on JVMs, but primarily in the JIT compiler area.

Tene: I'm Gil Tene. I'm the CTO at Azul Systems, where we make Zing and Zulu, which are JVMs. I have been working on JVMs, not for my whole career, just the last 19 years.

Kumar: I'm Anil Kumar from Intel Corporation. I can say I've been working on JVMs for the last 17 or 18 years, but not in the way these guys on the left side are talking; I'm more in showcasing how good they are. I also chair the OSG Java committee at SPEC, which makes these benchmarks. If you've heard about SPECjbb2005, SPECjbb2015, SPECjvm, or SPECjEnterprise, they all came out of the committee Monica [Beckwith] and I have worked on. Anything related to how to make these benchmarks better, how you can use them, or what issues you see, I'm perfectly happy to answer. We are working on a new one, so if you have any suggestions…

Kuksenko: I am Sergey Kuksenko. I am from Oracle, from the Java performance team. My job is making Java, the JDK, the JVM, whatever, faster, and has been for probably the last 15 years.

GC and C4

Moderator: One of the things that we have in common on this panel is startup and JVM futures and responsiveness. Everybody here has contributed in some way, shape, or form to improving the responsiveness of your application if it runs on the JVM. If you have any questions in that field, with respect to, say, futures, then I think talking to Mark [Stoodley] about that would be a good option, or you can just start talking to Gil [Tene] as well, where he can tell you everything about LTS, MTS, and anything in between. There's an interesting thing that I wanted to mention about ZGC as well as C4. Maybe you want to talk about C4. Do you want to start talking about C4 and how it changed the landscape for garbage collectors?

Tene: I did a talk yesterday about garbage collection in general. I've been doing talks about garbage collection for almost 10 years now. The talk I did yesterday was actually a sponsored talk, but I like to put just educational stuff in those talks too. I used to do a GC talk that was very popular. I stopped doing it four or five years ago, because I got sick of doing it, because it was the only talk anybody wanted me to do. The way the talk started is I was trying to explain to the world what C4 is, and it turned out that you needed to educate people on what GC is and how it works. We basically ended up with a one-hour introduction to GC, and that was popular. What I really liked in the updated thing is, when I went back and looked at what I used to talk about and how we talk about it now: I used to explain how GCs work right now, and how this cool, different way of doing it that solves new problems, that is needed, which is C4 and Zing, does its thing, and how it's very different, and how I was surprised we were the only ones doing it.

Yesterday I was able to segment the talk differently and talk about the legacy collectors that still stop the world and the fact that all future collectors in Java are going to be concurrent compacting collectors. There are three different implementations racing for that crown now. I expect there to be three more, probably. I wouldn't be surprised if there's a lot of academic work too. There's finally a recognition that concurrent compaction is the only way forward for Java GC in the next decade. It should have been for this decade, but I'll take next decade. C4 probably is the first real one that did that. We've been shipping it for nine years. We did a full academic peer-reviewed paper on it in 2011. It is a fundamental algorithm for how to do it, but there are probably 10 other ways to skin the cat. The cat is concurrent compaction. Everybody needs to skin it.

Vidstedt: I think you're absolutely right. I think the insight has been that, for the longest time, throughput was basically the metric. We wanted Java to be really fast on the throughput side, and then, maybe 10, 15 years ago, was when we started realizing, "Yes, we've done a lot of that." It's not like we can sit back and just relax and not work on it, but I will say that the obvious next step there was to make everything concurrent and get those pause times down to zero. Yes, lots of cool stuff happening in that area.

Stoodley: That's actually a broader trend, I think, across all JVMs. Throughput did use to be the only thing that people really cared about. The other metrics, like responsiveness, like startup time, like memory footprint, are all now coming to the fore, and everybody's caring about a lot of different things. It introduces a lot of interesting challenges in JVMs: how do you balance what you do against all of those metrics, how do you understand, how do you capture what it is that the person running a Java application wants from their JVM in terms of footprint, in terms of startup, in terms of throughput, in terms of responsiveness, etc.? It's a very interesting time to be a JVM developer, I think.

Tene: I think the reason you've heard how many years we're all at it is because it's very interesting to work on JVMs.

Different Implementations of Compacting Concurrent Collectors

Participant 1: In the same way that the legacy collectors let you pick a trade-off between latency and throughput, I was curious to see what the panel thinks about the different implementations of compacting concurrent collectors these days, and how you would say they're differentiating themselves in their runtime profiles. For a little bit of context, I've been using Java for 20 years. I haven't been on the JVM side; I've been on the other side. I've been causing problems for you guys for 20 years. By the way, not everyone's just interested in the GC talks. We did like the coordinated omission talks. That was remarkable. I was just curious to see if you could reflect on that: how do you see the different collectors differentiating themselves in that low pause-time, low latency space?

Tene: It's hard to tell. I think they're still very much evolving, and where they will land and whether there will be differentiation on throughput versus responsiveness is still a question. I think there are multiple possible implementations that all could be good at both latency and throughput; that whole myth that you have to trade them off is false. There's simple math that shows a concurrent collector will beat a stop-the-world collector on efficiency. There is no trade-off. Forget about the trade-off. There are implementation choices and focuses, for example, memory elasticity, focusing on small footprint versus large, is fragmentation a problem or not, which battles you want to pick and compare. We might find that there's some rounded way of handling all of them, or maybe there's specialization. I think there's a lot for us to learn going forward.

For example, for us, with Zing, we probably started from the high performance, high throughput, low latency world, and we've been slowly coming towards the small. Today, you can run it in containers with a couple of cores and a couple of gigabytes in it. It's great. For us, 200 megabytes seems really, really small, like tiny. Then you have these guys, who are starting there and actually trying to compact from there. Those are very different concerns. I don't know exactly where the crossovers are, at what size of heap, size of footprint, maybe even throughput. They're not architectural; they're more about the focus of that implementation. I think that over the next three, four, five years, we're going to see multiple implementations evolving, and they'll either have an excellence in one field or wide breadth or whatever it is. Obviously, I think that the one we have now in production is the best, but I fully expect to sit here with a panel of five other guys comparing what we do well.

Stoodley: I would agree with that. I think, for us, it's still early days. We're still building out our technologies in this area. There's still a lot to do just to get to state of the art in the area. There's lots of interesting things to do, and we'll see what it looks like when we get there. In the end, how successful we'll be and whether there will be differentiation is going to come down to people like yourselves asking us questions and showing us examples, giving us things to prove how well it works or how well it doesn't work. That will drive how well-developed these things are going forward. We need your help.

Realising New Features

Participant 2: Can I ask a follow-up question? When are we going to deprecate or take out CMS and G1 from the options?

Vidstedt: I have good news for you. Eight hours ago, I think, now, approximately, CMS was finally actually removed from the mainline codebase.

Tene: This is for 15 or 14?

Vidstedt: We never promise anything in terms of releases, but this is mainline JDK. The next release that comes out is 14, in March. That much I can say.

Participant 2: [inaudible 00:11:54]

Vidstedt: The reason why I'm hesitating here is that the only time you can tell whether a feature is actually shipped in a release or not is when the release has actually been made. We can always change our minds. Basically, I will say that this is unlikely for CMS, but maybe people come around and say, "Hey, CMS is actually the greatest thing since sliced bread," and, on top of that, "we're willing to help maintain it," because that's the challenge for us.

Tene: That last one's the big one. Ok, all you guys saying it's great: are you going to maintain it?

Vidstedt: Exactly. We're not debating that there are use cases where CMS is very good, but the fact is that it has a very high maintenance cost associated with it. That's the reason why we've looked at deprecating and now finally removing it. Again, you'll know in March, whichever day it is, if it's in 14 or not, but I'd bet on it not being there.
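
For context, the removal being described here landed as JEP 363. On a JDK 14 build, the old flag should be ignored with a warning rather than failing hard; a sketch of what that looks like (the exact warning text may vary by build):

    java -XX:+UseConcMarkSweepGC -version
    OpenJDK 64-Bit Server VM warning: Ignoring option UseConcMarkSweepGC; support was removed in 14.0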

Tene: The other part of that question was the really interesting one. When are we going to deprecate G1?

Vidstedt: I think this is back to how things are evolving over time. What I know about myself is that I'm lousy at predicting the future. What we typically do is we listen to people that come around, and they say, "We have this product and it's not working as well as we want it to," be it on the throughput side or on the pause-time side, or it's not as efficient as it could be, or footprint, as Gil [Tene] mentioned. All of those things help drive us. We make educated guesses, obviously, but in the end, we do need the help from you figuring out what the next steps really are. G1 may, at some point, well be deprecated. I'm actually hoping that it will be, because that means that we're innovating and creating new things. Much like CMS was brilliant when it was introduced 20 years ago or so, it has served its purpose, and now we're looking into the future.

Tene: Let me make a prediction, and then we'll look at the recording in the future. I want people to remember that yes, I'm that Zing guy and C4 and all that, but we also make Zulu, which is straight OpenJDK with all the collectors that are in it, including CMS that's about to go away. I think G1 is a fine collector in Zulu. It's probably the best collector in the Zulu thing, especially in Java 11 and above, as you've said. I think it's around for a while. If I had to predict, I think we'll see maturation of the concurrent compacting collectors that can handle throughput with full generational capability in OpenJDK somewhere in the next three to five years, to a point where they handle the load, they're not experimental, maybe not default yet, but can take it. At that point, you're going to have this overlap period between that and the existing default one, which is G1. I think that will last for another five-plus years, at least, before any kind of deprecation. If I had to guess, G1's not going away for at least a decade, probably 15 years. If it goes away in 15 years, we'll have really good things to replace it with.

Vidstedt: Exactly. Thanks for saying that. We try to be at least very responsible when it comes to deprecating and removing things, because we, again, realize that old things have value in their respective context, let's say. CMS, for example, I remember the first time I mentioned that we were going to deprecate it. It was back in 2011 at JavaOne. I know that people had been talking about it before, but that was the first time I told people in public. Then we actually deprecated it in, I want to say 2000-something, a few years ago. We've given people some amount of, at least, heads up that it's going away. We like to think that the G1 collector is a good replacement, and there are other alternatives coming as well. We do spend significant time on removing things. We realized that that comes with a cost as well.

Stoodley: Interestingly, in OpenJ9, we have not been removing collectors so much. We do have some legacy collectors from 15 years ago, like a policy called optavgpause, which nobody really believes actually optimizes for average pause anymore. Our default collector is a generational concurrent one, similar to what CMS is, and we've kept that. We have a region-based GC; it's called balanced. We are working on pause-less GC. The irony is not lost on me that I have to pause while saying pause-less, because it's very different than saying pauseless. Anyway, we have all of these things. One of the interesting features of OpenJ9 is we build the same source code base into every JDK release. JDK 8, 11, and 13 right now all have the same version of OpenJ9 in them, in their most recent release. Deprecating some of these things, like GC policies, is a little bit harder for us but, on the plus side, it forces us to keep investing in those things and making sure they work really well, providing lots of options for people depending on what it is that they want from their collector.
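
The policies listed here are selected on the OpenJ9 command line with -Xgcpolicy. A few illustrative invocations (app.jar is a placeholder; consult your build's documentation for the full set of policies):

    java -Xgcpolicy:gencon -jar app.jar        # default: generational concurrent, CMS-like
    java -Xgcpolicy:balanced -jar app.jar      # region-based collector
    java -Xgcpolicy:optavgpause -jar app.jar   # legacy policy mentioned above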

Tene: Actually, picking up on what you just said about the same JVM across releases, we do the same in our Zing product. The same exact JVM goes into Java 7, 8, 9, 11, 13. HotSpot had this at some point. For a very short period of time, it was actually a named thing, the HotSpot Express release model, which I personally really liked. I really liked having a JVM that can be used across multiple releases. There's goodness in the cool things that were worked on for Java 14 bringing speed to Java 11 and that stuff. I would love to see a transition, a way to do that again, and it does have this little "deprecation is harder" problem, but only at the JVM level, not at the class library level. HotSpot Express, I think, was a great idea in the OpenJDK world. I don't know what you think about it.

Vidstedt: I think there are definitely pros and cons. Obviously, you can get more of the investment and the innovation that is happening in mainline, or more recent releases, let's say, all the way to the old releases as well. What I think we found was that it came into existence at a time when JDKs were not actually being shipped. JDK 7 was effectively stalled for, I think it was, 5.5 years or something like that. We needed to get the innovation into people's hands, so, therefore, we had to deliver the JVM into the only existing release at the time, which was JDK 6. Once the JDKs started coming more rapidly and more predictably, the reason for backporting things was less of an issue, let's say, and there's also, again, the cost of making sure that everything works. There's a lot of testing that needs to happen to make sure that new innovation works not only on the mainline version but on all the other versions in the past as well. There are trade-offs; some are good, some are more challenging.

Tene: I would say that the community benefit that I would most put on this is time-to-market for performance improvements. As it stands today, you invest usually in the latest. They just get better. The GCs get better. The runtime gets better, whatever it is. Lots of things get better, but then those better things are only available as people transition and adopt. When we're able to bring them to the current dominant version, you get a two-year time-to-market improvement. It's exactly the same thing, but people get to use it for real a couple of years earlier. That's really, I think, what's driving us in the product, and probably you guys are the same way. I'd love to see the community pipeline do that too, so Java 11 and 13 could benefit from the work on 15 and 17 on the performance side.

Stoodley: It's also from a platform angle too, so improvements in platforms. Containers weren't really a thing when JDK 8 came out, but they've become the thing now. If most of the ecosystem is still stuck on JDK 8, as a lot of the stats say, then it forces you to backport a whole bunch of stuff, and it's extra work to do that backporting in order to bring that support for modern platforms into the place where everyone is. From our perspective, it's just an easier way to provide that support for modern platforms, modern paradigms that would otherwise have to take a change in API, perhaps. You have to jump the modularity hurdle, or whatever it is that's holding you back from moving from 8 to 11. That's getting easier and easier for a variety of reasons, and the advantage of doing that is getting greater and greater. Don't think that I'm trying to discourage anyone from upgrading to the latest version of the JDK. I want people running all modern stuff. But I recognize that it's a reality for a lot of our customers, certainly from an IBM standpoint, and stakeholders from the point of view of our open-source project, that they're on JDK 8. If we want to address their immediate requirements, JDK 8 is where you have to deliver it.

Vidstedt: This is not going to be a huge dataset, I guess, but how many people in here have tried something after 8, so 9 and later? For how many did that work?

Participant 3: We're running 11. We're running a custom build of 11 in production with an experimental collector. It's a small sample set, highly biased.

Vidstedt: Ok. Out of curiosity, what happened? Ok, didn't work.

Tene: Actually, when I look around, I start with that question, "How many have tried?", and then I say, "How many have tried in production?" Ok. How many are in production? Same number, good. How many don't have 8 in production anymore? There you go. That's a great trend, by the way, because across our customer base for Zulu, for example, which we've been tracking for a while, in the last two months we've started seeing people convert their entire production to running on 11. Running on 11 doesn't mean coding to 11; it means running on 11, because that's the first step. I'm very encouraged to see that in real things. We have the first people that are doing that across large deployments, and I think that's a great sign. Thank you, because you're the one who makes it good for everybody else, because you end up with a custom build.

Challenges in Migrating

Participant 4: I was just going to share that the challenge for us getting from 8 to 9 was actually the ecosystem. The code didn't compile, but that was just Java 9's modules. I completely understand the reason why modules exist. I totally understand the ability to deprecate and remove things. Used it, loved it, many years ago; can't believe it's still in the JDK. Our biggest challenge was literally just getting the module system working from 8 to 9. We couldn't migrate because the ecosystem wasn't there. We had to wait for ByteBuddy, for example, to get up onto 9. Going from 9 to 10 and 10 to 11 was literally IntelliJ migrate to the next language version. We had lots of challenges before that, though. I tried going from 8 to 11. That was a complete abject failure. It was just so complicated to get there. We went 8 to 9, got everything working on 9, and then just went 9 to 10 and 10 to 11. 9 to 10 and 10 to 11 were like a day's work; 8 to 9 was about 3 months, because we had to wait for the ecosystem. It was simply not possible, but that was a long time ago.
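
For anyone making that migration today, a common first step is scanning for JDK-internal API usage with jdeps, which ships with the JDK, and temporarily opening modules while libraries catch up (myapp.jar is a placeholder):

    jdeps --jdk-internals myapp.jar
    # lists dependencies on internal APIs such as sun.misc.Unsafe, with suggested replacements

    java --add-opens java.base/java.lang=ALL-UNNAMED -jar myapp.jar
    # stopgap: opens an internal package to reflection until dependencies are fixed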

Tene: I think your experience is probably not going to be the typical one, because a lot of the ecosystem took a while, and it went straight to 11. As a result, you won't have it on 9. There's a bunch of people that are doing this now rather than when you did it, and for them, 9 and 10 might actually be harder than jumping to 11, because there are things that only started being supported in 11. For most people, I think we're going to see a jump straight from 8 to 11. They're both out there.

Participant 4: Yes, we started our 11 migration basically the day that 11 was out, and we tried to migrate. We're very early.

Vidstedt: I think what we're also seeing is that the release cadence that we introduced started with 9, but 10 was obviously the first one that then came out 6 months later. It takes time to go to 9, to some extent. There are a few things that you do need to adjust; ByteBuddy is a good example of that. What I think we saw was that the ecosystem then caught on to the release cadence and realized that there isn't a whole lot of work moving between releases. As a matter of fact, what we're seeing more now is that all the relevant libraries and frameworks are proactively looking at mainline changes and making sure that, once the releases do come out, it may not be the first day, but it's at least not all that long after the release that they ship supporting versions. I think it's going to be easier for people to upgrade going forward, not just because the big change really was between 8 and 9, but also because the libraries are more active in keeping up.

Tene: I like to ask people for support, and voice, and a good example of one where I'd like to voice "please move faster" is Gradle. I really wish Gradle was 13-ready the day 13 came out, and I'll ask them to do it when 14 comes out so it'll be ahead of time. Please, wherever it is that you can put your voice to issues and stuff, make it heard.

Efficiency of GCs

Kumar: One question, on the back of you guys talking a lot on the GC side: one of the trends we've started seeing is many deployments in containers, and the next part, in interactions with customers, I'm seeing the use of function as a service. That's the use case I wanted to check: are any of you here planning for that yet? Because I do see some use. When that happens, I don't think the GCs right now are considering the case of being up for just 500 milliseconds.

Stoodley: Epsilon GC is built for exactly that use case.

Tene: Don't do GC, because there's no point; you're going to be gone soon anyway.

Kuksenko: It wasn't built for that use case, it was for a different purpose, but it's extensively used for that.

Stoodley: It works well there.

Kuksenko: I'd rather say, it doesn't work well.

Tene: It does nothing, which is exactly what you need to do for 500 milliseconds. It does not work very well, yes.
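
For the curious, Epsilon shipped as an experimental no-op collector in JDK 11 (JEP 318): it allocates but never collects, so the JVM exits when the heap is exhausted. Enabling it looks something like this (MyFunction is a placeholder class):

    java -XX:+UnlockExperimentalVMOptions -XX:+UseEpsilonGC -Xmx512m MyFunction
    # zero GC work; an allocation failure ends the process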

Kumar: Any thoughts about adding testing for which GC might be good for function-as-a-service situations? The reason I'm asking this is that 10, 15 years ago, when Java came [inaudible 00:27:55], people were seeing the same issue: with a C program you get repeatability, it takes so many microseconds; with Java it can be anywhere from one millisecond to one minute. With function as a service in the cloud, people are looking for guarantees: what is your variability for that function, within what range. I feel, right now, the GC will be in trouble at the rate we saw [inaudible 00:28:20].

Tene: I think we're measuring the wrong things, honestly, and I think that as we evolve, the benchmarks are going to need to follow where the actual usage goes. If you look at function as a service right now, there are a lot of really cool things out there, but the fundamental difference is between measuring the zero-to-one and the one-to-many behaviors. Actual function-as-a-service deployments out there, the real practical ones, are all one-to-many. None of them are zero-to-one. There are experiments people are playing with. Maybe soon we can get to a practical zero-to-one thing, but measuring zero-to-one is not where it is, because it takes five seconds to get a function as a service started. It's not about the GC; it's not about the 500 milliseconds, microseconds, or whatever it is. The reason nobody will stick around for just 500 milliseconds is that it costs you 5 seconds to start the damn thing.

Now, over time, we might get that down, and the infrastructure might get that down. It might start being interesting. I fundamentally believe that for function as a service to be real, it's not about short-lived, it's about elastic start and stop when you need to. The ability to go to zero is very useful, so the speed from zero is very important, but it's actually the long-lasting throughput of a function that is elastic that matters. Looking at the transition from start to first operation is important, and then how quickly do you get quick is the next one, and then how fast are you once you've been around for a while. Most function as a service will be running for hours, repeating the same function and the same thing very efficiently. Or, to say it in a very web-centric way, we're not going to see CGI-bin functions around for a long time. CGI-bin is inefficient. Some of you are old enough to know what I'm talking about. We're not going to see the same things. The reason we have servlet engines is because CGI-bin is not a practical way to run, and I wish the Hadoop people knew that too.

The behaviors that we're seeing now that I think are really intriguing are the AOT/quick-start/decide-what-to-compile-first things at the edge, the eventual optimization, and the trade-off between those: can you do this and that, rather than this or that. GC within this is the smallest concern for me, because I think the JIT compilers and the code quality are much more dramatic, and the CPU spent on them at the start, which you guys do some interesting stuff around, I think is very important. GCs can adapt pretty quickly. We're probably just seeing weirdness because nobody planned for these cases, but it's very easy to work that out of the way. GCs don't have a fundamental problem in the first two seconds; all we have to do is tweak the heuristics so they get out of the way. It's the JIT compilers and the code quality that'll dominate, I think.

Kumar: The other cases I'm seeing, like health care or others, where a large image or a directory comes in, they want to shut it down. They don't want a warm instance sitting around, due to security and other things. You have all of that in your heap, and you don't end up doing the GC immediately; you can't manage to analyze it and GC, and so you're not there on responsiveness.

Tene: You see that people want to shut it down because it's wasteful, but that's after it's been up and doing nothing for minutes, I think.

Vidstedt: I completely agree with you. The time to warm up, or whatever we want to call it, is a thing, and a lot of that is JIT compilation and getting the peak-performance code in there. The other thing is class loading and initialization in general. I think that the trend we're seeing and have been working on is attacking that in two different ways. The first one is by offloading more of that computation, if you will, to before you actually start up the instance. With JDK 9 and the module system, we also introduced the concept of link time. Using jlink, you can create something ahead of time, before you actually start up the instance, where you can bake in some of the state or precompute some of that state so that you don't have to do it at startup. That's one way of doing that. Then, we're also working on language- and library-level optimizations and improvements that can help make more of that stuff happen, let's say, ahead of time, like statically computing things instead of having to execute them at runtime.
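
A minimal jlink example of that link-time step (the module list and output path are placeholders):

    jlink --add-modules java.base --output custom-runtime
    custom-runtime/bin/java -version
    # produces a trimmed runtime image with some startup state precomputed at link time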

Tene: OpenJDK 12 and 13 have made great strides in just things like class-data sharing, which I think has cut it in half.

Vidstedt: Yes, it's been pretty impressive. We have spent a lot of time on it. We especially have one guy in Stockholm who spends like 24 hours a day working on just getting one small thing at a time improved when it comes to startup. He picks something for the day or the week and just goes at it. CDS has been improving in a couple of different ways, including on the simplicity side. Class-data sharing, for those of you who don't know it, is basically taking the class metadata (this is not the code itself, but all the fields and bytecodes, all the rest of the metadata around classes) and storing that off in something that looks very much like a shared library. Instead of loading the classes from JAR files or whatever at startup, you can just map in the shared library, and you have all the relevant data in there.

Simplicity in the sense that it's been there for quite a while, since JDK 5 if I remember correctly, but it's always been too hard to use. We've improved how you can use it. It is now basically: start up the VM, and the archive will get created and mapped in next time. The other thing is that we've improved what goes into it, both more sharing and more computations stored in the archive. Also, we've started adding some of the objects on the Java heap in there as well. More and more stuff is going into the archive itself. I think what we've seen is that the improvements (I'm forgetting the numbers right now; we talked about this at Oracle Code One earlier this year) mean the startup time for Hello World and Hello Lambda and other small applications has improved. It's not an order of magnitude, but it's a significant improvement, let's say.
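
As a concrete sketch of the basic CDS workflow described above (recent JDKs ship a default archive, so the explicit dump step is often unnecessary; myapp.jar is a placeholder):

    java -Xshare:dump                # write the archive of JDK class metadata
    java -Xshare:on -jar myapp.jar   # map the archive in instead of re-parsing classes at startup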

Kuksenko: I have to add that we attack startup from several areas. We have AOT right now. We finally did CDS, class-data sharing, not just for the JDK classes; it's possible to use it for your application class data.

Tene: AppCDS.

Kuksenko: AppCDS is our second option. The third option: before main, some of the guys worked through all the static initialization of our class libraries. Nobody cares when the JVM has started; people care when main is finally executed, but we have to pass through all of that static initialization before that. It was also reduced.
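
The application-level flow referred to here became much simpler with dynamic archives in JDK 13 (JEP 350); a sketch, with myapp.jar and myapp.jsa as placeholders:

    # first run: record the application's classes into an archive at exit
    java -XX:ArchiveClassesAtExit=myapp.jsa -jar myapp.jar
    # later runs: map the archive in
    java -XX:SharedArchiveFile=myapp.jsa -jar myapp.jar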

Tene: The one unfortunate thing here is the name, class data sharing, because the initial thing was about desktops not having to have multiple copies of this. We should probably call it something like class computation preloading. It's saving us all the class loading time, all the parsing, all the verification, all the computation of just taking those classes, putting their memory in the right format, and initializing stuff. The sharing part is the least of it.

Vidstedt: I agree. We're selling the wrong part of the value at this point. You're right.

Stoodley: We got caught by that too, because we introduced our shared classes cache in Java 5. It's true that it did share classes when we introduced that technology, but very quickly, after Java 6, we started storing ahead-of-time compiled code in there, and now we're storing profile data in there, and other metadata and hints and all kinds of goodness that dramatically improve the startup time of the JDK and of applications running on the JDK, even if you have big Java EE apps. It's sharing all of that stuff, or storing all of that stuff, and making it faster to start things up.
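
On OpenJ9, that cache is driven by -Xshareclasses; something like the following (cache name and jar are placeholders) enables the shared cache, including the stored AOT-compiled code mentioned above:

    java -Xshareclasses:name=mycache -Xscmx100m -jar myapp.jar
    # the first run populates the cache; later runs reuse class metadata and AOT code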

Tene: I think that's where we're going to see a lot more improvement and investment too. I think the whole CDS, AppCDS, whatever we call it going forward, and all the other things, like stored profiles, we know how to trigger the JITs already, we don't have to learn on the back of 10,000 operations before we do them, that stuff. We call that ReadyNow. You guys have a different thing for this. We have JIT stashes or compile caches or whatever we call them. I think they all fall into the same category, and the way to think of the category is we care about how quickly a JVM starts and jumps into fast code. We're winding this big rubber band, then we store our curled up rubber band with the JVM so it could go, "Poof," and it starts. That's what this CDS thing is and all the other things we're going to throw into it over time, but it's all about being ready to run really quick. The idea would be we have a mapped file, we jump into the middle of it, and we're fast. That's what we all want, but we want to do this and stay Java.

Java as a Top Choice for Function as a Service

Kumar: I think the second part of that one is: ok, we showed that it could be fast, it could be pretty responsive. I have been at Intel and with other customers across many environments, not just Java. When it comes to function as a service, which is a rising use case in the cloud, I don't see that Java is the top choice. I see Python, I see Node.js, and other languages. Is there anything at the programming level being done so people see Java as the top choice for function as a service?

Stoodley: Do you think that choice is being made because of performance concerns or because of what they're trying to do in those functions?

Kumar: What they're trying to do and how easy it is to be able to set up and do those things.

Tene: I think that function as a service is a rapidly evolving world, and I've seen people try to build everything in it. It seems to be very good for glue right now, for things like occasional events, not event streaming, but handling triggers, handling conditions. It's very good for that. Then, when you start running your entire streaming flow through it, you find out a lot. I think that when you do, that's where you're going to end up wanting Java. You could do Java, you could probably do, I don't know, C++ or Rust. Once you start running those things, what matters is not how quickly it starts, it's how it performs.

Right now, within the runtime languages, the ones where you actually have state and GC and all that stuff and don't have to worry about it yourself, Java dramatically outperforms everything else. There's a reason most infrastructure, Cassandra, Elastic, Solr, or Kafka, is in Java, on a JVM. It's because when it actually comes down to how much you get out of the machine, it dramatically outperforms a Python or a JavaScript or a Ruby. There are great ways to get stuff off the ground if you don't care how fast it runs, but if you start actually using a lot of it, you're going to want something that uses the metal well. C++ uses the metal well and Rust uses the metal well. You could write in those. Go is somewhere in between, but Java dramatically outstrips Go in performance right now, for example.

Vidstedt: The other thing we have with Java, I'd like to think at least, is the serviceability and observability and debugging aspects of it. It executes quickly, but if something isn't working as well as you'd expect, you have all these ways of looking into what's actually going on. That's much harder with C++, for example.

Tene: It's got an ecosystem that's been around for 20 years and isn't NPM.

Stoodley: Checkmark, not NPM.

Participant 4: That's why we're on the JVM for our use case, because it's like a Toyota Camry. You can go to the shop, buy a new indicator bulb, and plug it in. I can get a Thai tokenizer or a Chinese tokenizer and plug it into Java. It's open source, it's on GitHub. The ecosystem is right there.

Stoodley: The other thing is that the runtime is engineered to scale well across a wide variety of machines. No matter how much iron you throw at it, it can scale from mainframes down to little small devices. You don't have to think about that. In other languages, you have to think hard about that in order to get that degree of scalability, and it's work, and it's hard.

Participant 5: Why is Java faster than Go?

Tene: Because it has a JIT. Period. It's that simple. Go as a language doesn't have any limitations like that. It's just Go the runtime and the choices they've made right now. Go could have a compacting collector, Go could have a fully multi-tier JIT if you wanted to, and people have done it, using LLVM backends for Go. Right now, if you run Go, you're running an AOT, non-speculatively optimized piece of code; there's no way that thing can compete with speculative optimizations in a JIT. That's it.
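
To make "speculative" concrete, here is a small hypothetical illustration of the kind of bet a JIT can make that a static AOT compiler cannot:

    interface Codec { int encode(int x); }

    final class Identity implements Codec {
        public int encode(int x) { return x; }
    }

    class Hot {
        // If the JIT has only ever observed Identity at this call site,
        // it can speculatively inline encode() down to "return x",
        // guarded by a cheap type check. If another Codec implementation
        // is loaded later, the JVM deoptimizes and falls back to the
        // virtual call. An AOT compiler must assume any Codec can appear
        // and keep the indirect call.
        static int run(Codec c, int x) {
            return c.encode(x);
        }
    }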

Vidstedt: Another way of phrasing it is that we have man-centuries, if not millennia, behind us on the Java side, just the investment from multiple companies for 25 years or so. Just the fact that we've been around for much longer. That is obviously our secret, but not so secret, weapon in the end.

Stoodley: I have a slide at the beginning of my talk later today that shows all of the different investments that companies alone have made in building JIT compilers, AOT compilers, caching JIT compilers, now JIT servers, all kinds of investment. Like you say, it's definitely hundreds, and it may be thousands, of person-years of effort that have gone into building all of that stuff. Now, they're not all still around, but we've learned from all of those exercises, and that doesn't even count all of the academic work that's gone into building these things and making them better and enhancing them. I mean, the Java ecosystem is super rich in the investments that we've made in compilation technologies. It's pretty amazing, actually.

More Data Scientists Working on Java?

Participant 6: Are there any plans for Java to better support the work data scientists do, like machine learning?

Tene: We had this nice panel at Oracle. One of the things that I pointed out is that there's a bunch of libraries in Java that target this. The main bug that they have is the name. They just need to name themselves JPandas, and everybody would then know what it is. Then they'll know that Java has stuff that does AI. But they insist on naming themselves things other than those cool names everybody knows.

Vidstedt: I'll mention two things; I think we're running out of time. Project Panama is where we're doing a lot of the exploration and the innovation and implementation stuff, let's say, around how we can make machine learning come closer to Java. It is a multistep story, from the lowest level, leveraging the vectorized instructions in the CPU in a better way, to making use of already existing machine learning libraries. I'm sure that a lot of people are using Python for machine learning, but the library in the back end is not actually Python, it's some native code; it's just that the front end is Python. We want to enable that same use case on the Java side, where we can interact with the library using Java code in a better way.
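
On the vectorization leg of that story, the API that eventually came out of Panama is the incubating jdk.incubator.vector module (JDK 16 and later, so well after this panel; the names below are from that later API and may differ in your release). A rough sketch:

    // compile and run with: --add-modules jdk.incubator.vector
    import jdk.incubator.vector.FloatVector;
    import jdk.incubator.vector.VectorSpecies;

    public class Scale {
        static final VectorSpecies<Float> SPECIES = FloatVector.SPECIES_PREFERRED;

        // out[i] = a[i] * b, using whatever SIMD width the hardware offers
        static void scale(float[] a, float b, float[] out) {
            int i = 0;
            for (; i < SPECIES.loopBound(a.length); i += SPECIES.length()) {
                FloatVector va = FloatVector.fromArray(SPECIES, a, i);
                va.mul(b).intoArray(out, i);
            }
            for (; i < a.length; i++) {
                out[i] = a[i] * b;   // scalar tail
            }
        }
    }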

Participant 7: Is there a timeline?

Vidstedt: I never promise timelines. Keep track of Panama; there's a lot of cool stuff happening there right now. There have been presentations at the JVM Language Summit; I think we had one this year. Keep your eyes open.

Stoodley: There's lots of interesting research work going on around auto-SIMD and auto-GPU in the Java ecosystem. We have some projects, even with OpenJ9, that are looking at automatic GPU usage. I think it's coming; it's just not quite production-ready.

Tene: I think what you see on our side, we're making the JVMs and JDKs, is that we tend to look at the low-level parts: how do you get good vector libraries, and optimized ways to pass data across library boundaries, and all that. The actual ecosystem gap, I think, is at the top: the approachable library APIs for people who just want to write to this stuff. They're never going to write to the levels we just talked about; they're going to write to whatever JPandas is going to be. What we need is for people to go build JPandas, and it's not us, because it's an ecosystem item.

Stoodley: Once it gets built, then it's up to us to make it fast and make it work really well.

Participant 8: I think that would happen really quickly after something like Panama.

Tene: People can do it now. There's nothing holding them back. Panama will allow you to do it faster, faster meaning performance faster, but it's not about performance. It's about approachability.

Stoodley: Usability.

Participant 9: Maybe in the broader ecosystem that's consistent. For our use case, it's absolutely about performance and not about approachability.

Tene: Ok, fair.

Participant 10: There was a talk yesterday from Google about moving TensorFlow to Swift. After the talk, I asked why they're not planning to use a JVM language like Kotlin instead of Swift, which is very similar. The claim was that the fact that the JVM has garbage collection is a limitation in the machine learning world, and they prefer Swift because it uses reference counting instead. What's your take?

Tene: I think that's a religious argument, not a technical one.

Participant 10: I thought so.

Kuksenko: I'd rather say that reference counting is just a garbage collector that [inaudible 00:46:54].

Tene: Precisely, yes. A much less efficient garbage collector, that's what it is, yes.

 


 

Recorded at:

Feb 20, 2020
