InfoQ Talks to Azul Systems Gil Tene about Zing, Zulu, and New Releases

Azul Systems has been an industry leader in producing JVMs and algorithms that beat the industry standards for large-scale systems. Azul CTO and co-founder Gil Tene spoke candidly to InfoQ about how Azul tackles some of these industry challenges.

InfoQ: Let's start off with your new release of Zing. This came out back in September, just around the time of JavaOne. Can you explain the new features, and how it fits into your overall roadmap?

Gil: There are a few significant things in this release. The two main things I'd like to highlight are that this is the first release of Zing that supports Java 8, and it came out about 6 months after Oracle's first Java 8 release, which is pretty good. We're the only JVM vendor apart from Oracle to have shipped a Java 8 version, although of course we also have Zulu, our OpenJDK binary releases, which are also Java 8. So we're a little ahead of the market, and we want to be more aggressive than that with Java 9.

Secondly, we added the ability to record optimization logs with a feature set we call ReadyNow. This is focused on avoiding or eliminating slowdowns on warm-up or deoptimization storms. This can be very important for events like market open or close, or just slow response times for web applications. It's something we've been asked to work on by many of our financial services customers. In the history of Azul, after garbage collection, this has been the most requested feature by users.

This has been in development for a couple of years now, and the need has been clear for some time. We started with our first set of ReadyNow features earlier this year, where we addressed some of the root causes of deoptimization and added ways to direct the compilers to handle methods in specific ways. In the new release, we add the ability to take a log of optimizations from a previous run and use it to influence the decisions made by the current run. In particular, this allows the user to tell the JVM to avoid aggressive optimizations that were performed on a previous run but had to be deoptimized during that run, as this is the real user pain we're trying to address.
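The idea Gil describes can be sketched in miniature. The class below is purely conceptual - Zing's actual ReadyNow mechanism lives inside the JVM and is not exposed as a Java API - but it captures the shape of the feature: record which aggressive optimizations had to be backed out in a previous run, and consult that log before repeating them. The method names here are hypothetical.

```java
import java.util.HashSet;
import java.util.Set;

// Conceptual sketch only: a toy "optimization log" that remembers which
// aggressive optimizations were deoptimized in a previous run, so the next
// run can decline to repeat them. This is an illustration of the idea, not
// Zing's internal implementation.
class OptimizationLog {
    private final Set<String> deoptimizedLastRun = new HashSet<>();

    // Hypothetically called when the JIT has to throw away compiled code.
    void recordDeoptimization(String methodName) {
        deoptimizedLastRun.add(methodName);
    }

    // The next run consults the log before choosing an aggressive optimization.
    boolean allowAggressiveOptimization(String methodName) {
        return !deoptimizedLastRun.contains(methodName);
    }
}
```

In this toy model, a method that was deoptimized yesterday would be compiled conservatively from the start today, avoiding the deoptimization stall.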

InfoQ: What do you do when you have a run which differs significantly from previous baselines, what I sometimes call the "Non-Farm Payroll Day Problem"? What happens if the optimization decisions that you have on file aren't optimal for the extraordinary days?

Gil: If we're talking about optimizations that survived all the way through yesterday, but which really don't apply today, then as it stands, I don't think we can do much to help with that case yet. However, from what we've seen, it's quite an uncommon case. More common is an optimization that applied early on in yesterday's run, that turned out to be incorrect and was deoptimized and didn't survive to the end of yesterday's run.

For example, in the case of algorithmic trading, the code is listening to a lot of market data, and probably only acting fairly rarely (depending on the type of algo trading). So the compiler can optimize for the case of not trading, and then later, you find that you do trade, and so obviously you want that code to be fast as well.

Hopefully, by the end of the day, the JVM has settled down enough, so the initial overly aggressive optimizations have been deoptimized and reoptimized for the real cases, and you can take that knowledge and apply it without having to relearn it from scratch each day.

InfoQ: What are some examples of optimizations that fall into this class?

Gil: Class hierarchy optimizations based on small code sets that are later invalidated by classloading. Or any optimizations based on an untaken path, such as branch prediction. However, it's important to understand that we're not remembering the optimizations from yesterday, and we're not applying the compilations at startup. We're remembering the stats - that's an important difference. Java has clear semantics around initialization, and we can't force that to happen early. So we have a full set of stats, and only as classes become available can we kick in our optimizations, but they kick in with yesterday's counters.
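The class-hierarchy case Gil mentions follows a standard JIT pattern. In the sketch below (the class names are invented for illustration), while only one implementation of an interface has been loaded, the compiler can devirtualize and inline the call; when a second implementation loads later, that assumption is invalidated and the compiled code must be deoptimized.

```java
// Illustrates the class-hierarchy optimization Gil describes. While FastPath
// is the only loaded implementation, the JIT can turn the virtual call in
// dispatch() into a direct, inlined call. Loading RarePath later invalidates
// that assumption and forces a deoptimization and recompile.
interface Handler {
    int handle(int x);
}

class FastPath implements Handler {          // the only implementation early in the run
    public int handle(int x) { return x + 1; }
}

class RarePath implements Handler {          // loaded later, e.g. when a trade finally fires
    public int handle(int x) { return x * 2; }
}

class ChaDemo {
    static int dispatch(Handler h, int x) {
        // With a single loaded subtype, this call site can be compiled as a
        // direct call; once RarePath loads, it becomes a true virtual dispatch.
        return h.handle(x);
    }

    public static void main(String[] args) {
        Handler h = new FastPath();
        int warm = 0;
        for (int i = 0; i < 100_000; i++) warm += dispatch(h, 1); // warm-up phase
        Handler late = new RarePath();       // this class load invalidates the assumption
        System.out.println(warm + " " + dispatch(late, 21));
    }
}
```

ReadyNow's counters would let the second run compile `dispatch()` with both types in mind from the start, instead of paying the deoptimization cost again.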

InfoQ: Let's talk about Zulu. This is your new OpenJDK build, and I see in your press release that it's described as the only supported OpenJDK build. What about Red Hat's IcedTea product? Is it the case that Red Hat isn't shipping an OpenJDK 8 as IcedTea yet?

Gil: Yes, that's true. I check on it every couple of weeks or so, but as of right now, we have the only binary distribution of OpenJDK 8. You can get experimental source tree builds from other people, but Red Hat is pretty much the only other source people go to for OpenJDK binaries.

InfoQ: Debian unstable appears to ship an OpenJDK 8 build.

Gil: If you pull it down and do a "java -version" on it, you'll see that it claims to be Java 8 Update 40, which doesn't exist yet - Update 40 isn't due until March 2015. It's a great example of pulling a binary off the Internet and thinking that it means what it claims to mean. There is no Java 8 Update 40 today. This is a problem, because Docker uses that one for their 'official' build of Java, whereas when the real release of Update 40 comes out, it could be a very different binary. The "Update 40" name should not be used for what they're shipping today - and it goes beyond just one company, because it gives us all a bad name, as users don't know if they can trust OpenJDK builds.
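Beyond the headline banner of "java -version", a runtime reports its full version string through standard system properties, which is a quick way to inspect what a JDK binary actually claims to be:

```java
// Prints the version details a running JVM reports about itself. These are
// standard Java system properties, defined for all conformant runtimes.
class VersionCheck {
    public static void main(String[] args) {
        System.out.println("java.version:         " + System.getProperty("java.version"));
        System.out.println("java.runtime.version: " + System.getProperty("java.runtime.version"));
        System.out.println("java.vm.vendor:       " + System.getProperty("java.vm.vendor"));
    }
}
```

Of course, as Gil points out, these strings only tell you what the build *claims* to be - they are no substitute for a certified, TCK-tested binary.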

We've done something about this with Zulu as a direct reaction. Azul certifies our binaries: that a specific .rpm, .deb, or .zip with a specific checksum has passed all the tests (including our stress tests), is fully compliant with the Java Technology Compatibility Kit (TCK), and is certified compatible with the Java spec. We certify all of our released binaries, so that people can know that Zulu can be trusted. I don't know if it can completely fix the situation, but I think that having responsible builders of OpenJDK binaries identify their binaries as "good" can help sort out the confusion. For example, I'd love to see Red Hat do that, because I know that they produce very good and well tested JDKs.
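Checksum-based verification of the kind Gil describes is straightforward for a user to perform. The sketch below computes a SHA-256 digest with the standard `java.security.MessageDigest` API; the artifact bytes and the published digest here are stand-ins - in practice you would hash the downloaded .zip/.rpm/.deb and compare against the value the vendor publishes for that exact artifact.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

// Sketch of verifying a downloaded artifact against a vendor-published
// checksum. The "artifact" bytes below are placeholders for illustration.
class ChecksumCheck {
    static String sha256Hex(byte[] data) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        StringBuilder sb = new StringBuilder();
        for (byte b : md.digest(data)) sb.append(String.format("%02x", b));
        return sb.toString();
    }

    public static void main(String[] args) throws Exception {
        byte[] artifact = "pretend-archive-bytes".getBytes(StandardCharsets.UTF_8);
        String actual = sha256Hex(artifact);
        String published = actual; // substitute the vendor-published digest here
        System.out.println(actual.equals(published) ? "checksum OK" : "checksum MISMATCH");
    }
}
```

A matching checksum ties the file you downloaded to the specific binary the vendor certified and TCK-tested.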

InfoQ: As it stands, the OpenJDK that's in Debian (& Docker) hasn't passed any kind of TCK.

Gil: That's correct. We can know that because we can go to the page of signatories of the OpenJDK TCK and see that there's nobody there from Canonical or the Debian project. So it's impossible for the build they're shipping to be known to have passed the TCK.

UPDATE: Since this interview was recorded, Canonical have been added to the page of signatories for SE 8.

InfoQ: I notice that the FreeBSD Foundation are also a signatory of the OpenJDK TCK.

Gil: I was surprised by that too, but good for them, I think that's a great thing. I expect there are some FreeBSD-based appliances that need Java, and the FreeBSD folks are ahead of the game. The OpenJDK TCK is a fairly friendly license, and anyone who wants to build or distribute an OpenJDK binary that end users can trust should use it.

By the way, we have Zulu up on the Docker repositories. If you search for Zulu (or hopefully OpenJDK), then you can find binaries for JDK 6, 7 or 8.

About the Interviewee

Gil Tene is CTO and co-founder of Azul Systems. He has been involved with virtual machine technologies for the past 25 years and has been building Java technology-based products since 1995.
