Understanding Java through Graphs

Summary

Chris Seaton discusses Java’s compiler intermediate representation, to understand at a deeper level how Java reasons about a program when optimizing it.

Bio

Chris Seaton was a Researcher (Senior Staff Engineer) at Shopify, where he worked on the Ruby programming language, and a Visitor at the University of Manchester. He was formerly a Research Manager at the Oracle Labs Virtual Machine Research Group, where he led the TruffleRuby implementation of Ruby, and worked on other language and virtual machine projects.

About the conference

QCon Plus is a virtual conference for senior software engineers and architects that covers the trends, best practices, and solutions leveraged by the world's most innovative software organizations.

Transcript

Seaton: My name is Chris Seaton. I'm a Senior Staff Engineer at Shopify. I'm going to talk about understanding Java programs using graphs. Here's where I'm coming from with this talk. I've got a PhD in programming languages, but I've got a personal interest in languages beyond that. One of the great things about working in programming languages is that you can have a conversation with almost anybody in the tech community. Almost anyone who uses programming languages has opinions on languages, has things they wish were better in languages, things they wish were faster in programming languages. A great thing about working in languages is you can always have a conversation with people and you can always understand what they want out of their languages. You can think about how you can provide that as someone who works on the implementation of languages, which I think is a really great thing about working in this field. I'm formerly from the Oracle Labs VM research group, part of the Graal team. Graal is a new just-in-time compiler for Java that aims to be really high performance and to give many more options for how we optimize and compile applications. I worked there for many years, but I've since moved to Shopify to do compiler research on the Ruby programming language. I work on Ruby, but I work within a Java context, because I'm using Java to implement Ruby. That's the TruffleRuby project. TruffleRuby is a Ruby interpreter running on the JVM, not to be confused with JRuby, which is another existing implementation of Ruby on the JVM. What I'm trying to do is apply Java compilation technology to make Ruby faster, to make Ruby developers happier. We take the same technology that's in Java and apply it to Ruby.

Outline

What's this talk about? This talk is about understanding what your Java program really means. We can read our Java source code. We can have a model for how a Java program works in our heads. We could use, if we wanted, the Java specification to get a really deep understanding of its semantics and what it really means. I think it's good to understand how the JIT compiler, the just-in-time compiler, understands your Java program as well. It's got a slightly different model of the program. We can reveal that by using some internals of the compiler. We can see how the compiler understands your Java program, and I think that can help us better understand what our Java programs are doing, if we're at the level where we're trying to look at performance in detail. We'll be thinking in more depth than bytecode. If you've heard of bytecode, we'll be starting there, but not in quite as much depth as machine code. I'm aiming to keep this all accessible. We'll be using diagrams to understand what the compiler is doing rather than a dry text representation, something like that. That should help keep it accessible, even if you're not sure what goes on beyond bytecode.

This talk is about knowing rather than guessing. I see a lot of people argue about what Java does, and the performance of Java, and what's fast and what isn't, and what Java can optimize and what it can't. I often see people guessing online and trying to guess what Java does. This talk is about knowing what Java does, and how we can use some tools to really understand how it's understanding your Java programs, and how it's optimizing them, rather than guessing based on what you've read online. It's about testing rather than hoping for the best. We can use some of the techniques I'm going to talk about in this talk to test the performance of Java applications. Again, rather than simply relying on what you think it should do, we can test how it actually optimizes. All of that is in order to get the performance we want. We're talking about contexts where we want high performance out of our Java applications: how do we do that, and how do we test it?

Graal

The first thing I've done is I went to graalvm.org, and I downloaded the GraalVM, which is the distribution of Java we're going to use to do these experiments. Go to the download link, and you can download the community edition for free. It's GPL licensed, so it's easy to use. Graal means a lot of different things. Unfortunately, it can be a little bit confusing. Different people use it to mean slightly different things. Sometimes people can talk past each other. Essentially, Graal is a compiler for Java that's written in Java. By that I mean it produces machine code from Java bytecode. I'm not talking about a compiler from Java source code to Java bytecode. It can be used as a just-in-time compiler for Java within the JVM, replacing something that is called opto or C2 within the HotSpot JVM, so it replaces that top-tier compiler with a different JIT compiler.

It can also be used to ahead-of-time compile Java code to a Native Image, so a standalone executable, which runs as if it had been compiled from C or C++, or something like that, and has no requirement for a JVM. It can also be used to compile other languages via a framework called Truffle. This is what TruffleRuby does. It compiles Ruby code to machine code via Java, using Graal as a just-in-time compiler. The reason it can do all these different things is because it's essentially a library for compilation. You can use that library in many different ways. You can use it to build a just-in-time compiler, or you can use it to build an ahead-of-time compiler. You could do other things with it as well. It's a library which you can use for different things. That's why it's one term that's used for doing so many different types of things. That's packaged up as something called the GraalVM. The GraalVM is a JVM with the Graal compiler, and with the Truffle functionality within it. That's what the GraalVM means. You may hear the term GraalVM compiler; that's the same as the Graal compiler.

I took GraalVM and I've put it on to my path. I'm going to do PATH=graalvm/Contents/Home/bin:$PATH, and that gives me Java on my command line path. Now I've got an example Java program here that has a simple class. It has a main method, which simply runs a loop, and it calls this method called test. What test does is simply add together two parameters and return the result. It's kept static to keep it nice and simple. The way I've set this up, with this loop, the purpose is to cause this method to be just-in-time compiled. It's an endless loop because I want the compilation to happen naturally; I don't want to force the compilation in any unusual way. The inputs to the method are two random values. I have a random source, and the random values go into the test routine. The reason I do that is because I want the program to not be static at compilation time, so I want real dynamic data flowing through it. That way the just-in-time compiler can't cleverly optimize anything away on the grounds that the data is actually static.
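To make the setup concrete, here's a minimal sketch of that driver program, reconstructed from the description (the exact source isn't shown in the talk):

    import java.util.Random;

    class Test {
        public static void main(String[] args) {
            Random random = new Random();
            // Endless loop, so the method becomes hot and is JIT compiled naturally
            while (true) {
                test(random.nextInt(), random.nextInt());
            }
        }

        // Private and static, to keep the example as simple as possible
        private static int test(int x, int y) {
            return x + y;
        }
    }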

Now we've got javac, which is our Java compiler, on our command line from GraalVM as normal. We can do javac Test.java like that. That converts our Java program to bytecode as you'd normally do. We have the source code, which is how we normally understand the program as human beings. We can read that and we can reason about it. There are more ways than that to understand your Java program, though. The first one you may be aware of is the abstract syntax tree. This is the representation that javac uses to understand your program. I'm using a plugin here for IntelliJ that allows you to see how the javac compiler understands your program. You can take an example source file like the one we have here, and you can use this parse button, which gives us an option to inspect. Then we can see how the javac compiler understands our source code. We have here a class, which is our test class. It tells us what comprises that. Then after that, we have a method declaration, which is our adding method. You can see it highlights the source code which corresponds to it: it's private and static, it has a name, it has a return type. Within that it has a block, which is the body, and that has a return statement. Then it has the binary operator, and within that, we can see it has x and y as its two operands. This is the abstract syntax tree, or the AST, which is the simplest representation the machine can use to understand your Java source code.

We already said we compiled to Java bytecode, so that means there's another representation we can use to understand our Java source code. I'm going to use javap: the command is javap -c Test. This will disassemble our Java bytecode from the class file. Because the method is private, you need to use -p to also get private members. What we have here is a representation of our adding routine test written as Java bytecode. We had the AST, which is how javac understands the program, and it produced this bytecode, which is what goes into the class file. It loads an integer, loads another integer, so 0 and 1 correspond to the two parameters. It adds them as integers and then it returns an integer. That's what it does: load them, add them, and return them. Nice and simple Java bytecode there.
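For a method like this the disassembly is short. It looks roughly like the following (annotated here with comments; javap itself doesn't print them):

    private static int test(int, int);
      Code:
         0: iload_0   // load the first parameter, x
         1: iload_1   // load the second parameter, y
         2: iadd      // add them as integers
         3: ireturn   // return the integer result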

When you run this Java program at runtime within HotSpot with the just-in-time compiler enabled, it converts it to machine code. We can see the output of that machine code using some special flags. What I'm going to do here is use this set of flags. What all these flags mean isn't particularly important. If you look at some blog posts, you can quickly see how to get machine code out. I'm going to simply run one of these. This tells us the machine code that the Java just-in-time compiler has produced from our Java code. It tells us what it's compiling. Test, there we go. This is the test method. This is the machine code it's produced that actually runs on your processor. There is an add operation in here. That is the actual add, which corresponds to the add we wrote in Java, but it's buried among some other stuff.
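The exact flag set isn't shown here, but a common combination, assuming you have the hsdis disassembler library installed so HotSpot can print real assembly, is something like:

    java -XX:+UnlockDiagnosticVMOptions -XX:+PrintAssembly Test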

JITWatch

There's quite a gulf here: we talked about the AST, and then the bytecode, and now we've jumped all the way to this low-level, hard to understand machine code, which we can't really use to understand our Java program. It's far too dense. This is a tiny method, and there's already quite a lot going on there. In this talk, what I'm going to do is address that gulf between the Java bytecode and the Java machine code. There are a couple of tools we can use to do this that exist already. One of them is called JITWatch. I'm running JITWatch here as an application in the background. It's a tool. What you do is use a flag called LogCompilation. I'm going to run our test program with that. It runs as before, but now it's producing an extra file of output, which we can interrogate to understand a bit more about what the JIT has done. I'm going to open the log that we just produced, and I will analyze it. There's our class, and it tells us there's a method in there which is just-in-time compiled. This tool is a bit better than the javap command line tool and the print disassembly we used, in that it gives us all of those together. It shows us the source code, the bytecode, and the machine code output. This add operation corresponds to this add operation in the bytecode. Then we said that this was where the actual add was, and we can see it's connected up; it shows us that's the actual add operation, all going together. This is a bit better. It shows us how these things link up. There's still somewhat of a gulf here, though: how is it getting from this bytecode to this machine code? That's what we're going to answer using the next tool.
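That flag is a diagnostic option, so it needs unlocking; the command is typically along these lines, and it writes an XML log file (hotspot_pid<pid>.log or similar) which JITWatch can then open:

    java -XX:+UnlockDiagnosticVMOptions -XX:+LogCompilation Test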

Seafoam

I'm going to use some more flags now. I'm going to add something called -Dgraal.Dump. What this does is ask the Graal JIT compiler to print out the data structures it uses to understand the compilation. The program runs as normal. After a while, I'll cancel it. Then we get an extra directory, which is this graal_dumps, which lists all the compilations which the JIT compiler has done. I'm going to use a tool here called Seafoam, which is a command line tool for reading these graphs. We've got a directory. I'm going to run Seafoam on this directory of graal_dumps. I'm looking for a HotSpot compilation, and these are all the things HotSpot has compiled, and we're looking for Test.test, so 172. I'm going to ask it to list all the graphs it dumped when it was compiling that method. This list is hard to understand, but these are all the phases the compiler runs. I'm going to simply jump in and get it to look at the graph after parsing. What does the code look like after it's been parsed? I'm going to say, I want you to render this. This is what Seafoam does. This prints out a compiler graph. This is the central idea of what this talk is about.
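The commands look roughly like this; the compilation ID (172 here) changes from run to run, <run> stands for the dump subdirectory, and the graph index for "after parsing" comes from the list output, so treat the details as illustrative:

    java -Dgraal.Dump=:2 Test

    # list the graphs dumped while compiling Test.test
    seafoam 'graal_dumps/<run>/HotSpotCompilation-172[Test.test(II)I].bgv' list

    # render the graph as it looks just after bytecode parsing
    seafoam 'graal_dumps/<run>/HotSpotCompilation-172[Test.test(II)I].bgv:0' render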

This is a graph. It's a data structure. It has edges, the arrows or lines, and it has nodes, the boxes. It's a flowchart, effectively. It tells us how the just-in-time compiler is understanding your Java program. What we have here in the center is an add operation, which is the add operation in our method, the key thing. What this graph is telling us is that there's input flowing from parameter 0, so the first parameter, and parameter 1, so the second parameter, which flow into the add operation as x and y. Then the result of the add operation flows out to be returned. There are also nodes which say where the method starts and where it ends; they're simply connected by one straight line, so there's no control flow going on. The green arrows represent data flowing. The red arrows, the thicker arrows, which we'll see more of later, represent control flowing through the program. The green oval boxes represent data sources. The green diamond boxes represent operations on data. The red rectangular boxes represent some decision being made or some control flow happening. You can see how this add operation all goes together.

Example: Understanding Java Programs

How can we use this to understand some Java programs? What can it show us about how Java understands your programs? Let's look at an example. We've got this add routine here. I'm going to expand it to have another parameter, so x, y, and z, and I'm going to introduce the extra addition here, so x + y + z. Then I'm going to run the program again. I have to compile it, because I've modified it, and then run it as before. Now we've got two add operations, and you can see the result of the first add operation flows into the input of the second operation. This is x + y, then plus z, the third parameter. Java has local variables. What do local variables mean for how the just-in-time compiler understands your program? Does using local variables make any difference to how your program is compiled? I've seen some people argue online that using local variables is slower than just writing the expression directly, because they think the compiler has to set a local variable somewhere. Let's look at what that actually looks like. I'm going to modify this code now to do int a = x + y, and then return a + z. We've got different Java source code now, but it achieves the same thing. Let's look at how the compiler understands that.
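As a sketch, the two versions look like this (testWithLocal is just a name for this illustration; both compile to the same graph):

    // expanded to three parameters
    private static int test(int x, int y, int z) {
        return x + y + z;
    }

    // the same computation via a local variable
    private static int testWithLocal(int x, int y, int z) {
        int a = x + y;
        return a + z;
    }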

I've compiled again and run again. We introduced a local variable, but you can't see any difference in the resulting graph. The result of x + y is now assigned to the local variable a, but that local variable doesn't appear in the graph. It's like the just-in-time compiler has forgotten about it entirely. What this edge here represents is the data flowing from the x + y add operation into the input of the add that adds it to z. It doesn't matter if that value was calculated and stored in a local variable, or if it was simply part of an expression; all the compiler cares about is where the data is flowing. There is a local variable here between node 5 and node 6, but the compiler doesn't care about that. It can ignore it and just know that this is where the data comes from, and this is where the data is going. We can see that we get exactly the same graph out of the program whether we use local variables or not. It doesn't make a difference to how the just-in-time compiler optimizes it. This is what I mean by using this tool to understand how the just-in-time compiler understands our program: we can change things in the program, and we can see what differences that actually makes to the just-in-time compiler, and why.

So far, the graphs have been pretty simple. I'm going to introduce some control flow now, so some if statements, things like that. I've got an example already set up, exampleIf. This method, exampleIf, has a condition, an x and a y. If the condition is true, it sets a to be x, otherwise it sets a to be y, and then it returns whichever one of those it was. We also have something in the middle which sets an int field to the value we're using. The reason we do that is to put a point in the program where there's some action taken, so we can see that action more easily in the graph, because sometimes the graphs get very compact very quickly, and it's hard to see what you're looking for. I'll run this program. I'll remove the graal_dumps, I think. ExampleIf, 182. What we have now is a graph that includes control flow. Before, the only red things, the only rectangular things, were start and end, but they come in now that we have control flow, such as a loop or an if. Now what we have is the first parameter, our condition, being compared with 0, 0 meaning false. If the condition is true, then we use x, otherwise we use y, and we can see the assignment to the field here. Then we can see the result comes from either x or y depending on which way we took the if. This is a special node called a phi node that says, take whichever value corresponds to the path the control flow took. We can see our control flow now has a divergence in it where it can go either way, just like our program. We can see now that the red, thicker arrows have a meaning for control flow.
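A sketch of that method, reconstructed from the description:

    static int intField;

    private static int exampleIf(boolean condition, int x, int y) {
        final int a;
        if (condition) {
            intField = x;   // the store makes this branch visible in the graph
            a = x;
        } else {
            intField = y;
            a = y;
        }
        return a;   // becomes a phi node choosing x or y
    }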

Now we can use this example to see a really interesting point about how Java optimizes your program. What I'm going to do is change this random Boolean that says whether we want to take the first branch or the second branch, and give it a constant value. I'm going to change it from random to always being false. This condition is always false now, so we're only ever going to use this branch. What do you think this is going to do to the way the Java just-in-time compiler understands your program? We see this pattern quite often in things like logging, for example. You may have a logging flag, which is off most of the time, or sometimes on, sometimes off. Does that add some overhead to the way your program is compiled? Let's try it out. 180. We've no longer got any control flow in our graph, but we had control flow in our source code. Where has it gone? What the compiler is saying is that it has never seen that value be anything apart from false, so it has gone ahead and just-in-time compiled your program assuming it's always going to be false. Because that value is coming in dynamically, it could change. So instead of an if node, it now has something called a Guard node, which says, check that the first parameter is still false, so the first parameter equals false; check that's true, then carry on assuming it's true. We have the StoreField, and it returns simply the first parameter. If it wasn't true that the value is false, then it does something called deoptimizing, where it jumps out of this compiled code and goes back into the interpreter. What we can see here is that the just-in-time compiler profiles what values you have flowing through your program, and uses that to change how the program is optimized. The benefit of this is that there's less code here now, because only one of the branches is compiled. Also, it's straight-line code. This Guard is written in such a way that the processor will know it's not likely to fail, therefore it can go ahead and execute the code after it while the Guard is still being checked. Here we can see the profiling going on and working in action.
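The change to the driving loop is just this (a sketch):

    while (true) {
        // the condition is now a constant false: the JIT profiles this,
        // compiles only the false branch, and protects that assumption
        // with a Guard which deoptimizes if a true value ever appears
        exampleIf(false, random.nextInt(), random.nextInt());
    }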

Example: JIT Compiler

I'll give you a more advanced example now of what we can see about what the just-in-time compiler is doing, using an example which looks at locks. I'm going to take an example here. I'm going to take the code which calls this. We don't need that anymore. What we have here now is a method called exampleDoubleSynchronized; it takes an object, and an x. We do still need the field. It synchronizes on the object once and writes to the field, and then it synchronizes on the object again and writes to the field. Why would you write code that synchronizes on an object twice, back-to-back like this? You probably wouldn't, but you may get this code after optimizations: if you call two synchronized methods back-to-back, you're effectively doing this. Or if you have code that inlines other code that uses synchronized locks, you may get them back-to-back like this. You may not write this manually, but it's the thing you may get automatically from the compiler. The driving code uses the same object for both locks, but it allocates a new one each time, and it passes in a random integer.
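A sketch of the method and its driving code, following the description:

    private static void exampleDoubleSynchronized(Object object, int x) {
        synchronized (object) {
            intField = x;
        }
        synchronized (object) {   // the same object, locked again immediately
            intField = x;
        }
    }

    // driver: the same object for both locks within a call,
    // but a fresh object and fresh random data each iteration
    exampleDoubleSynchronized(new Object(), random.nextInt());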

Let's compile this. I'll remove the graal_dumps first. 175. What we can see to start with is what we'd expect. We have straight-line code. For synchronized blocks, the lock object they use is called the monitor of the object. We take that object in as the first parameter, we enter the monitor of the object, and then we leave it, and in between we write the field; then we enter it again, write the field, and leave it. We can see here that we're locking the same object twice, which is wasteful. What I'm going to do now is look at a later phase of that same method being optimized, so I'm going to use the list command, which gives me all the phases which are run. I'm going to grep for lock elimination. We've got two graphs here, before lock elimination phase and after lock elimination phase, so that's 57 and 58. I'm going to render the graph again at stage of compilation 57. What's happened here is the program has already been optimized a bit. Some things have already been changed, and it's also been lowered, so some higher-level things have been rewritten as lower-level things. For example, we can't synchronize on the object if it's null, so a null check that was implicit has been inserted and made explicit here. We still have the MonitorEnter, the write to the field, the MonitorExit, then the MonitorEnter, the write to the field, and the MonitorExit.
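The commands for this step, roughly (paths abbreviated, and the graph indices 57 and 58 come from the list output of this particular run):

    seafoam 'graal_dumps/<run>/HotSpotCompilation-175[...].bgv' list | grep -i 'lock elimination'

    # 57: before the lock elimination phase
    seafoam 'graal_dumps/<run>/HotSpotCompilation-175[...].bgv:57' render

    # 58: after the lock elimination phase
    seafoam 'graal_dumps/<run>/HotSpotCompilation-175[...].bgv:58' render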

What I'm going to do now, though, is look at the same graph after something called the lock elimination phase has run. This is a compiler phase within Java's just-in-time compiler which is designed to improve our use of locks. This is at stage 58 now, just after that phase, and we can see what has gone on. What's happened is we now have just one MonitorEnter, we write both fields, and then one MonitorExit. What's going on here is that it has seen the two locks are next to each other, back-to-back, and it has said, I might as well combine them into one single lock. I might as well lock just once, do both things inside the block, and then release the lock. This is an optimization you may or may not have been aware was going on. Instead of debating whether Java is able to do this for our code or not, we can look at the graph and find out. We can do this as a manual process, as I've done here. I said, for this example code, I want to know whether the two locks are combined or not. I wanted to know, effectively, whether I was going to get this code out, which is what we have seen. I can test that. Because we're using command line tools, and we're using these files that come out of the compiler, we can also write a test to do this.

TruffleRuby

I work, in my day job at Shopify, on a system called TruffleRuby. TruffleRuby is a Ruby interpreter, an interpreter for the Ruby programming language. It's written in Java, and it runs on the JVM as a normal Java application if you want it to. It doesn't inherently require any special functionality. It uses the Truffle language implementation framework. This is a framework for implementing programming languages, produced by Oracle Labs. It can use the Graal compiler to just-in-time compile your interpreted language to machine code somewhat automatically. It uses a technique called partial evaluation. Instead of emitting bytecode at runtime and compiling that as if it came from Java, what it does is take your Java interpreter, apply a mathematical transformation to it together with your program, and produce machine code from that. It's capable of some really extraordinary optimizations thanks to Graal. It can inline very deep. It can constant fold through lots of metaprogramming, things like that, which is essential for optimizing the Ruby programming language, which is very dynamic.

This is how we actually test TruffleRuby at Shopify. The optimizations we care about are very important to us because they're very significant for our workloads. We have tests that check those optimizations are applied properly, and what they effectively do is automatically look at the graphs, as I'm doing here, but using a program. They check that the graph looks as expected. So here, you could query this graph and say, I expect to see only one MonitorEnter and one MonitorExit. The great thing about Java that people don't always remember when they try to guess what it does is, of course, that Java is open source; the compiler is open source. You can just go and look at how these things work. We can see here that this lock elimination phase has worked really well for us, and it's done what we would expect.

If you go to Graal on GitHub, you can look at how this works. We saw that the lock elimination phase did what we wanted, and we have a test for it. Here you go: lock elimination phase. This is the optimization which applied what we wanted. The great thing about Graal is that, because it's written in Java, you can jump in, and it's very readable. I'm not pretending that anyone can do compiler stuff, that anyone can work on compilers, but I think anyone who is familiar with Java and Java performance work can read this code and understand what's going on. This is a full production optimization phase. We're saying: for every MonitorExit node in the graph, so get all the MonitorExit nodes in the graph, look at the next node. If the next node is another enter, and if the two locks are compatible, so they're on the same object, then replace the exit with what follows the next enter, removing the pair. That's what it's done to our graph to be able to optimize it. There was an exit here, and it said, replace it with the next node after the next enter, which was this one right here.
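As a toy model of that idea, not Graal's actual API, the transformation is essentially this, sketched over a made-up list of nodes:

    import java.util.ArrayList;
    import java.util.List;

    class LockEliminationSketch {
        // A made-up node: an operation name plus the lock object it uses
        record Node(String op, Object lock) {}

        // Merge a MonitorExit immediately followed by a MonitorEnter on
        // the same object, fusing the two locked regions into one
        static List<Node> eliminateLocks(List<Node> nodes) {
            List<Node> result = new ArrayList<>(nodes);
            for (int i = 0; i + 1 < result.size(); i++) {
                Node exit = result.get(i);
                Node enter = result.get(i + 1);
                if (exit.op().equals("MonitorExit")
                        && enter.op().equals("MonitorEnter")
                        && exit.lock() == enter.lock()) {
                    result.remove(i + 1);   // drop the re-enter
                    result.remove(i);       // drop the exit
                    i--;                    // re-check this position
                }
            }
            return result;
        }
    }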

Summary

The point of all this is that we can get the compiler's representation of how it understands our programs out of the compiler. We can use that to gain a better understanding ourselves of what Java is doing with our programs. That means you don't have to guess at how your Java program is being optimized. You don't have to rely on working through the spec. You don't have to rely on hearsay that you see online about what Java might or might not do. You can check it yourself and see what it's doing. I think it's relatively accessible via these graphs, because you're looking at a visual representation, not having to pore through a log. You can simply see how it's transformed your program and understand what it's doing. Because these graphs come out of the compiler as files, we can also use them to test things. We can build tests by asking, does the graph look how we expect? Has it been compiled how we expect? I think these are some more options for understanding Java and for understanding how our Java code has been optimized, and for checking that it's been optimized as we expect, which makes it easier, I think, to get the performance we want out of our Java applications.

Resources

A lot of the detail here on how to understand these Graal graphs comes from a blog post, Understanding Basic Graal Graphs. If you look at that one, it'll give you a way to understand all the concepts you might see in a graph: what edges you see, what nodes you see, what normal language concepts compile to. You can get Graal from graalvm.org. You can get the Ruby implementation from there as well. The tool I'm using to look at graphs is something produced by Shopify called Seafoam. I also demonstrated JITWatch, and the Java parser plugin for IntelliJ which allows us to look at Java ASTs.

Questions and Answers

Ritter: I'm a big fan of understanding more about what the JIT does. It's very interesting to see what you're doing with the idea of the graphs and then getting JITWatch to expand out the information.

Seaton: I think a lot of people spend their time guessing at what Java does. There's a lot of myth and misinformation and old information out there. We can just check. I see people having arguments online, "Java does this, Java does that." Let's just go and take a look, and you can find out for real what is working on your code. You can even write automated tests to figure out what it's doing for real by looking at these graphs.

Ritter: Yes. Because as you say, if you put a local variable in, does it actually really get produced as a local variable? Is that like escape analysis? Because you're not actually using that variable outside of the method, or the result outside of the method. Is it related to escape analysis, or is that just simply optimization?

Seaton: No, it happens in a different phase. What it does is it says, every value that's produced in the program, every expression in the source program, is given a number. Every time you refer to that expression, it uses the same number. It's called global value numbering. If an expression has gone through a local variable, it still has the same number as if you had written it there directly, so as far as the compiler is concerned, it's exactly the same thing. This is also why, if you write a + b twice, independently, they're the same expression, so the compiler says, I'll give them the same number, and it'll only be computed once. Equally, people think, I've got a + b twice here, I'll put in a local variable and use that. Does that make it faster? No, it doesn't, because it's exactly the same thing. There are still readability reasons. It's important to say that making your code readable is a separate concern, and that's a very human thing. It's important to understand how the compiler actually understands your code and what it actually does.
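For example, these two sketches compile to the same graph; the repeated x + y gets one value number and is only computed once (the method names are just for illustration):

    static int direct(int x, int y) {
        // both occurrences of x + y get the same value number
        return (x + y) * (x + y);
    }

    static int viaLocal(int x, int y) {
        int sum = x + y;   // the same value number as above
        return sum * sum;
    }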

Ritter: Because I remember from my days, many years ago, doing C programming: do you make a variable a register, and what impact does that have on whether it improves the performance or not?

Seaton: Yes. It's historic, and it doesn't really mean anything anymore.

Ritter: Yes, because then they ended up with register-register. It's like, what?

The other thing I really liked you explaining about was how the code can be optimized based on previous profiling. I talk a lot about that with the stuff we do, speculative optimizations, which is the same approach as what you were describing there.

Seaton: Again, these graphs allow you to see what's happening there. There's more information in the graphs than is visible by default, because the tool I use tries to show enough to be reasonably useful without presenting an avalanche of data. One of the properties you can find on a graph is the branch probabilities. You look at the graph and you can see which path is more likely to be taken than the other. You can see if a path is never taken, or always taken, or taken 25% of the time. You can use that information to understand your program. The compiler uses it in different ways. People often think it only uses it for binary decisions: if a branch has been taken, compile it; if it's never been taken, don't. You may wonder, why does it collect more profile information than that? Why is it collecting fine-grained information? It actually has a float for the probability, so quite a lot of precision. The reason for that is that the register allocator will try to keep values live in registers for longer on the more common paths, or the most common paths. So it's worth gathering that more detailed information. Obviously, these are last 1% optimizations rather than the most important things in the world.

Ritter: That's the thing I always find interesting, because, obviously, you've worked on the Graal project, and Graal has become very popular recently because of the idea of Native Images and ahead-of-time compilation. I get that that's very good from the point of view of startup time, because you're immediately running native code, so you don't have to warm up. But with JIT compilation you can do things like speculative optimizations more. You can do profile-guided optimizations with Graal, but you can also do proper speculative optimizations and, as you said, deoptimize if need be. You can get that slightly higher level of performance through JIT compilation.

Seaton: Again, graphs are great for seeing this. The same tool can be used for Native Image. If you want to understand how your Native Image programs are being compiled, you can dump out graphs in a similar way. If you look at the Seafoam repository, there are commands for using Native Image as well. If we looked at some realistic example Java code, we'd be able to see that the Native Image graph is actually more complicated. Why is that? It's because the JIT was able to cut things off: no, this wasn't needed, get rid of that, and so on, and end up with simpler code. Because it's the same tool and it works for both, you can look at them side by side and see where the Native Image had to do more stuff to keep it going. It's a common misconception that Native Image will always be faster. Maybe in some cases it will get faster peak performance in the future; it may get there. Yes, you're right, it's a tool for startup and warmup.

Recorded at: Mar 17, 2023
