
The Rust Borrow Checker - a Deep Dive


Summary

Nell Shamrell-Harrington discusses how to transition from fighting the borrow checker to using its guidance to write safer and more powerful code at any experience level.

Bio

Nell Shamrell-Harrington is a principal software engineer at Microsoft in the Open Source Programs Office. She is also a member of several Rust language teams and working groups, as well as the lead editor of This Week in Rust. Previously, she worked at Mozilla and Chef Software. In her spare time, she is a member of the board of Operation Code.

About the conference

QCon Plus is a virtual conference for senior software engineers and architects that covers the trends, best practices, and solutions leveraged by the world's most innovative software organizations.

Transcript

Shamrell-Harrington: When I work with someone who is just learning Rust, they sometimes question whether the Rust borrow checker is their friend or their foe. If you take a look at the Rust subreddit, it's common to see posts with headlines like, "Newbie question regarding the borrow checker. Help fighting the borrow checker. Does it ever get easier fighting with the borrow checker?" However, as people gain more experience with Rust, they do tend to come around to the borrow checker and realize what it protects them from doing. In answer to the question, is the borrow checker a friend or a foe, I say the borrow checker will become your friend through experience. Along with gaining experience with it, it is also very helpful to understand how it works and why it does the things it does. We're going to dive deep into that.

Background & Outline

I'm Nell Shamrell-Harrington. I am a Principal Software Engineer at Microsoft. I'm also a board director on the Rust Foundation Board of Directors. I am the lead editor of This Week in Rust.

To set expectations, we will first do a bit of background on the Rust compiler in order to put the borrow checker in context. Then we will do a deep dive on the borrow checker itself.

Overview of the Rust Compiler

Let's go ahead and start with an overview of the Rust compiler. Let's take a look at a code example. This Rust function declares a vector composed of the integers 1, 2, 3, 4, and 5. After it declares this vector, it then uses a for loop to iterate through each integer in the vector and print it out on a new line. Let's go ahead and run this example using cargo. As expected, we see the numbers 1 through 5 printed out on the screen. This seems pretty simple. Cargo builds and runs this piece of code for us, but there is a lot that happens underneath the surface in the compiler when cargo is building it.
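For reference, a minimal version of the code being described (the identifiers here are my own; the slide's exact names may differ) looks like this:

    fn main() {
        // Declare a vector containing the integers 1 through 5.
        let numbers = vec![1, 2, 3, 4, 5];

        // Iterate through the vector and print each integer on its own line.
        for number in numbers {
            println!("{}", number);
        }
    }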

Stages of Compilation

There are five general stages to compiling a piece of code. It starts with a lexical analysis of the code. Then parsing of the code. Semantic analysis of the code. This is where the borrow checker comes in. Optimization of the code. Finally, code generation, where the compiler creates the executable binary of our code. When we look at the stages laid out in a list like this, it seems like they will run linearly, and in some compilers they do. However, if you've delved at all into the Rust compiler internals, you might be thinking, isn't the Rust compiler at least partially query-based rather than linear? The answer is yes. For the sake of clarity, I'll speak to the internals of the compiler as if they were functioning linearly. However, if you want to delve more into how the Rust compiler is query-based, and what that means, check out the Guide to Rustc Development for more information. This guide has been such a big help to me as I've learned how to hack on the Rust compiler.

Lexical Analysis

Going back to the stages of compilation, let's start with the first one, lexical analysis. During lexical analysis, a program within the compiler called a lexer takes the raw Rust source code, breaks it into small units called lexemes, and converts them into tokens to make it easier for the compiler to parse.

Parsing

That brings us to the next stage of compilation, called parsing. In this stage, a program within the compiler called a parser analyzes the tokens generated by the lexer and translates them into an abstract syntax tree, or AST. Having the tokens in this data structure makes it much quicker and easier for the compiler to do the rest of its work. At this stage, before it moves on to the next stage of compilation, the Rust compiler takes the abstract syntax tree generated by the parser and first expands any macros included in the code. Looking back at the code we're compiling, if we take a closer look at this line here, println! is actually a macro, which means at this stage in compilation this line will be expanded to this. This is what the full println! macro looks like when it's expanded, and this is how it will be represented in the abstract syntax tree.

At this point, the compiler also desugars some of the syntactic sugar that makes Rust so delightful to write. For example, in Rust, the for loop is a piece of syntactic sugar for an iterator. If we were to desugar this section of code, it would consist of both a match statement and a loop. The functionality of this code is identical to the code we originally compiled, but desugaring it makes it easier for the compiler to understand and to optimize. At this time, the compiler also resolves any imports in the code. If you are bringing in an external crate or using internal crates or modules, these will be resolved at this point as well.
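As a rough sketch (my own hand-written approximation, not the compiler's exact output, which varies between versions), the desugared for loop looks something like this:

    // Approximation of what `for number in numbers { ... }` desugars into.
    let result = match IntoIterator::into_iter(numbers) {
        mut iter => loop {
            // Each pass through the loop matches on the iterator's next item.
            match iter.next() {
                Some(number) => {
                    println!("{}", number);
                }
                None => break,
            }
        },
    };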

After these steps, the compiler takes the abstract syntax tree and converts, or lowers, that AST into a high-level intermediate representation, or HIR. Let's pause here for a moment and take a closer look. It helps to understand the data structures that make up the HIR. The first data structure is a node. This corresponds to a specific piece of code and is identified by an HirId. That node belongs to a definition. A definition is an item in the crate we are compiling; these are primarily top-level items within the crate. This is identified by a DefId, and a definition is owned by a crate. This is the crate we are compiling with Rust. It stores the contents of the crate we are compiling and contains a number of maps and other things that help organize the content for easier access throughout the compilation process. This crate is identified with a CrateNum. This is the original Rust source code we are compiling.

Let's take a closer look at this section here, the for loop that iterates through the numbers vector. This for loop desugars into a match statement and a loop. If we look at the node in the HIR that represents this match statement, we would see something similar to this. This is a little hard to read for humans, so let's break it down. This Arm struct represents a single arm of the match statement that our for loop desugared into. First we have the HirId for this piece of code. This identifies a node within the HIR, or the high-level intermediate representation. That node is owned by a definition. The definition is some top-level item in the crate. That definition data structure is owned by the crate data structure. Our for loop is a node within a definition within our crate. That's how we can identify where this node corresponds to in our original code.
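Purely for illustration (these are not the real rustc definitions, just a simplified sketch of how the identifiers nest), the relationship between a crate, a definition, and a node could be modeled like this:

    // Hypothetical, simplified types; the actual definitions in rustc are
    // more involved, but the nesting is the idea being described.
    struct CrateNum(u32);                         // identifies the crate being compiled
    struct DefId { krate: CrateNum, index: u32 }  // identifies a definition within that crate
    struct HirId { owner: DefId, local_id: u32 }  // identifies a node within that definition

    // A node such as a match arm carries its HirId plus a span that points
    // back at the original source code.
    struct ArmNode {
        hir_id: HirId,
        span: (String, u32, u32),  // file path, line, column
    }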

What also helps us do that is what is called a span. This span stores the file path, line numbers, and column numbers of the original source code. This will be very important later as we continue to optimize and desugar the Rust code. If we encounter a problem with the code after it is desugared and optimized, we still want to be able to show the user where in their original source code the error was generated. If we were to show them the desugared code, which is different from the code they wrote, it wouldn't mean that much to them. They want to know which part of the code they wrote caused the error. The compiler then takes the HIR, the high-level intermediate representation, and lowers it into the mid-level intermediate representation, also known as the MIR.

The MIR is constructed as a control-flow graph. The units within this graph are called basic blocks, which are identified with values like bb0 and bb1. Each block has a sequence of statements that execute in order. The very last statement in a basic block is known as a terminator. This controls when and how the program proceeds to another basic block. This is a pretty simple example; there's only one direction the basic blocks can go in this particular case. Let's consider if our code had an if-else statement. Then the terminator of bb0 would have the option to either proceed to bb1 or to bb2. In this case, there is more than one path that the program can take when it encounters that terminator in bb0. There are definitely more data structures involved in the MIR. If you're curious about them, definitely check out the Guide to Rustc Development.
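As a small illustration (my own example, not from the talk), here is Rust source whose MIR would branch: the terminator of the entry block would switch on the condition and jump to one of two other blocks.

    // The if-else below compiles to MIR in which the terminator of the entry
    // block (roughly bb0) branches to one of two blocks (roughly bb1 and bb2).
    fn describe(condition: bool) -> &'static str {
        if condition {
            "took the first branch"
        } else {
            "took the second branch"
        }
    }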

Let's go back to our desugared code, and let's focus on this match statement, which is assigned to a variable called result. If we looked at the MIR for this piece of code, it would look similar to this. I have simplified this a bit for the sake of fitting it on a slide. Up here on the top left, we have our basic block, identified as bb2. Then we have what is called a local. A local in the MIR represents a place in memory, or more specifically, a place on the stack frame. In this case, _5 corresponds to the value of the variable result. As with the nodes in the HIR, we have a span, the piece of the original Rust source code that each node in the MIR corresponds to. Again, if we encounter an error while we're operating within the MIR, we can still easily refer to what lines in the original source code caused the error.

Semantic Analysis

That brings us to the third stage, semantic analysis. This is big in the Rust compiler. This is where the compiler tries to figure out what the programmer is trying to do in a way the compiler can understand it and then translate it into machine code. At this point, after it's lowered the HIR into the MIR, the compiler will run several checks, including the borrow checker.

Optimization and Code Generation

Let's focus on the last two stages of compilation, optimization and code generation. These stages are where the code is transformed into an executable binary. In the Rust compiler we use LLVM to do this for us. LLVM is a commonly used collection of modular and reusable compiler and toolchain technologies. The Rust compiler uses it to further optimize the code and generate the machine code to run it. Before it uses LLVM, however, the Rust compiler takes the MIR, the mid-level intermediate representation we created earlier, and lowers it again into the LLVM intermediate representation, or LLVM IR. That LLVM IR is pretty unreadable to humans, but it looks something like this. You can see that there are still basic blocks organizing it. We then pass the LLVM IR to LLVM, which runs more optimizations on it and emits machine code. It then links those machine code files together to produce the final binary. That's why when we call cargo run, we see these numbers printed out. That gives you a bit of an idea of how the compiler works to take your Rust source code and make it into something a computer can execute.

Borrow Checker

I do want to go back and take a deeper look at the borrow checker. In order to do this, let's use a different piece of code. If you have some experience in Rust, and you're looking at this code right now thinking it will error out, you are right. We'll see how and why that happens in just a moment. First, let's go through this code line by line. We declare the variable x and give it the type String, and then we set the value of x to the string "Hi QCon". Then we say the variable y's value is equal to the value of x. Then we attempt to print both variables. Let's see what happens when we try to build this using cargo. When we try to build it using cargo build, we get an error, and this error is a result of the borrow checker.
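A minimal reconstruction of that example (the slide's exact wording may differ slightly):

    fn main() {
        // Declare x as a String and give it a value.
        let x: String = String::from("Hi QCon");

        // Assign the value of x to y. Because String is not Copy, this moves
        // the value out of x.
        let y = x;

        // Attempting to use x here fails: its value has been moved into y.
        println!("{}", x);
        println!("{}", y);
    }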

Tracking Initializations and Moves

Let's go through how the borrow checker identified this error. The borrow checker does several things, including tracking initializations and moves. How this plays out in our code is that we start with this first line, where we declare the variable x with a type of String. x is not actually initialized yet. It won't be considered initialized until it is assigned a value. If we looked at the MIR for this line of code, we would see that x is represented by the local _1. Local _1 is assigned the type String. Now let's look at this line of code where we create and assign the "Hi QCon" string. Now that x has a value, it is considered initialized at this point. We create a new place in memory, local _2, where we store the "Hi QCon" string. Then we move the value stored at _2 to _1. _1 corresponds to the x variable. We have created the string in memory and moved it to be the value of x.
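Written out explicitly (a sketch; whether the declaration and assignment are written as one statement or two, the compiler's view separates declaring the local from initializing it):

    let x: String;                // x (the local _1 in the MIR view) is declared but not yet initialized
    x = String::from("Hi QCon");  // the String is built in a temporary (think _2) and moved into _1;
                                  // only now does the borrow checker consider x initialized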

Let's look at this line where we attempt to create the variable y and assign it the value of x. This line is where the value of x is moved to y. If we look at the MIR created for this line of code, we see that y is assigned to the local _3. A local represents a place in memory. _3 is given the type String. Then the value at _1, which represents x, is moved to _3, which represents y. That means when we get to here, where we try to print out x, x is no longer initialized at this line, so we cannot print it. That's what generates this particular error. We're attempting to use the value of a variable after that value has been moved.
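For reference, the diagnostic for this case has roughly the following shape (exact wording and spans vary between compiler versions):

    error[E0382]: borrow of moved value: `x`
    ...
    For more information about this error, try `rustc --explain E0382`.

The notes in between point at the line where the value was moved and the line where it was used afterwards.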

Something I'd like to specifically call out is how the compiler shows, through a span, where in the original source code the error was generated. Even though we had lowered this code to MIR, the mid-level intermediate representation, we still tracked which items in the MIR corresponded to which places in the original Rust code. This is very helpful to the end user. What is also helpful is this: the Rust compiler not only tells you what the error is and where it is, it gives you a command to get even more information about the error so you can fix it. If we run this command, we see not only an explanation of the error, but also a piece of example code that would generate it. Then the message gives you even more information about how to fix it. This suggests using a reference to borrow a value rather than attempting to move the value, which is what we saw done in the MIR representation of our code. Let's do this, and change this line so that y is assigned a reference to the value of x rather than moving the value of x into y. Now let's run it using cargo run. Once the compiler builds the code and executes it, we see the message, Hi QCon, printed out twice. Rather than fighting the borrow checker, we used it to make our code even better.
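The corrected version (again a reconstruction, assuming the same variable names):

    fn main() {
        let x: String = String::from("Hi QCon");

        // Borrow the value of x instead of moving it.
        let y = &x;

        // Both lines print "Hi QCon"; x still owns the String, y only borrows it.
        println!("{}", x);
        println!("{}", y);
    }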

Lifetime Inference

Along with tracking initializations and moves, the borrow checker also deals with lifetime inference. Let's go over what that means. Rust uses the word lifetime in two distinct ways. The first is to refer to the lifetime of a value. That's the span of time before the value gets freed. Another way of referring to the lifetime of a value is referring to the variable's scope. Let's see how this plays out in our code. Let's start with this first line where we assign the value of x as this string. At this point x is live; its lifetime begins here. When we get to here and move the value of x into y, this is the end of x's lifetime. When we get down here and try to use x again, x is dead. Its lifetime is no longer in effect, which is why this program will error.
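Annotated in code (a sketch; the comments mark the spans being described):

    let x = String::from("Hi QCon");  // x is live: the value's lifetime begins here
    let y = x;                        // the value moves into y: x's lifetime ends here
    println!("{}", x);                // error: x is dead at this point
    println!("{}", y);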

The other way Rust uses the term lifetime is to refer to the lifetime of a reference to a value. This is the span of code in which the reference can be used. Looking back at our corrected code, where we assigned the value of y to be a reference to the value of x, if we look at the MIR for this line of code, we remember that the local _1 refers to x, and the local _3 refers to y. Then we see that the _3 local is assigned a reference to the value of _1. Looking back at this code, let's alter it slightly again. Let's try to drop the value of the variable x before we try to print out the value of y. When we try to build this code, we get an error. We can't drop x at that point because it is borrowed, and that borrow is used again later, after that point. The borrow checker tells us that x, because it is referenced by y, needs to stay alive for at least as long as y needs to stay alive. x's lifetime must be greater than or equal to y's.
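A reconstruction of the altered example (assuming the same variable names as before):

    fn main() {
        let x: String = String::from("Hi QCon");
        let y = &x;         // y borrows x, so x must stay alive for as long as y is used

        drop(x);            // error: cannot move out of x because it is borrowed

        println!("{}", y);  // the borrow is used here, after the attempted drop
    }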

Looking at this in code, again, this is x's lifetime. This is what needs to be the lifetime of y, which is a reference to x. Notice that even though they overlap, x's lifetime ends before y's lifetime is supposed to end. y can no longer reference the value of x after this line. At this point, when we try to print it out, y, along with x, would be dead. In order for this code to compile, the lifetime of x must last at least as long as the lifetime of y. The overarching way the scope and the lifetime of a reference relate to each other is that if you make a reference to a value, the lifetime of that reference cannot outlive the scope of the value.

Conclusion

There's so much more to the Rust compiler and the borrow checker. If you want to know more, check out the Guide to Rustc Development. It's a great resource and extremely helpful. Going back to this question from the beginning, is the borrow checker a friend or a foe? I say it's a friend, though a very strict one. The best thing about this friend is it will not only tell you when something is wrong, it will also tell you how to fix it. I find that to be one of the best qualities I can find in a friend, as well as one of the best qualities I can find in a compiler.

Questions and Answers

Synodinos: Would you like to even briefly describe some of the fundamental design decisions behind Rust as a language that make it so different from traditional languages with pointers, concepts like immutability, for example?

Shamrell-Harrington: Rust is interesting. One of the goals of Rust when it was originally created was to create a memory safe systems programming language. Prior to Rust, for most languages, you had two basic choices. You could go with a low level systems programming language like C or C++, but in those languages you have to manually manage your memory. Or you could go with a higher level language like Ruby, or JavaScript, or Python that has a garbage collector that clears out the memory after it is used. It manages it for you. The problem with a garbage collector is it adds overhead. It's not great for those low level programs that are working with very limited resources. That's why Rust was created. It came out of Mozilla originally; I worked there last year on the Rust language. One of the Rust compiler engineers said this once: in C++ a feature is considered done if you can do it right. In Rust, it's considered done if there's no way to do it wrong. We wanted a way for developers to write very safe code and have the compiler do a lot of the heavy lifting of checking memory for them.

A question I sometimes get is, shouldn't you just hire C++ developers who know what they're doing? Number one, that's rude. Number two, at Microsoft, we have code bases that are hundreds of millions of lines of code. Managing all that memory manually, particularly when someone comes into an area they're not familiar with, or no one has touched for a long time, it's too much to expect people to manage that manually. There's too much chance for inadvertent human errors. The Rust compiler prevents you from doing those memory unsafe things.

Synodinos: You touched on immutability during your presentation. It can be confusing for people coming from traditional programming languages, most traditional programming languages, actually. Are there any other idioms or characteristics of Rust that really throw off newcomers?

Shamrell-Harrington: Handling strings is a little weird in Rust. You can get started with it easily, but understanding when you use a String and when you use a reference to a string is confusing at first. There are lots of great articles out there on how to do it. That is something I struggled with initially because I came from a Ruby background. It was very different. Also, coming from a Ruby background, even though I had done Java in the past, being forced to make sure there's no doubt what type my variable is. Sometimes the Rust compiler can infer what type the variable is. Oftentimes, you have to explicitly declare it, which took some getting used to, but I feel pretty good with it now.
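As a small illustration of the String versus string-reference distinction mentioned here (my own sketch, not from the talk):

    // A function that only needs to read text can take &str, which accepts
    // both string literals and borrowed Strings.
    fn shout(text: &str) -> String {
        text.to_uppercase()
    }

    fn main() {
        let owned: String = String::from("hello");  // owned, heap-allocated string
        let borrowed: &str = "world";               // string literal, already a &str

        println!("{}", shout(&owned));  // &String coerces to &str
        println!("{}", shout(borrowed));
    }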

I used to be a pure Vim developer, but now I use Visual Studio Code, and not just because I work at Microsoft. Visual Studio Code has an integration with rust-analyzer. That is fantastic for highlighting, before I even compile the code, little areas where I may have forgotten to set the type of a variable. If you're a pure Vim user or pure emacs user, I totally understand that's what you're comfortable with. If you're just getting started with Rust, it's nice to have those little helpers within the editor. There are integrations for other editors as well.

Synodinos: What is a crate?

Shamrell-Harrington: A crate is a Rust library. It's where you would put all of your Rust code to compile and run together. A package is sometimes multiple Rust crates that all work together as one. It's like a gem in Ruby or an npm package in Node. If you want to explore crates the Rust community has created, head on over to crates.io. It's a way to distribute Rust code.

Synodinos: During the example where you were dropping x, what if we move x after assigning y as a reference to x?

Shamrell-Harrington: I believe the compiler will error out in that case. I'm not 100% sure on that. I'm going to have to try that out in my little editor.
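For reference, the scenario being asked about would look roughly like this (my own sketch; as the answer notes, worth verifying in an editor). If y is used after the move, the borrow checker is expected to reject it:

    let x = String::from("Hi QCon");
    let y = &x;          // y borrows x
    let z = x;           // attempt to move x while it is still borrowed
    println!("{}", y);   // the borrow is used here, so the move above is an error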

C is faster to compile. The borrow checker keeps you from doing unsafe things, but there is overhead in running it. That overhead is at compile time, not at runtime. I think runtime performance is much more comparable between Rust and C, though I'll have to double check on that. Rust does take longer to compile. There's a project where the idea is that you run the borrow checker at CI time when you are checking the code, and then you rebuild it for production without the borrow checker. That is a faster build. It's a give and take. Rust offers a lot of benefits, and it does add some overhead, and there are some ways around it. Right now Rust compile times are longer than C's.

Synodinos: What about adoption? Many of my friends with a background in C or C++ got really excited about Rust a few years ago. A lot of smart people that I'm following seem to have a lot of interest in Rust. What does adoption look like? Are there any really big projects, really good use cases, or even code bases that people can open up and see how smart people are using Rust and what the right way to use it is?

Shamrell-Harrington: Crates.io is a great place to browse for the various crates that people have created. Also, the source code repo for crates.io is at github.com/rust-lang/crates.io. That is a great case of Rust being used in a web application. If you are used to web code and an MVC-style framework, that one is a great one to browse. As for adoption, Rust has been named the most loved language on Stack Overflow for the past five years in a row. With the establishment of the Rust Foundation, which I'm on the board of, the founding sponsors include Microsoft, AWS, Mozilla, Google, Huawei, and we just added Facebook. These big tech companies, a bunch of the letters in the FAANG acronym, are adopting Rust and using it. I feel like it's hitting a critical point right now in terms of adoption. People have loved it, but now they're using it more in production.

Synodinos: In the case of C++, when we use templatized code and an error happens, it prints out the whole backtrace. Depending on the templatized code, it can get complicated to find the error. Does the borrow checker provide an improvement in these areas?

Shamrell-Harrington: I believe it does, because it shows you exactly where in the code the error was generated. I have used C++ templates myself and had to troubleshoot them, which is very difficult. The Rust compiler in general, and the borrow checker in particular, provide a better guide to exactly where in your original code the error was generated. Also, they give you some helpful information on how to fix it. That's one of my favorite things about the Rust compiler. It's not just the thing checking you. It's also a guide as you're learning to write Rust, and it guides you through doing it in a safe way.

Synodinos: Where does Rust fit in the current myriad of languages out there?

Shamrell-Harrington: There are a lot of languages out there. Rust fits in where someone wants to know that their code is memory safe at compile time, but does not want the overhead of a garbage collector. That's really its sweet spot. That's one of the original reasons it was created. It was originally used for low level systems programming; now it's being used in embedded programming, game programming, web programming, a whole bunch of places. I think people like it because the compiler is that guide, guiding you in the right way to write Rust, not just telling you that you did it wrong and spitting out a long trace.

Synodinos: Do you convert from C++ to Rust with Windows?

Shamrell-Harrington: You can write Rust natively on Windows, and Microsoft has been helping out with being able to hook into Windows APIs using Rust. If you're asking whether we're taking the Windows source code and converting all that C++ to Rust, not all of it. There are parts that are being rewritten in Rust or new parts that are being written in Rust, but that is a giant, gnarly code base. I don't know if it'll ever be fully rewritten in Rust, but there are parts that are being written in it.

Synodinos: That's super impressive if parts of Windows are getting rewritten in Rust.

 


Recorded at: Apr 15, 2022
