
How Rust Views Tradeoffs


Summary

Stephen Klabnik takes a look at some tradeoffs in the design of Rust, and how that makes it more suitable for some kinds of projects than others. In particular, he talks about Rust's "bend the curve" philosophy towards tradeoffs.

Bio

Stephen Klabnik is on the core team of Rust, leads the documentation team, and is an author of "The Rust Programming Language." He is a frequent speaker at conferences and is a prolific open source contributor, previously working on projects such as Ruby and Ruby on Rails.

About the conference

Software is changing the world. QCon empowers software development by facilitating the spread of knowledge and innovation in the developer community. A practitioner-driven conference, QCon is designed for technical team leads, architects, engineering directors, and project managers who influence innovation in their teams.

Transcript

Klabnik: I'm Steve, and this is a talk called "How Rust Views Tradeoffs." I am on the Rust core team. I'm currently in between jobs. I used to work at Mozilla on Rust, but have recently left and I don't know exactly what I'm doing yet. We'll see what happens, but I'm still going to be heavily involved in Rust development even though it's not my day job anymore.

This is a little overview of what I'm going to be talking about today. This is the “Choose the right language for the job” track, so I wanted to talk about how Rust views tradeoffs, what would make you choose Rust or not, and also just some ideas for when you think about tradeoffs. We're going to start off by talking about what a tradeoff even is in the first place, and then we're going to talk about Rust's "bending the curve" philosophy, which is how Rust approaches thinking about tradeoffs. We've got to talk about that first, and then we're going to go into a small digression about how things get designed in the first place and this concept of values. Then lastly, we're going to go over four different case studies in areas where Rust had to choose some tradeoff, and why we picked the thing that we did and not the thing that we didn't.

Briefly, before I get into the rest of the talk, I want to thank Bryan Cantrill. A large part of this talk is based on a framework that he came up with. There's this talk of his called "Platform as a Reflection of Values" from Node Summit 2017, and it's about the story of Joyent and Node.js and io.js and all that stuff. He's the one that really got me thinking about a lot of the stuff that ends up being in this talk, so I wanted to make sure to give him credit. You should watch his talk, it's good.

What Is A Tradeoff?

Before we talk about tradeoffs, it's important that we all agree on what a tradeoff actually is. I am an amateur philosophy enthusiast, and one of the hardest problems when communicating with others is making sure that you're not talking past each other. You have to agree on what words mean by using words, which is complicated. We're going to get into what tradeoffs are before we talk about the specific tradeoffs in Rust.

Everyone always does this, and it's boring and dumb, and I apologize, but the dictionary defines tradeoff as, and this is a little more interesting, I swear, "a balance achieved between two desirable but incompatible features; a compromise." The first thing I think is interesting is the example sentence: "a trade-off between objectivity and relevance." That's an interesting example of a tradeoff: you can be objective or you can be relevant, because you have to be subjective to apply to reality. I'm not sure I agree with that, but the reason I decided to put this on the slide is not the definition itself, because "the dictionary says" is such a dumb meme. It's the thing right below it on the webpage if you Google this, which I thought was really interesting: this use-over-time graph. I was like, "Wait a minute, just after 1950 is the first time we thought of the concept of tradeoffs?" I went down a deep rabbit hole. You know how it goes: you're supposed to be working on your slides, and instead you're like, "I need to look up the linguistics of the word tradeoff." It turns out the reason the graph looks like that is not that the concept of a tradeoff started in the early 1960s; it's that it used to be written as two words, "trade off," and we only started putting them together into one word around the 1960s. Obviously, people made tradeoffs and had these discussions earlier. I just thought that was interesting, and that's why the dictionary thing is still up there: language changes over time, and it is cool.

I got a bachelor's in computer science, and one of the things that gets beaten into you, at least in the States, is "there are tradeoffs and you have to deal with them." Here are three examples of classic tradeoffs in computer science or programming that you might have dealt with before. One of the biggest ones is space versus time; if you google "computer science tradeoff," everyone's like, "That's the space-time tradeoff." It's the vast majority of things on the web, apparently. The basic idea is that you can either make something fast or you can make it small, and these two things are at odds with each other, so that's a common situation when you're designing data structures or talking about the network. The second one is throughput versus latency, which are two different measures of network activity. Throughput is how much stuff you can do in a given span of time, and latency is how long it takes you to do the thing. These are two things where you can often get good at one but not the other.

Then, finally, a big classic one: dynamic types versus static types. I wrote "Ruby or Haskell" because I was trying to think of the most trolly way to describe this dichotomy ever. I actually have a ruby tattooed on my body; I used to do Ruby before Rust, and I was a very dynamic typing enthusiast. For the last couple of years I've been going around saying, "I have a ruby tattooed on my body. That's how much I love Ruby." Then my friend Sean Griffin had his first-born daughter, and he named her Ruby. He one-upped me: "You think your tattoo is going to be the biggest thing?" I was like, "Fine, Sean. You win."

Dynamic versus static typing and tradeoffs. These tradeoffs are great and complicated, and they're a core aspect of our discipline. They're also core to another aspect of our discipline, which is arguing on the Internet. People argue over which side of these tradeoffs is the right thing to choose, and in some ways that's dumb, but I also think tradeoffs are the core of what we do. If we think about programming as an engineering task, fundamentally, you have to make choices between things sometimes. That's important.

Bending the Curve

Rust has this interesting approach to tradeoffs that we call "bending the curve." This is an attitude that got instilled in the Rust developers fairly early on. I'm not really sure who started trying to think about things this way, but it's a way that we approach the problem of "You have these two things. Which did you pick?" Let's talk about that a little bit. When I was making these slides, I felt very clever, because I looked at these three tradeoffs and I was like, "Wait a minute, one of these tradeoffs is actually different from the other two." I don't know if you have any ideas about which one might be the different one here, but I think I made a slide. Since I don't have my laptop, it's totally blank, so we're going to have a little bit of fun. Did I guess my next slide correctly? Yes.

Throughput versus latency. This is often a function of physics, but that doesn't necessarily mean it's always a tradeoff. You can sometimes have systems that have more throughput and less latency than a different system. It's not always inherently a tradeoff. At some point physics and wires and stuff I don't fully understand comes into play, but this is an interesting entry point into this idea that these things don't always have to be at odds with one another even when there's a tradeoff and things are usually at odds. If we think about the other ones, this also becomes true.

These dictionary definitions, I'm sorry, bear with me briefly. This time, instead of the dictionary, I'll quote Wikipedia: "a situational decision that involves diminishing or losing one quality" or whatever "in return for gains in other aspects." Basically, one thing goes down, and another thing goes up. If you think about this a little more generally, this gets weird with the things I just presented to you as tradeoffs. A lot of times, something that's smaller is actually faster. If you do high-performance computing and you can fit your computation into your L1 cache, it becomes massively faster. Take what normally looks like a space-versus-time tradeoff: a common question in compiled languages is, do you inline a function or not? The argument there is that if you inline the function everywhere, your binary gets larger, so that's a size issue, but you get faster. Whereas if you do dynamic dispatch instead, your binary is smaller, because you only have one copy of the code for that function, but you end up being a little bit slower because you have to actually do the dispatch.
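As a rough sketch of that inlining-versus-dispatch point in Rust (the trait and function names here are mine, purely illustrative, not from the talk):

```rust
// Two ways to call the same trait method, with opposite size/speed leanings.
trait Animal {
    fn name(&self) -> String;
}

struct Dog;
struct Cat;

impl Animal for Dog {
    fn name(&self) -> String { "dog".to_string() }
}
impl Animal for Cat {
    fn name(&self) -> String { "cat".to_string() }
}

// Static dispatch: monomorphized per concrete type, so the compiler emits
// one copy of this function for Dog and another for Cat. Larger binary,
// but each copy can be inlined and optimized for its type.
fn describe_static<A: Animal>(a: &A) -> String {
    format!("a {}", a.name())
}

// Dynamic dispatch: one copy of the code; calls go through a vtable.
// Smaller binary, but every call pays for the indirect jump.
fn describe_dyn(a: &dyn Animal) -> String {
    format!("a {}", a.name())
}

fn main() {
    assert_eq!(describe_static(&Dog), "a dog");
    assert_eq!(describe_dyn(&Cat), "a cat");
}
```

Both produce the same answer; the difference is purely in the binary the compiler generates, which is exactly the space-versus-speed knob being discussed.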

In other situations, like in this idea with the cache, these two things aren't at odds and in fact, smaller is often faster. This is also true, for example, if you've ever had to download jillions of gigabytes of JavaScript to load a webpage. You're like, "Man, if this JavaScript was smaller, my page would load faster." A lot of stuff around the web is about making things smaller so they can be faster. Sometimes this is not actually a true dichotomy.

Dynamic versus static typing. The non-trolly resolution to this is gradual typing: you can have things start off a little statically and get more dynamic, or vice versa. Has anyone ever heard people describe dynamic languages as "uni-typed" languages before? There's an argument that all languages are statically typed, because types are only a compile-time construct, and languages that are dynamically typed just have one type, and everything is that one type, so, "Ha-ha, you still have static types." But that's totally useless and only good for making other people mad.

Gradual typing is a better example of how static versus dynamic isn't a real dichotomy. In fact, even most languages that give you static types have some facility for dynamic types, and we're seeing the reverse in dynamic languages: Python now has type annotations in the language and a checker called mypy, which let you annotate the functions you want annotated with types, and your stuff gets checked and all that goodness. It's not exactly a pure dichotomy tradeoff.
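Going the other direction, here's a small sketch (my example, not from the talk) of the dynamic-types-inside-a-static-language facility Rust itself offers, via the standard library's `Any` trait:

```rust
// `Box<dyn Any>` lets a statically typed program hold values of different
// types and recover the concrete type at runtime, like a dynamic language.
use std::any::Any;

fn main() {
    let values: Vec<Box<dyn Any>> = vec![Box::new(1i32), Box::new("hello")];

    // Downcasting is checked at runtime; you get the value back only if
    // you ask for the right type.
    assert_eq!(values[0].downcast_ref::<i32>(), Some(&1));

    // Asking for the wrong type fails safely instead of misbehaving.
    assert!(values[1].downcast_ref::<i32>().is_none());
    assert_eq!(values[1].downcast_ref::<&str>(), Some(&"hello"));
}
```

The runtime check is the dynamic part; the fact that you can't forget the check is the static part, which is the blended middle the gradual-typing discussion is pointing at.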

Then, finally, as I already mentioned, with throughput versus latency you can sometimes do better on both. That doesn't mean you always can, but these are actually different measurements: throughput is about amount over time, and latency is about the time to an individual thing. It's weird to even argue that they're against each other, because they're just fundamentally not measuring the same thing. So, with this idea that tradeoffs don't have to be purely one thing goes up and another goes down: if you have two options, like this or that, and you see my amazing presentation drawing skills here, get ready, you might think that you can choose something that's here or something that's there, and you have to pick one or the other. But in reality, this is more of a gradient, and so you could choose this instead.

It's a little bit closer to one than the other, but bending the curve is about the idea of "What if you could pick this instead?" This is the best I could do with a curve, I'm sorry. I used to write all my slides in HTML and CSS and stuff, but I'm really bad at CSS, so I don't know why I kept doing that to myself, and then the PDF export is terrible, and this conference loves PDFs for your slides, so I just did Google Slides and it's fine. Even just grabbing the middle of the line and bending it up towards a different point shows that we can do other stuff with tradeoffs than just look at two different options. We can get unique solutions by thinking out of the box a little bit. I hope that was an enterprising enough sentence for you all.

The way that we often think about tradeoffs, as I mentioned earlier, is that one thing increases and another thing decreases. This is more commonly known as a zero-sum game, if you do any game theory or economic theory. A zero-sum game basically means that when you add up everyone's scores, they end up being zero: for me to win, you need to lose, and vice versa, although of course, I'm going to be the one winning, that's the idea. The problem is that when you think this way, you start believing that other people inherently have to lose for you to win. It turns out that outside of economic theory and game theory, which are theories, this isn't actually true very often. In most situations, including programming language design, you can design things in a different way, as a win-win game. That's a game in which everyone wins or everyone loses, and we like to focus on the everyone-wins part.

It's not really inherently about you must lose so that I may win. It's about trying to figure out, "Is there a way that we can both win?" A win-win strategy means you try to figure out how that goes. This idea of bending the curve is fundamentally about looking at a tradeoff and trying to figure out a way that we can have both things at the same time. Bending the curve really boils down to asking, "Is this tradeoff fundamental?" Sometimes it absolutely is; there are situations in which someone has to lose for the other person to win. But a lot of times we get too obsessed with that idea, and we apply it where it doesn't have to apply. Can we be a little creative and instead have everyone win? This works way more often than you might think.

Design Is about Values

Before we get into the case studies about what Rust actually did, if we think, "Ok, this is the approach: we're going to try to find win-win solutions instead of you-need-to-lose-so-I-can-win ones," we need to talk about the game that we're actually playing. In order to talk about that, we have to think about the concept of design in the abstract: what is the task? When I'm the person in charge of designing a language or designing a system, what is the thing I'm even trying to do? What is that activity? If you know any architects, you may ask yourself, "What is their job, really?"

This is the part that largely relies on Bryan's work. Fundamentally, design is about values. When you're thinking about a system and you're thinking about building it, you need to understand what is valuable to you, and beyond that, you need to think about it a little harder than just "What do I care about?" You need to think about what your core values are. That is: what is the stuff you are absolutely, totally not willing to give up on? What is the hill you will definitely die on? Then there are also secondary values: things that would be nice to have, but if you don't get them, it'll be ok. This is often a little more complicated, because a lot of people think that you can never compromise on anything, and I definitely am that person sometimes, but a lot of the creative process actually lives here: when you're willing to give a little, what do you get back for it? You would think that having a lot of core values is very useful, and they are in some situations, but they're not as useful as something you're actually willing to trade away for something else. So having a lot of secondary values is also pretty good: what stuff do you want to have but don't necessarily need to have?

Then, finally, what stuff do you just not care about whatsoever? Identifying this is really important too because it means that if there's something that you do really want and you have a thing you're willing to give away, it's really easy to get that thing if you can figure out how to do that particular tradeoff. Being explicit about the things that you don't care about can be just as important as caring about the things you actually do care about.

Let's talk about Rust's core values when it comes to designing things. Now, I will say that I am not on the language design team anymore; it's complicated, and I'll get into the history a little bit later. This is what I see; please take it as my personal interpretation. Rust is designed by a lot of people, so I'm not saying that they necessarily 100% agree with me. That's another funny part about design: you get to argue about things with lots of people. When I look at Rust's core values, I see these three things as being what Rust cares about a lot. I'll mention that they're in a particular order, because the funny thing about core values is that you also need to decide, "If these things come into conflict with each other, which do I actually pick?"

The thing that Rust cares about above all else is memory safety, and there are historical reasons for this. Largely, it's because Rust's whole idea is "What if we could make a C++ that's actually memory safe?" If you were to give up memory safety, it'd be, "What are we doing here?" The whole point of the enterprise would be wrong. Rust also really cares about speed, but Rust cares about safety more than speed; this is why I said they're in order. Historically speaking, these two things are at odds. If there's a situation in which we need things to be safe but we have to give up a little bit of speed, we will do it, but because speed is still a core value, we will try our damnedest to find some other way to get the speed back. Every once in a while there's a situation where that's not actually possible.
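A tiny illustration of that safety-first ordering, using my own example rather than anything from the talk: slice indexing in Rust pays for a bounds check by default, and the opt-out is explicitly marked `unsafe`:

```rust
// Safety by default, with an explicit, marked escape hatch for speed.
fn main() {
    let v = vec![10, 20, 30];

    // Safe indexing pays for a bounds check; an out-of-range index panics
    // instead of reading arbitrary memory.
    assert_eq!(v[2], 30);

    // The non-panicking form turns the failure into a value you must handle.
    assert_eq!(v.get(3), None);

    // When you've proven the index is in range and really need the speed,
    // you can skip the check, but you must write `unsafe` to say so.
    let x = unsafe { *v.get_unchecked(1) };
    assert_eq!(x, 20);
}
```

The default is the safe, slightly slower path; the fast path exists, but the language makes you opt in visibly, which is the "safety first, then claw the speed back" ordering in miniature.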

Then finally, I put productivity here, which is a little bit of, I don't want to say a weasel word exactly, but it's a little hard to define what productivity means. Rust cares a lot about being a language that is useful to people. You'll see this expressed differently in the things Rust doesn't care about later, but basically Rust wants to make decisions that will make it a thing you want to use. That sentence is terrible, I'm sorry. Programmers need to use a language, and Rust is a language that wants to be used, and there are some languages that don't want you to use them, surprisingly enough. We'll get into that, and that's totally fine; it's not a judgment about their values, it's about how their values match your values. These are the core things that Rust really cares about.

Rust's secondary values, the things that we would like to have but are willing to give up a lot of the time, start with ergonomics. In order to achieve safety and speed, Rust has some stuff that makes it a little harder to use. I'm getting ahead of myself, but we'll give up that ease of use sometimes to achieve those other goals, although we still would really like it to be as easy to use as possible. Another one, and this is unfortunate if you've ever used Rust, I'm sure you're not surprised that compile times are a secondary value. The Rust compiler is slow. It's a problem, we are working on it, but we care more about the final speed of your binaries than we do about making the compiler fast. We will always make your program faster even if it makes your compile time slower, and that's just what it is. That said, after this talk I'm about to post the proposal for this year's Rust roadmap, and one of the major features of it is "How are we going to make the compiler faster?" We do care about this, and we want to get it done, but we give it up maybe a little more than we should sometimes.

Then this is interesting: correctness is a secondary value. What I mean by this is that Rust really cares that your programs are right, but we're not willing to go full dependent types, proof assistant, make-sure-your-code-is-proven-right. It should be right, but we're not going to make you prove that it's right. That's why it's a secondary value: we're willing to give up a little bit of correctness sometimes in exchange for, for example, ergonomics. Proof assistants are really hard to use, and I don't expect that many of you in this room have even used one, let alone are comfortable using one. You have to give up a little bit of those correctness tools in order to achieve ergonomics and productivity.
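To make the "correct by types, not by proofs" idea concrete, here's a small sketch of my own (the function is hypothetical, not from the talk): `Option` makes a missing value an ordinary case the caller has to handle, but nothing ever has to be proven:

```rust
// The type system encodes "there might not be a word" in the signature,
// so callers can't forget the empty case, yet no proof obligation exists.
fn first_word(s: &str) -> Option<&str> {
    s.split_whitespace().next()
}

fn main() {
    // The caller must consider the None case to get at the value.
    match first_word("hello world") {
        Some(w) => assert_eq!(w, "hello"),
        None => unreachable!("input had a word"),
    }

    // Whitespace-only input yields None rather than a crash or garbage.
    assert_eq!(first_word("   "), None);
}
```

That's the middle ground being described: more correctness machinery than a dynamic language, far less ceremony than a proof assistant.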

Things that Rust does not care about. I think this first one might be a surprise to a lot of you, but it's actually in the name: blazing a new trail in programming language theory. The name "Rust" evokes a lot of different things, and there's actually no one reason why Rust is named "Rust." The guy who made it originally, Graydon [Hoare], used to just come up with a new reason whenever someone would ask him, so there are six different reasons out there. But one of them is that Rust's programming language theory technology is 2000-to-2006-era technology. It just happens that most of the languages we use today were made in 1995. Rust seems like this really big conceptual leap forward, but if you talk to somebody who's trying to get their PhD in PLT, they're going to be like, "Rust is real boring." A lot of the tech that Rust is built on is actually pretty old, and so Rust is not trying to be a research language. We're not trying to push programming language theory forward. Some of the people on the language team might disagree with me a little bit; they have PhDs, it's cool. We do some new stuff, but it's not a thing we're trying to do as a goal.

Secondly, worse is better. Rust is not interested in just throwing something out there, hoping it's good enough, and iterating. We spend a lot of time trying to get things right, so on the New Jersey versus MIT side of things, we are more on the MIT side: Rust will spend a lot of time iterating on features until they are good, and we are not willing to just throw stuff out there. The way that you can see this is in our standard library. Rust has a very small standard library, and that's because we're not totally sure that we have great libraries for things yet. We're not going to put an XML parser in the standard library unless we think we've got a great XML parsing library, because the standard library is where libraries go to die, and that's no fun for anyone. We tend towards the correctness side rather than the throw-something-out-there side of things.

Then, a last one, which is interesting for a systems language: supporting certain kinds of old hardware. A specific example of this is two's complement versus one's complement integers. If you're a C or C++ programmer, you may know, I hope, that the representation of signed integers is not fully defined, and that leads to all sorts of fun shenanigans. That's because C and C++ were developed in an era where a lot of hardware had different representations of integers, so an implementation is allowed to pick one's complement, two's complement, or sign-magnitude for integers. We basically said, "Listen, literally all the hardware that gets made today uses two's complement integers, so we're just going to assume you have two's complement integers, and you can use a different language if you are programming a machine from the '70s."
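Since Rust simply defines signed integers to be two's complement, behavior that C leaves up to the implementation is nailed down. A small sketch (mine, not from the talk) of what that buys you:

```rust
// These all hold on every platform Rust targets, because the language
// guarantees two's complement rather than leaving it to the hardware.
fn main() {
    // Reinterpreting -1 as unsigned gives all bits set: 255 for 8 bits.
    assert_eq!(-1i8 as u8, 255u8);

    // The asymmetric two's complement range is guaranteed.
    assert_eq!(i8::MIN, -128);
    assert_eq!(i8::MAX, 127);

    // Overflow semantics are fully defined when you ask for wrapping.
    assert_eq!(i8::MAX.wrapping_add(1), i8::MIN);
}
```

In C, the analogous signed conversions and overflows are implementation-defined or undefined; here they are just ordinary, portable facts about the language.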

This hardware point is so true that there's actually a paper right now; the feature freeze for C++20 just passed, but the next iteration, C++23, might also declare that it only supports two's complement hardware, because it turns out it's been a long time since anyone made one's complement machines, except for one company, and everyone's like, "Come on." Anyway, we're willing to eschew hardware support for certain kinds of old things. We don't have those kinds of integer undefined behaviors, because we're willing to just say it's two's complement, and that's fine. That's a tradeoff that we are willing to make. Those are some examples of our values.

A little bit more about values and design: an interesting thing is that it's not just you who has values in the system you're trying to build, it's also your users. They have a certain set of values for the things they're trying to accomplish. As a programmer, I think it behooves us to think about not just the values that we hold, but the values that the people who use our software hold, and you should use tools that align with your values. I really like programming languages and learning new ones, but there are some that I have seen where I'm like, "You know what? This language is not for me, so I'm just not going to use it." I'm not going to denigrate any languages by naming them, but it's true that I would be unhappy if I had to program in some languages, and that's because they value different things than I value, and that's totally chill. There are other people who have different values than me; they can use those languages, and they are super happy. That's literally why we have different languages, it's fine. I've had frustrations with tools where I was forced to use something and I was like, "Man, this tool sucks," and then I realized it wasn't that the tool sucked, it's that it cared about different things than I cared about. That weirdly made me more okay with using the tool, because I was able to just be like, "I understand why this friction is happening," and it made the job easier.

In general, those kinds of mismatches can cause those problems. I find that a lot of programmers arguing on the Internet about whether something is great, or terrible, or awesome, or horrible really comes down to: that person has a certain set of values for the things they create, and they're talking about a thing that cares about completely different things. That's where a lot of arguments happen. For more on that, watch Bryan's talk, it's great.

When should you use Rust? Before we talk about the specific tradeoffs, I figured I would give some examples of when Rust might make sense for you. If you find these values to be true in the software you write, you may want to use Rust; if not, don't, it's cool, there are lots of great languages. I think that Rust is ideal whenever you need something that's both reliable and performant. Yes, performant is a word, I don't care what you say. Language changes over time, deal with it. I've had a lot of bad arguments on the Internet, I'm really sorry, and that's really shaped my worldview in many ways. There are people who care if you use the word performant and they will get mad at you, and I'm expecting tweets about it later. Performance is important, reliability is important; when you need those two things, you might want to look at Rust.

It's interesting, because a lot of people are like, "Well, when wouldn't I care about reliability and performance?" Let's be serious: think about some systems you've built; there have been a lot of them that were not reliable or performant. There are times when you are willing to trade away those things, and that's totally cool. A lot of the "rewrite it in Rust" meme comes from places that built a system that was not necessarily reliable or performant, then got to scale and realized, "Oh my God, we need reliability and performance," and they rewrote a portion of it in Rust and were happy. That's a really great strategy for managing these kinds of tradeoffs; I'll talk a little more about that later. Sometimes you need these things, sometimes you don't, and that's cool, and yes, as I mentioned with the rewrite stuff, sometimes you don't need these things immediately. We'll be here, it's cool, go write stuff in other things.

Case Study: BDFL Vs Design by Committee

Let's talk about some case studies. The first couple are going to be about the design of Rust itself, tradeoffs in that design, and the way we approached the design process, and then I'll get more specific; we'll talk about threading models at the end. This is going from broad to concrete. BDFL versus design by committee: this tradeoff involves who is building your system and who gets to make the calls. Who's the decider? One model is the BDFL model, the benevolent dictator for life. They rule over their project with hopefully a velvet fist, not an iron fist; I hope I'm not mixing too many metaphors. They need to be benevolent, or else you've just got a dictator, and that's bad, but if they're benevolent and generous, it's probably good. A lot of people like this model, and a lot of programming languages are designed this way.

The other option is "design by committee," where a bunch of people who are not invested in the system make the decisions. There's this quote, I forgot about it when I was making these slides: "A camel is a horse designed by committee." I don't think that's really fair to camels; I also have a Perl camel tattoo. But when you look up the definition for this, a lot of people think, "Oh, if something's designed by multiple people, then it has a real chance to go awry." So we have these two options: we let one person make all the decisions, and if they make a bad decision, we're totally screwed; or we let a lot of people make decisions, and when they make bad decisions, we're totally screwed. Which one is actually better? How can we do things differently?

Rust never really had a BDFL, but we went from one person making decisions to lots of people making decisions over time as the project developed. Originally, Rust was a side project of Graydon [Hoare]'s; he got to decide everything because he was the only person working on it. That's just what happens: you start a project, you're in charge. He was always extremely upfront that he was not the BDFL, which made a lot of people go, "That's a great sign in a BDFL." Eventually, he gave up his power to a bunch of other people, at which point even more people wanted him to be the BDFL, because they were like, "You're willing to give it up, so you're going to be great," and he was like, "This whole thing makes me uncomfortable. No." We developed the Rust core team, and so that became a small group of people whose job was to make decisions about the language.

Eventually, we ran into issues of scale; I believe that's my next slide. We transitioned from having a group of people to having a group of groups of people. Now, the Rust project has our core team, which I'm a member of, but we also have what used to be called sub-teams and are now just called teams. For example, I'm a member of the documentation team as well as the core team; I'm on both, and there are some people who are on only one team. The idea is that all of the teams are actually equally in charge. The Rust core team is more of a tie-breaking organization at this point than a hierarchical thing, but that's also complicated and weird; we don't really vote, so we don't have ties to break, but it's fine. The important part is that Rust used to have one person in charge, then it had six people in charge, and now it has about 100 people in charge. We've changed a lot as this has worked out.

The reason this happened is basically scale. As the project grew, we ran into limits. I was on the core team when it was the only team, and the problem was that it was our job to decide on things. Every week we'd have a meeting, and we would decide on all the things there were to decide on, and by that I mean there'd be this big giant list. We'd get through some of them, and the next week there'd be even more added onto the list, so it just kept growing, and people became frustrated that the core team was becoming this bottleneck. Members of the core team were frustrated because not every one of those decisions was relevant to every member of the team. If you wanted to talk about variance in our type system, I would, like, read Reddit while those meetings happened; I wasn't paying attention, but I still had to vote because I was on the team, so that's weird and dumb. Then when I really wanted to talk about whether we'd choose British or American English for our documentation standards, that was my jam, and the people with the PhDs in type theory were like, "Yes, whatever Steve says, that's fine."

It took so long to get through all these decisions that people would be like, "I've been waiting a month for you all to make a decision on my pull request, what's going on?" We'd be like, "Sorry. We've got a lot of stuff to decide." In order to scale, we decided to make more than just the core team. That was a creative solution to this problem, and it's been helpful, but it comes with new problems of its own, because now when you have 100 people and 15 teams, they all have to coordinate. Recently, we announced that we're about to make a governance team, which is basically a teams team. Its job is to figure out where there are coordination issues between the teams and help the teams work together; it's a team-making team. Programmers love recursion.

One of the tradeoffs people bring up between a BDFL and design-by-committee is that the BDFL has a grand vision that he toils over like an artist or whatever, and design-by-committee has no taste. One of the problems when you move to multiple people is that you lose this cohesion unless you're explicit about your design values. We all have to agree on the principles that we use to make these decisions. That's something we've been getting a little bit better at: communicating to each other how we make these decisions and why, and dealing with those problems. I don't want to say that having 100 people run your language is a panacea, because it is not, but it has definitely helped with the bottleneck of having documentation people decide on language features.

Case Study: Stability without Stagnation

I had an argument on the Internet with somebody recently about what stability meant. They were like, "You added a new API this release, so it's not stable; stable means unchanging." I'm like, "Oh God." Stability means things don't change, but if you never change, then you're also not growing. Growth requires some amount of change. You want to be stable enough that your users aren't dealing with "We changed everything, now your code doesn't compile," but you also don't want "Sorry. We can't fix that bug because it's relied on in production by this large company." This is a tradeoff that you have to deal with: we want to be able to have change, but we also don't want it to affect people who don't want it; opt-in change. We don't actually think that these two things are inherently at odds.

There's this blog post called "Stability as a Deliverable." I'm going to have a couple of little citations from it; if you want to look it up, it's on the Rust blog. I'm sure you won't type out that URL in the two seconds it takes me to describe this, but you can just Google for it on the Rust blog, and it's a thing. It lays out our plan and our approach to stability, and I'm not going to get too deep into the weeds, but basically: we don't want stability to mean that Rust will stop evolving, and we want to release new versions of Rust frequently and regularly. In order for people to be able to upgrade to these new versions, it has to be painless to do so. Our responsibility is to make sure that you never dread upgrading Rust. If your code compiles on Rust 1.0, it should compile on Rust 1.x without problems. This is the continuous integration idea; all the rhetoric around continuous integration and continuous deployment is: if you deploy twice a year, you fear deploy week, but if you start deploying every week, you get better at it. If you deploy often, you will be better at deploying, so let's do that.

We approached the language the same way: if we release the language often, then we will do a better job of making sure that we don't break stuff, because it's not just once a year that we check in with our users to see if we broke all their stuff. This is what we do; we actually copied the browsers. We land stuff on master behind feature flags, and then every six weeks, master gets promoted to Beta, and the previous Beta becomes Stable. If you've ever used Chrome or Firefox, you have probably seen this model: every six weeks, your browser's like, "Hey, a new version of the browser came out." We did the same thing with Rust, and that lets us do these releases, but things don't get off of Nightly, they don't get into a release, until they're explicitly marked as stable. What that lets us do is experiment with new features on Nightly, and actually ship the code and put it into the compiler, but that won't affect stable users, because you're not allowed to use Nightly features when you're on Stable.

This lets you, as a user of Rust, get involved if you want: you can try out new features and get goodies while they're still cooking by downloading a Nightly version of the compiler, trying them out, and giving us feedback. If you don't want to deal with that, because that's a pain, then you can use Stable and never have to worry about it, and Stable becomes really easy to update, and all those kinds of things. This slide says what I just said; I'm not going to read slides to you.

What's the tradeoff here? The thing with bending the curve is that when you introduce a third thing into your this-or-that, you're also probably introducing a fourth thing: you're giving up something there too. I don't want to say this means you get everything. This process is a lot of work for us. We have a team for that; it's called the release team, and also the infrastructure team; they both deal with this problem. We had to put two teams together to work on it; that's the tradeoff.

We also invested a lot in continuous integration because we needed to be testing. We actually periodically download every open source package in Rust and try to compile it with the next version of the compiler, just to double-check that we're not breaking your stuff. That's really cool; it also means Mozilla's paying a bunch of money for some service, so thank them. We developed a lot of bots, so bors is our continuous integration bot; it makes sure that everything passes the test suite before it lands. This also means that bors is always number one on our contributors list, because he merges every single pull request. I've got lots of funny stories about that, but there's no time, so sorry; you can ask me about them later, but bots are awesome. Basically, this is one of those "our users versus us" tradeoffs: we're willing to put in effort to make things easier for our users, and that is a tradeoff that we will almost always take. It is a tradeoff, and we pay the price for you.

Case Study: Acceptable Levels of Complexity

There are two different kinds of complexity: there is inherent complexity, and there's incidental complexity. Inherent complexity is just, "It's actually complicated," and incidental complexity is, "You made it complicated when you didn't have to make it complicated." Separating out these two things is important, because you can't always make inherent complexity go away; it's inherent, it's defined in the word. But incidental complexity is the thing that you can fight, because it's about you accidentally making things more complicated than you needed to. That's a skill to work on, I think. What's interesting is that something can be inherently complex for one design, but incidentally complex for another design. That values list that you picked earlier can often determine whether something is inherently or incidentally complex.

Here's what I mean by that. Alan Perlis is this guy; I don't actually know what he did other than write witty stuff about programming, to be honest, but he has a thing called "Epigrams on Programming," and I found several of them that I think are interesting for Rust: "It is easier to write an incorrect program than understand a correct one." "A programming language is low level when its programs require attention to the irrelevant." That one was my favorite. Then finally, "To understand a program you must become both the machine and the program." He wrote these in the early '80s, I believe; I'm not totally sure, but I think these all apply to Rust. What I mean by that is that Rust does want to help you write correct software, and Rust does want you to write fast software. In order to do that, we expose a lot more error handling than many languages do, because a lot of stuff can go wrong when you're writing programs, as it turns out. The network can die in the middle of a connection, your user can type in something that doesn't make sense; all sorts of errors happen. We expose those errors where many other languages just hide them away. This happens at the language design level, because we have a type called Result that's returned from fallible operations. We don't have exceptions; a lot of languages hide a lot of stuff in exceptions, which is where you get that "catch all and re-throw" thing: "Yes, whatever. I have no idea what exceptions this is throwing, so I'm just going to catch them all and re-throw. Somebody else can deal with it somewhere else." That's not great for correctness, but it is easy to do.

We've introduced stuff like the question mark operator to help reduce the complexity, but it's still always going to be there, because we want you to be able to handle errors, and that's important. That's a way in which our design has made something inherently complex. Languages that care less about correctness are able to just say, "Yep. Throw us a bunch of random crap," and that's fine, and it becomes much easier to use. They're able to get rid of that stuff, and so it's not inherent for them.
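To make that concrete, here's a small sketch of my own (the function name is made up, not from the talk): a fallible operation returns a `Result`, and the `?` operator propagates the error up to the caller instead of an exception flying through invisibly.

```rust
use std::num::ParseIntError;

// A fallible operation returns a Result instead of throwing an
// exception, so the caller is forced by the type system to handle it.
fn parse_and_double(input: &str) -> Result<i32, ParseIntError> {
    // `?` returns the error to our caller on failure; this is the
    // sugar that replaces a lot of explicit `match` boilerplate.
    let n: i32 = input.trim().parse()?;
    Ok(n * 2)
}

fn main() {
    // Both outcomes are visible in the type, not hidden in a throw.
    assert_eq!(parse_and_double("21"), Ok(42));
    assert!(parse_and_double("not a number").is_err());
}
```

The key design point is that you cannot get the `i32` out without saying, in code, what happens when the operation fails.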

One way that Rust does safety and speed together, achieving those two values at the same time, is by having a really great static type system, because types are checked at compile time, so they're free. Remember what I said about long compile times earlier: they're not actually free, but at runtime they're free. That's cool, but if you've ever used a really strong static type system, you know they're complicated, and that means that as a user, Rust is a little more complicated for you to use. The benefit you get out of that is programs that are really fast. That's cool, but it means we have this inherent complexity in order to achieve our goals, and these things actually matter. If your goal is not to have safety and speed at the same time, but only to be fast or only to be safe, then you don't need these complicated type systems, and things become a lot easier for your users. That's not inherently complex anymore.
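Here's one small illustration of "checked at compile time, free at runtime"; this is my own sketch, not an example from the talk. A newtype wrapper turns a unit mix-up into a compile error, while costing nothing in memory at runtime.

```rust
// Newtype wrappers: distinct types to the compiler, but at runtime
// each is just a bare u64 in memory.
#[derive(Debug, PartialEq, Clone, Copy)]
struct Meters(u64);

#[derive(Debug, PartialEq, Clone, Copy)]
struct Feet(u64);

fn add_meters(a: Meters, b: Meters) -> Meters {
    Meters(a.0 + b.0)
}

fn main() {
    let total = add_meters(Meters(3), Meters(4));
    assert_eq!(total, Meters(7));

    // This line would be a compile-time error, caught for free:
    // add_meters(Meters(3), Feet(4));

    // Zero runtime cost: the wrapper is the same size as a bare u64.
    assert_eq!(std::mem::size_of::<Meters>(), std::mem::size_of::<u64>());
}
```

The safety check happens entirely during compilation; the generated code is the same as if you had passed raw integers around.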

Case Study: Green vs System Threads

Last case study before I go away: green versus system threads. This is the most complicated, most concrete case study I have for you. There are two different models of threading; there are more as well, but for the purposes of this talk, only these two exist. I'm not going to get into the details that much, but basically, system threads are provided by your operating system. They're an abstraction for running code: you say, "Hey, OS, please run some more code at the same time," and it goes, "Cool." It doesn't actually run at the same time, but that's a whole separate story. Green threads are an API that's offered by your runtime. This is a programming language saying, "Hey, I have this mechanism for running code at the same time," and you're like, "Cool, I'll use that." Sometimes this is called N-to-M threading, because you have N system threads running M green threads, and sometimes system threads are called one-to-one threading, because each thread is one operating system thread. These terms are also incredibly loose, and you can argue about them a lot on the internet if you want to. You can argue about a lot of things on the internet if you want to.

Some of the tradeoffs involved in picking between these: system threads require that you call into the kernel API, and they have a generally fixed stack size. Yes, you can tweak it; this is a slide, I'm not putting every last detail on here, but you get 8 megabytes by default on x86-64 Linux. Green threads, however, because they're run by your program, involve no system calls. That's cool: no overhead to call into the kernel, and they have a much smaller stack size. For example, a goroutine currently has 8 kilobytes of stack; it used to be even smaller, but they found out that was too small and made it a little bigger. From this set of tradeoffs, it looks like you always want green threads. Why would you ever use system threads? Green threads are just better.
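For reference, spawning a system thread in today's Rust looks like the sketch below (my own example, not from the talk); `std::thread::Builder::stack_size` is the knob for the fixed stack size mentioned above.

```rust
use std::thread;

// Spawn a system thread with an explicitly small stack, run a
// computation on it, and collect the result via join().
fn sum_on_thread() -> u32 {
    thread::Builder::new()
        .stack_size(64 * 1024) // 64 KiB instead of the platform default
        .spawn(|| (1..=10).sum::<u32>()) // runs on the new OS thread
        .expect("failed to spawn thread")
        .join() // block until the thread finishes
        .expect("thread panicked")
}

fn main() {
    assert_eq!(sum_on_thread(), 55);
}
```

Every `spawn` and `join` here is a call into the kernel, which is exactly the overhead the green-threads column of the slide avoids.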

As I mentioned earlier about the way Rust development changed, sometimes your values change over time. Rust was pre-1.0 for five whole years. Originally, even though Rust had the same design goals and values, it expressed them very differently. Originally, Rust was actually much more similar to Erlang than to C++; it was awesome, but a little weird. It provided only green threads, because that's what Erlang does, and as the previous slide showed you, obviously you'd pick green threads in every situation. Over time, Rust got lower and lower level, and we were able to commit more to our performance goals by doing shenanigans.

We had to reevaluate this choice. This was such a contentious change that there were actually people threatening to fork the language over it. That's another story I don't have time for. The argument goes like this: "You're supposed to be a systems programming language, but you don't provide access to the operating system's API? What does that even mean?" And we're like, "Yes, that makes sense." Then there's also a downside of green threads: because you have these small stacks, and they're different stacks, if you want to call into C, you have to switch to the regular operating system stack. That has a cost, and that cost is totally at odds with our previously stated performance goal as well. We tried to bend the curve, and we failed.

What if we had a unified API that let you pick? Do you get green threads or do you get system threads? Whichever one you want. You write code one way; they're just threading models. You spin up a new thread; it doesn't need to be a green thread, a system thread is fine; let's do both. We had this "libnative" versus "libgreen," and you'd pick the one you wanted in your program, and people would write libraries abstracted over them that didn't care about the threading model. You'd just get to do whatever you want, and everything would be wonderful. The problem is that this gave you the downsides of both and the advantages of neither. It turns out that our green threads weren't very lightweight; they were actually pretty heavy. There are some other things, I have some lists here, I'm not going to read all of this to you, but basically, some things only made sense for one model and not the other, yet both models had to support both things. That was awkward. There was a problem with I/O, because only some stuff worked properly across both, or there were just implementation issues.

Embedding Rust: if you want to write Rust on an embedded system, you'd have to say, "I'll never support the green threading runtime," but the whole point was that you were supposed to be agnostic. How does that go? Then finally, we had committed to maintaining both of these things; both had to be good enough, and that's a really big burden. We eventually decided to kill it, and that was bad, but we realized that we were able to commit to some values more than others, so the answers came out differently. There are other languages that only give you green threads. That is awesome for them; they have different values than we do. You should take time as you're designing a system to check back in with yourself and say, "Hey, have my values changed since I originally made this decision? Because maybe the decision I made was a bad one, and I need to reevaluate it."

Now, we only have system threads in the standard library, because when you have no runtime, it means that you can write your own runtime and include it if you want. There are two different packages, one called Rayon and one called Tokio, and they are both ways of doing green-threads-style concurrency for different kinds of workloads. I don't have time to get into it; we can talk about it afterwards if you'd like. There are tradeoffs here as well: for example, now you have to know about Rayon and/or Tokio, and you have to pick the right one to use; that's complicated. Then finally, what happens if people make six packages to do this instead of two? There are some downsides, but I don't have any more time.

With that, thank you so much for coming to my talk. Three things to take away from this: tradeoffs are an inherent aspect of our domain; if you think outside the box, you can sometimes have your cake and eat it too; and you should use Rust when you really care about something being both robust and fast.


Recorded at:

Jun 19, 2019
