
Everyday Efficiencies


Summary

Todd Montgomery explores the everyday things that those with an eye to performance and efficiency do that can be leveraged by anyone to build better software faster.

Bio

Todd Montgomery works as an independent consultant on high performance systems and is active in several open source projects, including Agrona, Aeron, ReactiveSocket, and the FIX Simple Binary Encoding (SBE). He has researched, designed, and built numerous protocols, messaging-oriented middleware systems, and real-time data systems, done research for NASA, and co-founded two startups.

About the conference

Software is changing the world. QCon empowers software development by facilitating the spread of knowledge and innovation in the developer community. A practitioner-driven conference, QCon is designed for technical team leads, architects, engineering directors, and project managers who influence innovation in their teams.

Transcript

Montgomery: My name is Todd Montgomery. We're going to talk about everyday efficiencies. I always say this, but I actually do mean it: every time I give a talk, it feels pretty neat to be able to put together a lot of the thoughts that I've had, to reflect on them, and to talk about the things that bother me.

This is a talk that has a little bit of a rant to it. It also has, I hope, some information that you'll find useful. I'll try to leave a good bit of time at the end for questions, because there are some details right at the end that I want to go over in a little more depth, and I wasn't sure how else to get them across. Any questions you have, feel free to raise your hand.

A little bit about me: I was a researcher for NASA in Software Safety. I got to work on some cool projects like Shuttle, Station, Cassini, and the Mars Rovers in various capacities, which, as a young engineer, had a lot of impact on the way that I still view things. Since NASA, going through and looking at high-performance software, mostly in financial markets, there are a lot of things that I've noticed. Today what I want to talk about is why we should care about efficiency. Is it just in things like fintech and ad tech that we actually care about performance and efficiency, or does it go deeper than that?

We'll also talk a little bit about some of the ways that we tend to do things inefficiently and think that it's ok. Then let's talk about some things that have a dramatic impact but that anyone can do. It doesn't matter what software you're building.

To give you an idea of why you should care, here are a couple of headlines. Our data centers are growing, and a lot of data centers have talked about being very energy efficient, using renewable energy, things like that. But look at the actual data; these are from three different articles, one from Forbes, one from Nature, and one from DatacenterDynamics. This diagram from Nature highlights it: in 2030, 10 years from now, 21% of projected electricity demand will be data center demand. That's a lot of energy. If we are concerned about climate change, shouldn't we be concerned about the amount of energy that we are using in our data centers? I think it would make sense. It goes a little deeper than this, too. This is the projected demand; it could be as little as 8%, or it could be a lot more, depending on where things go. Looking at this, the biggest reason we might want to care about performance and efficiency is that we do need to be a little bit better with the resources we have at hand.

Let's think about this from another perspective. When we think about efficiency or performance, it's a nonfunctional requirement. Here's some other really big nonfunctional requirements: performance, quality, robustness, safety, stability, usability. Here's an even bigger list from Wikipedia. I want to point out a few things that are nonfunctional here: privacy, reusability, stability, supportability, disaster recovery, documentation, compliance. These are all things that we normally think of as nonfunctional requirements.

When you hear that word, nonfunctional, do you actually think about what it says? Is the system nonfunctional when these requirements aren't met, so that the double negative means it's functional when they are? Of course not. I'm being a little facetious. These really are unspoken, incomplete functional requirements. Sometimes they actually become functional requirements in the way that we talk.

The uncomfortable truth is that all these nonfunctional requirements are, at best, an afterthought, because they aren't really an issue until suddenly they are. I'm sure this has happened to you. Think about quality for a minute and the software you develop. Everything is going fine, then a critical bug is found that affects all the customers. What is the immediate reaction? It's usually an overreaction. What happens when this is a performance issue? The same thing. Performance is suddenly more than just a nonfunctional requirement; it's now a crisis to be solved. When this happens, it's often too late to do something drastic, so we look at quick fixes. Whether it be quality, security, or any of these things, a lot of the quick fixes don't work very well. They are very specific, very point-related, and they don't hold up for very long.

Here's an example of one. In the age of the cloud, we just throw machines at it. We usually make things fairly good scalability-wise, and we just say, "We'll just deploy a few more servers." Well, I've got news for you: that's about to come to an end. The blue curve there is Amdahl's Law. What it says is that as you add processors, your speed-up will plateau based on what percentage of the workload is serialized, how much of it is contended. This is a graph where just 5% of the workload is serialized. No matter how many processors you start throwing at it, your speed-up will never go above 20. You can go ahead and throw in a few thousand more processors, but you're not going to get anywhere.

It turns out that Amdahl was an optimist, because that red line is Neil Gunther's Universal Scalability Law. What that says is that the coherence penalty associated with making all the nodes coherent means you don't just stop speeding up; you actually get slower. The idea of "we can't handle demand, we'll just throw some servers at it" starts to break down. This is with 200 nanoseconds of coherence penalty, a very small amount, and this is what you end up with. You can't simply throw machines at it. At some point you run into this, unless you've been very lucky.
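To make the two curves concrete, here's a minimal Java sketch of both formulas. The 5% serialized fraction is the talk's number; the 0.0002 coherence coefficient is an illustrative stand-in for a small coherence penalty, not a figure taken from the slide.

public final class ScalabilityLaws {
    // Amdahl: speedup(n) = 1 / (serial + (1 - serial) / n)
    static double amdahl(double serialFraction, int n) {
        return 1.0 / (serialFraction + (1.0 - serialFraction) / n);
    }

    // Gunther's USL: speedup(n) = n / (1 + sigma*(n - 1) + kappa*n*(n - 1))
    // sigma models contention (serialization); kappa models the coherence cost.
    static double usl(double sigma, double kappa, int n) {
        return n / (1.0 + sigma * (n - 1) + kappa * n * (n - 1.0));
    }

    public static void main(String[] args) {
        for (int n : new int[] {1, 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024}) {
            System.out.printf("n=%4d  amdahl=%6.2f  usl=%6.2f%n",
                n, amdahl(0.05, n), usl(0.05, 0.0002, n));
        }
    }
}

Run it and you can watch Amdahl's curve flatten out below 1/0.05 = 20 no matter how large n gets, while the USL curve peaks and then turns back down as the n*(n - 1) coherence term takes over.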

Premature Optimization

We've been told that the root of all evil is premature optimization. Let's take a look at the real quote for a moment, because I think we use it as an excuse, and it's worth taking a moment to talk about where it actually comes from. It's from Donald Knuth's Turing Award lecture, where he basically says, "The real problem is that programmers will spend far too much time worrying about efficiency in the wrong places and at the wrong times. Premature optimization is the root of all evil, or at least most of it, in programming."

First, Knuth was talking as someone programming in Assembly. Remember that. Secondly, we don't have a lot of the same issues, or the same tools, that Knuth had. It was a very different world, a world in which what you needed to do was subject to a lot of different constraints: CPU constraints, memory constraints, IO constraints.

Let's look at it even further. There are a couple of different versions of the quote. In fact, Knuth later said that it's actually Hoare's dictum, and Hoare then pointed it back to Donald Knuth, so we're not sure who started it. The context is quite different from what we normally think: "Programmers waste enormous amounts of time thinking about or worrying about the speed of noncritical parts of their programs, and these attempts at efficiency actually have a strong negative impact when debugging and maintenance are concerned. We should forget about small efficiencies, say about 97% of the time. Premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%."

There are a couple of things to highlight in these two quotes: "wrong places and at the wrong times," and "noncritical parts of their programs." Is he really saying that you should never optimize your applications? No, of course not. He's talking about doing it at the right time and in the right places.

What it really comes down to is something we know a lot about: the Pareto principle, sometimes known as the 80/20 rule, or as a power law. What it basically says is, concentrate where 80% of the effect is and you'll be much more effective than if you concentrate on the other 20%. This is what he's really talking about. First, when you are concerned about efficiency and you want to optimize, whether at the start of a project, in the middle, or at the end, that's when it is important. Second, when you look at it, concentrate on the big wins first, not on the small ones, because he really does say we "should not pass up our opportunities in that critical 3%." It's about going in with your eyes open, as opposed to simply ignoring things.

What I take from this quote is that he wants us to be guided to the "where" of optimization and efficiency. The "when" is usually when it's needed, and we'll talk a lot more about that. But the "where", as in where in the code should I look, where in the design should I look, needs to have a lot of thought attached to it, because you can go down a rat hole and not actually produce something impactful. The quote has often been taken out of context and sometimes used as an excuse, and that's fine. Trust me, if that's the last quote we do that with in computer science, we're lucky.

There are other things we say, too. I've seen this at a lot of the clients I go to: I start talking about performance and trying to figure out the performance problems they have, and I can't count the number of times I've heard, "It doesn't have to be fast." I've thought about this a lot, and I usually come back with, "It doesn't have to be slow either." We don't intentionally make things slow. It happens. The question is, how does this happen?

It Doesn’t Have to Be Fast

Let's think about this in a slightly different way. Let's look at more nonfunctional requirements to show how wacky this statement is. It doesn't have to be fast. It doesn't have to be secure. It doesn't have to scale. Does it have to work? I mean, that's quality. It may not have to be a certain level of fast, but if it's too slow, is that good? I think a lot of this comes down to the assumption that speed, security, and quality are special characteristics added later. How many of you think that you add speed later? How many of you think that you add quality later? Security?

You can improve it later, but you can't simply add it if it's not there to begin with. You can try. When we use the "but it doesn't have to" argument, what we want to say is, "It's not my fault. It's not my concern right now." That's fair. Let's look at something akin to this in another engineering discipline: mechanical engineering. What is the top speed of a normal sedan out on the road? We're in the U.S., so we'll go with miles per hour. Does anyone believe it's more than 120 miles per hour for a normal, everyday car? How many of you think it's below that? Ok, so it's roughly that. How fast do you think a Formula One car goes? Let's say top-end it can do 250, 260. What's the difference between those two? Two times, three times?

Now let's carry that over to software. I've worked on systems that do 100 million transactions per second. Should that mean that the slowest systems we have, or the more nonstandard ones, only go about half that, 50 million? How many systems do you know that go at 50 million? Aren't our systems more in the range of 300 transactions per second, or 3,000, than 30 million or 50 million?

The difference between the top end in mechanical engineering and the middle of the road in that discipline, at least for automobiles, is not even an order of magnitude; it's smaller. Are we doing something wrong, or does the comparison just not equate? Something to think about. It comes down to this: why are things inefficient? Why do we end up with inefficient things so often? This is really the crux. When we have done something inefficient, we may not know that we've done it. If we are aware that we run the risk of making something inefficient, then we can address it. This is what I really want to talk about. The biggest thing I've noticed about why things get to be inefficient is that we don't have enough time. There's the famous quote from Benjamin Franklin, "If I had more time, I would have written a shorter letter." It's that idea that if we had enough time, we could do it well.

Sometimes when managers look at it in hindsight, a lot of times they go, "All those lazy devs, they didn't think about that." Ok, that's fair. Gaps in knowledge: maybe you didn't know, and that's perfectly fine. Too much complexity: our systems are incredibly complex. Not being able to see everything that goes on and wrap our heads around it can often lead us to doing things inefficiently, because we don't know that something has already been error-checked, or has already gone through a certain process, before the data arrives at the piece we're working on.

There are lots of other reasons. None of these is bad, including being called lazy. In fact, I love being called lazy. The end result of all of this is bad design choices. We should all actually be happy about this: bad design is simply not having enough time, or not being able to take everything and hold it in our heads. All these things are fixable. We can do better next time, and that's fine, but it all starts with design. That's what I really want to talk about in terms of everyday efficiencies.

Performance, Quality, Security

In fact, I think it goes beyond that. Performance, quality, and security all start with design. Adding them on later is sometimes possible. It's not easy, but if you think about certain things upfront, it's a lot easier to add them and make them better going forward.

There are three things I think about when I consider what I value in the everyday things I see, no matter what the code is: latency-critical code, non-latency-critical code, even just scripts that I write to do things. First, I want to be lazy. In fact, I'm a big guy, I don't like to move a lot; I want to be lazy. Second, I don't reward bad ideas. I don't want bad ideas to stay around; I want to move on from them. Third, I don't want to be naive. When I talk about being naive, I mean I don't want to just follow dogma. I don't want to say "always" and "never" and adhere to them as if they were rules written to keep me in a lane. I want to think, and I want to be open to ideas.

The first thing: being lazy. Good engineering is laziness. I fully believe this; I've always believed it. You want to be too lazy to do something complicated. You see something complicated, you can't fit it into your head, and you think, "I can't implement this. I'm lazy. This will take me a day to do. I have to find something that will take me a couple of hours." I want less typing, more thinking.

There's the quote attributed to Abraham Lincoln: "If I have to chop down a tree in eight hours, I would spend six of them sharpening my axe." That's laziness. It may not seem like it, but it is. Thinking, while it consumes calories, is something you can do while you're doing other things. That thinking and being lazy, asking "How can I save myself work? How can I save the company work?", is good laziness. But you can't be too lazy, because you want to keep making things better. You think about it, you implement something. If you're too lazy, you'll look at it and go, "I'm done now." You have to be not so lazy that you abandon it. You have to be just lazy enough that you're dedicated to keep making it better.

You can think of that more as a kind of obsession. Certainly, I know people who are obsessive about algorithms and trying to figure out a better solution. I do that; I obsess over the things I'm working on to the point where sometimes I can't put them down. It'll be 3:00 in the morning, and I'll wonder why I'm so tired. That's not really lazy, but in a way it is: you want to find a better solution, and you're doing it to save yourself work later.

The second thing: don't reward bad ideas. We do this in business all the time. We invest time in something, we won't admit it's a bad idea, and then we can't abandon it because we've already sunk the cost into it. The last thing we should do is repeat this in our designs. If you come across a piece of the design that does not work, that does not do what it's supposed to do, move on. Don't be afraid to do that. It seems to go against the lazy part, but it actually fits with it, because you're trying to solve something and save yourself work later. Letting bad ideas stay around means doing more work later to fix them. Don't be afraid to move on. Don't be afraid to try something new.

Also, don't be naive. Absolutes are naive, and we fall for them all the time; we're really bad about this as a discipline. We say, "You should always do this," or "You should never do that." Some of these have really good reasoning behind them, but there's always an exception to any such rule, and that dogma is what holds us back from the second thing, moving on from bad ideas. It's better to think of it as: at this point in time, favor one thing over another. Consider that first. If it doesn't work out, don't be afraid to try something else.

Take the discussions about immutable versus mutable data structures. I've got news for you: there is no right answer. It is highly dependent on a lot of different factors. The last thing I'll tell anyone is that you should always do something, or that you should never do it. "Always" and "never" are words I've actively tried to beat out of myself over the years. It's partially about being open, but it's also partially about being honest, because every time you apply something based on "Todd said to always do that," I guarantee you I'll be wrong at some point.

Concrete Suggestions

Here are some concrete suggestions. I'll go through these at a high level, because design matters. The first is ownership, dependency, and coupling. When you look at a system, the reason we have a hard time fitting it into our heads, almost every time, is that the ownership of a particular piece can be nebulous. We might have it in a map here, and as a member of this class over there. If we free it, we might have to remove it from a couple of different places before it's actually freed, or available to be garbage collected, or actually released from the heap.

Then there's the dependency of one thing on another, and the coupling between different objects and functions. You may think, "I do functional programming. None of this applies." It does. Who owns that function is not always clear, especially in large functional projects. What this creates is more complexity. When you don't think about clear ownership, when you don't think about the fact that you might have 600 different package dependencies, where are they used? You don't know? That's complexity, and the coupling that's associated with all of it.

Cohesion is usually thought of as a good thing, and usually it is, but it can have a side effect as well, which is the coupling part. That's the way I think about it, anyway. What this all means is that complexity kills. If something is not simple and you can't reason about it, how in the world can you optimize it, or imagine that the compiler is going to be able to optimize it? Sometimes it can; a lot of times it can't. Complexity strangles most optimizers.

Layers of abstraction are not free. People hate it when I talk about this, but I think that premature abstraction is the root of all evil. I'll say that without any equivocation. When you abstract too quickly, without actually understanding what the abstraction is, you're just creating potential problems and hotspots throughout the code. I'm not saying I never abstract. I'm saying that the layers of abstraction we often jump to really quickly are not free. Polymorphism is not free. Virtual dispatch is not free.
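As a small illustration of the virtual dispatch point, here's a hedged Java sketch; the Shape types are invented for illustration, not from the talk.

interface Shape {
    double area();
}

final class Circle implements Shape {
    final double r;
    Circle(double r) { this.r = r; }
    public double area() { return Math.PI * r * r; }
}

final class Square implements Shape {
    final double s;
    Square(double s) { this.s = s; }
    public double area() { return s * s; }
}

class Totals {
    // If 'shapes' only ever holds one concrete type, this call site stays
    // monomorphic and the JIT can typically inline area(). Mix several
    // implementations through the same site and it becomes megamorphic,
    // so each call tends to pay for a real virtual dispatch.
    static double totalArea(Shape[] shapes) {
        double total = 0.0;
        for (Shape shape : shapes) {
            total += shape.area();
        }
        return total;
    }
}

The point is not to avoid interfaces; it's to know that the abstraction has a price the compiler can only sometimes remove.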

We should also take care to think about what Dijkstra said about abstraction. This is appealing to his authority, but it's a way of thinking about it that makes sense to me: when you abstract, you should be more precise, not more general. How many times have you abstracted to make something more precise? It's hard; our usual concept of abstraction is to hide. That's not what he's saying. He's saying that you should abstract to make something more precise in how it fits in.

Also: manage your resources. I'm sure this goes against most thinking about Java and other higher-level languages with garbage collection, but the idea of ownership has a lot to do with managing your resources. You can't manage something if you're just letting the system handle it whenever. Ownership matters no matter what language you're in. Something owns, or a set of things own, that object for a period of time, or that function, or that particular collection.

When you think about ownership, you should automatically think about lifetime: when is it going to be created, when is it accessed, what's accessing it, and when is it going to go away? Even in garbage-collected languages, you still have this. It's funny: I've written C pretty regularly since '92, and I still do occasionally have memory leaks. Everyone does. But I have actually spent more time in Java tracking down the sources of memory leaks than I ever have in C, in total.

You may think, "That's just memory," but it's not, because there's disks file descriptors, there's sockets. There are all kinds of different resources, database connections, even higher-level concepts that have a lifetime, have an ownership, have all that. When that is simplified, when it's easy to all easier to hold in your head, the complexity goes down, and things get better.

Understand Your Tools

Understand your tools: the operating system, the language, the CPU, the disk, the libraries that you bring in. You should have a good understanding of all of those things. That's a big part of why we have open source. We don't like a big black box where we give it some input and it does something. We'd like to see what's there, to be able to look into it and figure it out, because the more we know it, the better we can use it. By understanding these things, we also often figure out new ways of looking at problems. The complexity of the system can get simpler when we know these things.

The next thing: the compiler is better than you. No matter what you do optimization-wise, if you make things simple, the compiler will find optimizations that you have not even thought about. When I'm doing development, there are a couple of really simple things, which I'll mention here, that I do to give the compiler a better view of things and to enable its optimizations. The compiler is better than you. It can take a global view, and in JITed languages, it can take a profile-guided view, look at the whole program, and optimize certain things out.

The next thing: idioms. Whether they're operating system idioms, language idioms, or disk-usage idioms, like how you stream to and from the disk, idioms matter, because the way that you use something always matters. You want to use it in the most effective manner.

What this is really saying is: be simple. Understand what you're doing, then abstract later, to keep complexity down. Design for composition, design for things to come together. This is actually at the heart of why functional programming is, in a lot of cases, a better solution for certain problems: you can compose these things very easily.

Here's a list of specific things, and I want to go through them, because to me they're very important. The first is counted versus uncounted loops. A counted loop is one like a for loop with a variable that you're incrementing; an uncounted loop is something like a while on some condition, or a do-while. A counted loop has lots of optimizations that a compiler can look at, assume, and apply. There are some for uncounted loops too, and there are places where the compiler will look at an uncounted loop and go, "I can turn that into a counted loop, and then I can apply optimizations to it."

I wouldn't say always use counted loops and never use uncounted ones, because that's just silly. But you should realize that a lot of times the better choice might be a counted loop. Here's an example of a counted loop that you may not think of as one: a for-each loop over a collection. In a lot of languages, those are treated as counted loops. What you end up with is something you can reason about mentally, and the compiler can look at it and go, "I think I can do some more optimizations on this." Like anything, though, you should never shy away from an uncounted loop if it makes the code more readable. If you look at it and think, "I could write this as a counted loop, but then all kinds of additional things come into play," abandon that idea. If it makes sense as an uncounted loop, use an uncounted loop. If it makes sense as a counted loop, use a counted loop.
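Here's a minimal Java sketch of the three shapes; the helper names are invented for illustration.

final class Loops {
    // Counted: the trip count is known up front, which is what lets a
    // compiler consider bounds-check elimination, unrolling, or vectorization.
    static long sumCounted(int[] values) {
        long sum = 0;
        for (int i = 0; i < values.length; i++) {
            sum += values[i];
        }
        return sum;
    }

    // A for-each over an array compiles to the same counted shape.
    static long sumForEach(int[] values) {
        long sum = 0;
        for (int v : values) {
            sum += v;
        }
        return sum;
    }

    // Uncounted: the trip count depends on the data, so fewer assumptions
    // apply. Perfectly fine when it reads better; don't force it.
    static int firstNegativeIndex(int[] values) {
        int i = 0;
        while (i < values.length && values[i] >= 0) {
            i++;
        }
        return i;
    }
}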

We also have, from the CPU up, the idea of predictable branches. Speculation is at the heart of our chips. Our CPUs have been betting on speculation for the last 15 years, speculating about what is going to happen next. In other words, we have little time machines. As long as the prediction is right, things go really fast. When the prediction is wrong, things slow down.

If you think about the flow in your application, and the CPU sees it as a set of predictable branches, it will go fast. Additionally, if the branches are predictable, the profile-guided nature of things like JITs will see that and optimize those paths even further. Again, no absolutes. You should never say, "Always make your branches predictable." It's not going to work, and you're going to have a mess on your hands. Think of it instead as, "Can I structure this so that the branches that are taken or not taken are more predictable?" Sometimes it's possible, sometimes it's not. It only matters when it makes sense for us conceptually. Never force it; that doesn't make any sense.
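Here's a hedged Java sketch of the idea; the threshold workload is invented for illustration, and the actual speed difference depends on the hardware's predictor.

import java.util.Arrays;

public final class Branches {
    static long sumAboveThreshold(int[] values, int threshold) {
        long sum = 0;
        for (int v : values) {
            if (v >= threshold) { // predictable on sorted data, random otherwise
                sum += v;
            }
        }
        return sum;
    }

    public static void main(String[] args) {
        int[] random = new java.util.Random(42).ints(1_000_000, 0, 256).toArray();
        int[] sorted = random.clone();
        Arrays.sort(sorted);
        // Same data, same result. But the sorted pass gives the predictor
        // one long run of not-taken followed by one long run of taken,
        // while the random pass makes it guess on every iteration.
        System.out.println(sumAboveThreshold(random, 128));
        System.out.println(sumAboveThreshold(sorted, 128));
    }
}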

Simple conditionals. How many of you have written a conditional, an if or a while, with more than four predicates in it? Everyone has. Logic is full of this stuff; you can't get away from it. The thing is, sometimes you can simplify them, and while it may take a little time to think about how, it always pays off.

I'll give you an example. I work a lot with Martin Thompson, and we review each other's code on pretty much a daily basis and ask each other questions. I once had a conditional with two predicates, but it was a dense pair of predicates. It was not a simple thing. When you looked at it, it spoke to you; in other words, you could see what was happening, but you had to think about it for a moment. I had simplified it by utilizing short-circuiting; before that, it was even more complex. He looked at it and said, "I don't understand that. Can you make it simpler?" I thought, "I spent so much time on this, and I can't think how to make it simpler." I went off, scratched my head for a moment, and realized that I actually didn't need that conditional at all.

When I looked at it, I thought, "Wait a minute, what this is saying is if true and false. Wait a minute, that's just false." Then I thought about why I had written it, why it had stuck around, and why I'd spent time on it. It came from a fundamental decision I had made earlier that meant I would never get into this condition; I'd already checked for it well before. Spending some time looking at your conditionals just to see if they can be simpler can, a lot of times, reduce the complexity of the code.
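As a sketch of that kind of simplification, with invented predicates rather than the actual code from the story:

import java.util.Queue;

final class Drain {
    // Before: dense, and partly redundant given checks made upstream.
    static void maybeProcess(Queue<Runnable> queue, int pending, boolean draining) {
        if (queue != null && !queue.isEmpty() && (pending > 0 || !draining)) {
            queue.poll().run();
        }
    }

    // After: if an earlier check already guarantees the queue is non-null
    // here, a predicate that can only ever be true drops out entirely.
    static void maybeProcessSimplified(Queue<Runnable> queue, int pending, boolean draining) {
        if (!queue.isEmpty() && (pending > 0 || !draining)) {
            queue.poll().run();
        }
    }
}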

Languages that provide stack allocation are great. In languages that don't provide stack allocation, I wish it were there. Stack allocation is not simply what value types or escape analysis give you; it's actually more specific than that. How many of you use thread-local storage a lot? Ok, a few. Instead of it being thread-local, why isn't it on the stack? The stack is thread-local. Does it make sense to move it there? Sometimes it does, sometimes it doesn't, as with everything, but I will say this: a lot of times, what you want on the stack is something that is more thread-local, and in languages that don't have stack allocation, it feels odd not to be able to put it there.
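Here's a hedged Java sketch of the thread-local versus stack contrast just described; the checksum workload is invented for illustration.

final class Checksums {
    // Thread-local flavor: the scratch state survives between calls and
    // lives on the heap. Who owns it, and when does it go away?
    private static final ThreadLocal<long[]> SCRATCH =
        ThreadLocal.withInitial(() -> new long[1]);

    static long viaThreadLocal(int[] data) {
        long[] acc = SCRATCH.get();
        acc[0] = 0;
        for (int d : data) acc[0] += d;
        return acc[0];
    }

    // Stack flavor: a primitive local. Clear ownership, clear lifetime,
    // and nothing for the garbage collector to trace.
    static long viaStack(int[] data) {
        long acc = 0;
        for (int d : data) acc += d;
        return acc;
    }
}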

I've done a lot of different things with just primitives, primitive ints and longs, instead of making a class around them in something like Java, so that I had them on the stack as opposed to somewhere else. Stack allocation is itself a tremendously powerful abstraction and a very useful tool. To go with favoring one thing over never and always: consider the way our CPUs, our operating systems, and our prefetchers work, whether they're virtual memory prefetchers, memory prefetchers, or any of the other load and store structures we have in our CPUs. By and large, the linked structures we all learned about in computer science, like lists, are harder for them to predict.

Our operating systems, especially the virtual memory subsystems, as well as disks and CPUs, have figured out a lot of the normal, common access patterns, and they can actually be very good even with lists. What's even better is to make it simple for them. Scanning through an array is incredibly fast. Jumping randomly around memory is not so fast. Data structures that are more array-based tend, at this moment, on today's hardware and with today's operating systems, to do much better than structures that are randomized in memory, structures that have what we call data-dependent loads attached to them, where you dereference and have to go to a different location to find your data, as opposed to having it right there.

Will that always be the case? Most of this is built on speculation, and Spectre and Meltdown, and all the things that followed from them, are showing that speculation is, in essence, a security hazard. Our chips are at the point where they're very aggressively flushing caches, and they're not speculating as far, because it leads to issues. But if I had to bet, the way we access disks will stay the same, in other words streaming, like an array, and the way we access data structures will probably stay as well. Favor arrays and those types of structures over list-based structures. It doesn't always work, because there are other things to consider, especially the big-O characteristics of larger data structures, but it's a good place to start, and then carefully work from there.
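Here's a minimal Java sketch of the contrast; both traversals do the same arithmetic, but the array is one predictable, prefetch-friendly stream of memory, while each list node is a data-dependent load: you must dereference one node to learn where the next one lives.

import java.util.LinkedList;
import java.util.List;

public final class Traversal {
    static long sumArray(int[] values) {
        long sum = 0;
        for (int v : values) sum += v; // sequential scan, prefetcher-friendly
        return sum;
    }

    static long sumLinked(List<Integer> values) {
        long sum = 0;
        for (int v : values) sum += v; // pointer chase plus unboxing per element
        return sum;
    }

    public static void main(String[] args) {
        int n = 1_000_000;
        int[] array = new int[n];
        List<Integer> linked = new LinkedList<>();
        for (int i = 0; i < n; i++) { array[i] = i; linked.add(i); }
        System.out.println(sumArray(array));
        System.out.println(sumLinked(linked));
    }
}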

Then, primitive data structures. Especially in languages that box integers, longs, and the like, primitive data structures can be very efficient compared to boxing. Think about primitive data structures and use them. When I talk about primitive data structures, I'm talking literally about things like an integer-keyed hash map, or an integer-based or long-based set.

It goes even further when both the key and the value are primitive. Things like ints-to-ints, ints-to-longs, or longs-to-longs hash maps can be much more efficient still. When you can distill your problem down to "this number maps to this number," that's actually pretty good, because now you can take advantage of a lot of other optimizations, and it's simpler. We can hold that in our heads pretty well.
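Here's a hedged sketch of what such a structure can look like; libraries like Agrona ship production-quality versions of this idea. This toy uses two parallel int arrays with linear probing, no boxing, and no per-entry node objects, and for brevity it has a fixed power-of-two capacity, no resizing, no removal, reserves zero as the empty-key marker, and assumes it is never filled completely.

final class IntIntMap {
    private final int[] keys;
    private final int[] values;
    private final int mask;
    private final int missingValue;

    IntIntMap(int capacityPowerOfTwo, int missingValue) {
        this.keys = new int[capacityPowerOfTwo];
        this.values = new int[capacityPowerOfTwo];
        this.mask = capacityPowerOfTwo - 1;
        this.missingValue = missingValue;
    }

    void put(int key, int value) {
        if (key == 0) throw new IllegalArgumentException("0 is reserved");
        int index = (key * 0x9E3779B9) & mask; // cheap integer hash
        while (keys[index] != 0 && keys[index] != key) {
            index = (index + 1) & mask; // linear probe
        }
        keys[index] = key;
        values[index] = value;
    }

    int get(int key) {
        int index = (key * 0x9E3779B9) & mask;
        while (keys[index] != 0) {
            if (keys[index] == key) return values[index];
            index = (index + 1) & mask;
        }
        return missingValue;
    }
}

Everything is a scan over flat int arrays: no Integer boxes, no node allocations, and probing walks adjacent memory rather than chasing pointers.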

Everyday Efficiencies

Everyday efficiencies: go back and be lazy enough that you don't want to settle for a complicated solution that may take you a long time to implement. Try to simplify it as much as you can. Don't reward bad ideas. When you have a bad idea, admit it, move on, and find a better one. That's the same as the lazy part: don't center on that idea and obsess about it; obsess about the next one.

Don't be naive. Absolutes like never and always only delay you from moving to the better idea that may fit better. It all starts with design, because that's what's important. If you can't hold the whole design in your head, how in the world can a compiler compile it, optimize it, and execute it fast? Those are the things that pay off.

Questions and Answers

Participant 1: Can you give an example of abstraction that's more precise and not general?

Montgomery: Let's think of a disk. If we were to think about a disk object that we could write to and read from, the abstraction we might pick might have a pointer to its current location. Why would we keep that? Does it make sense to insist that a position be kept of where you last wrote or read? What's the scope of that position?

I would consider a better, more precise abstraction of writing to a disk to be a set of functions that take the disk to write to, the offset to write at, the data to write, and its length. Conversely, the read would take the disk to read from, the offset to start at, the length to read, and where to store the result. That, to me, is a more precise abstraction that extends to more types. You might think of it as more generic, but it's not, because you're being more precise about the external pieces of it, looking at it from a different perspective. It covers more, but it also means that what's underneath it is actually simpler. That, to me, is an example of a more precise abstraction.
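Here's a hedged Java sketch of the two abstractions being contrasted; the interface names and signatures are illustrative, not from the talk.

import java.io.IOException;

// Stateful flavor: a hidden current position. Every caller now shares and
// must reason about that cursor. What is its scope after a read?
interface PositionedDisk {
    int read(byte[] into, int length) throws IOException;   // advances position
    void write(byte[] from, int length) throws IOException; // advances position
    void seek(long position) throws IOException;
}

// Precise flavor: the offset is an explicit parameter, so each call is
// self-contained. There is no shared cursor to own, guard, or get wrong,
// and the same interface can cover files, devices, and mapped regions.
interface Disk {
    int read(long offset, byte[] into, int intoOffset, int length) throws IOException;
    int write(long offset, byte[] from, int fromOffset, int length) throws IOException;
}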

Participant 2: On one of your first slides, I was surprised to see network energy consumption be as high as the data centers'. Do you have any knowledge to attribute that more to the sheer distance the energy has to cover, or to protocol inefficiencies?

Montgomery: None of the articles that I've read have tried to break down exactly where the inefficiencies are. It's just data center consumption; I think that's as far as people have gone. I actually would like to see things broken down into common uses of frameworks. Are they efficient or not? How much might they be contributing? I haven't seen anything very definitive on that, and I would love to see it. I think it would be enlightening to see what's business-level logic, what's frameworks, and what's the operating system. We have the ability to break that down, but I haven't seen anything like it.

If I had to guess, I would put business logic and frameworks at about 75%, and the operating system at 25%. There are lots of things that are fairly efficient, where they're really bound by what the operating system is doing. Then there are other things which are just plain inefficient. I don't know where the split would land, but that would be my starting guess, and I would probably be surprised by the data no matter where it ended up.

Participant 3: What's your take on benchmarking and performance testing as applied to everyday efficiencies?

Montgomery: Microbenchmarking is incredibly easy to do wrong. When you take a microbenchmark, you're taking things out of their natural environment and putting them into a very select harness where you want to test a simple hypothesis. I never think you should immediately jump into benchmarking, especially a microbenchmark, unless you have a really good clue from some other piece of data that that's what you need to focus on.

The problem is that you might write a microbenchmark that shows something is incredibly efficient, but when you put it into a big system, it just breaks down. It's not efficient. Our systems interact with one another in odd and unusual ways that we didn't foresee.

What I do believe is this: once you have some evidence, you can set up your workbench, pull out the screwdrivers and hammers and wrenches, take everything apart, really look at it, and make little tweaks here and there because you're seeing inefficiencies. That exercise is extremely helpful. But if you've done that, and then you put it back into the system and things get worse, which often happens, all you've done is move the needle somewhere else, and now something else is the problem.

Honestly, that's what I spend most of my day doing: moving a problem from here to somewhere else and trying to figure out why it's a problem there now. I think microbenchmarks are good, but you have to know that you're going into the right area, and you have to know that you're taking the code out of its ecosystem and using it in a very controlled way.
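For completeness, here's a minimal sketch of what a controlled microbenchmark looks like with JMH, the OpenJDK microbenchmark harness, in the spirit of the caveats above. The summing workload is invented for illustration, and, per the answer, its numbers describe this harness, not your production system.

import java.util.concurrent.TimeUnit;
import org.openjdk.jmh.annotations.*;

@State(Scope.Thread)
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.NANOSECONDS)
public class SumBenchmark {
    int[] values;

    @Setup
    public void setup() {
        // Fixed seed so every run measures the same data.
        values = new java.util.Random(42).ints(100_000).toArray();
    }

    @Benchmark
    public long sum() {
        long sum = 0;
        for (int v : values) {
            sum += v; // returning the result keeps the loop from being eliminated
        }
        return sum;
    }
}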

 


 

Recorded at:

Nov 04, 2019
