
If You Don’t Know Where You’re Going, It Doesn’t Matter How Fast You Get There


Summary

Jez Humble and Nicole Forsgren explain the importance of knowing how (and what) to measure in order to focus on what’s important and communicate progress to peers, leaders, and stakeholders. Great outcomes don’t realize themselves, after all, and having the right metrics provides the data needed to keep getting better at building, delivering, and operating software systems.

Bio

Jez Humble is co-author of “Accelerate”, “The DevOps Handbook”, “Lean Enterprise”, and “Continuous Delivery”. He has spent his career tinkering with code, infrastructure, and product development in companies of varying sizes. Nicole Forsgren is co-founder, CEO & Chief Scientist at DORA and co-author of the book “Accelerate”. She is best known for her work on the largest DevOps studies to date.

About the conference

Software is changing the world. QCon empowers software development by facilitating the spread of knowledge and innovation in the developer community. A practitioner-driven conference, QCon is designed for technical team leads, architects, engineering directors, and project managers who influence innovation in their teams.

Transcript

Nicole Forsgren: How's everybody doing this morning? We're so excited and also tired. We need so much more coffee. I feel you. So we're talking about, if you don't know where you're going, it doesn't matter how fast you get there, because we've all heard of this crazy DevOps thing, right? At least our bosses have? So everyone's on this crazy journey. Everyone wants to get better, right? Who here is working in an organization that has this, like, "We've got an initiative, and we're gonna get better. Go."

And then nobody has any idea what it is we're doing. Really it's, like, dudes in suits are, like, "Go." Usually dudes in suits, because the ladies always know what's going on. Also, by the way, I'm going to do a quick shout out: assume all women are technical and are capable of breathing fire, just in case. Anyway, so that's what we're going to talk about, because if you don't know where you're going on this amazing DevOps initiative, it doesn't matter how fast you get there, which sort of sums up the last five … It's been five years. Our little baby is, like, going to school now. This kind of sums up the last five years of work, right?

Jez Humble: Yes. Kindergarten, at least.

Forsgren: I will not cry as it goes off to school. So that's what we're going to talk about today.

Humble: Yes. And Randy introduced Nicole's book, but also, Nicole has quite a lot of peer-reviewed papers to her name as well, which aren't up there. So I just thought I'd advertise them.

Forsgren: Yes. Don't read them. Academic English is not fun to read. I'll forgive you, Randy, for not printing all of those out and bringing them up on stage.

The Enterprise

Humble: So this came into being because all of us, perhaps, have worked in … Who here works in a company of more than 1,000 people? That's a lot of you. So for those of you who haven't worked in a company like that, I'm going to briefly describe how it works. So what you have is a golf-playing class on the left of the diagram, you've got a bunch of engineers in the middle, and then you've got IT operations on the right. Martin Fowler assures me this diagram at the top right is valid UML for the big ball of mud running in production. We have a machine that goes, "Bing" to let you know that everything's working. And at some point, you're going to press play on this diagram so people can do some work. And so what happens is the unit of change in the enterprise is the project. And so someone comes off the golf course after a particularly fine round of golf with an idea for an exciting new product that we're going to create.

And after many months of budgeting, and analysis, and estimation, and decomposition, some enormous plan lands on some project manager's desk, and the team goes off and spends months or maybe years building that thing before tossing it over the wall into production, where we make sure that it actually works, and the team that built that thing disappears and goes off and does something completely different instead. And that thing that they built lives forever in production.

So there are many problems with this model, not least of which is the fact it takes so long to get anything done. And, of course, the solution to this is the Agile Transformation, "Yay. Let's go Agile." Everybody goes on a two-day Scrum course, and we're now taking orders from management standing up instead of sitting down, and that huge backlog of work that we can't ever complete is now prioritized and estimated. And now we're agile, yay!

How does the rest of the organization react to the exciting news of the Agile Transformation? Are they filled with joy and delight? Not normally the case. The business aren't very happy about this because it means less golf and more time hanging out with the engineers. I mean, the whole point of agile is better collaboration between the business and development. Engineers aren't very happy about this because now they've got business people coming and talking to them more often, and the whole reason they became engineers is so they wouldn't have to talk to other people. Operations isn't very happy about this either because instead of getting some nasty piece of crap coming over the wall once a year, now stuff's coming over the wall all the time and IT operations is, like, "Please, stop the madness." And the engineer is like, "What do you mean? We're doing TDD. We have nice solid design principles, nice encapsulated, loosely coupled systems." And IT operations is like, "That's fabulous. Well done. Shame it doesn't actually work."

The logical and natural reaction to this is to create a barrier, and that barrier is called the Change Management Process, and the job of the Change Management Process, of course, is to make sure that nothing ever changes. So who has friends maybe who has worked in organizations a bit like this? Quite a lot of you. Don't put your hand up, Nicole, you work with me.

Forsgren: I have friends.

Humble: So can we do better? The answer is yes, we can do better. I love to show this slide. This is from 2011, so they're probably two orders of magnitude, at least one order of magnitude, faster than this now. Who's seen this slide before? Not enough of you. So I love this slide because they're deploying to production on average every 11.6 seconds, up to 1,079 deployments in an hour, with on average 10,000 hosts receiving each deployment, and up to 30,000 hosts receiving a single deployment.

And I like to point out Amazon is highly regulated. They are subject to Sarbanes-Oxley. They process the occasional credit card transaction, which means they have to follow PCI DSS. And so Nicole, and I, and Gene, and our partners have been investigating how to build high performing organizations for the last five years. I'm going to hand over to Nicole to talk about that program.

Predictive Analysis

Forsgren: So as a quick side note, when we study this, we want to understand what it actually means to do this. And we want to understand it in predictive ways because as Randy mentioned, I was a software engineer for years. And I wanted to make things better, and I would go to management, and I would say, "Let's do this thing. I hear this thing works. It worked in this other team." And I would always hear, "Oh, no, no, no, but that won't work here. But we're not that team." And so one of the reasons I went and got a Ph.D. is I wanted to understand which types of things would be generally predictive for almost all teams across almost all industries.

Now, a quick note about our research and our predictive analysis. When we do this research, what we use is inferential predictive analysis, and for that to work, one of three conditions has to be met. So keep your skeptic hat on when you look at any other research that you see in the industry. One, you have to be using a randomized experimental design. Ours isn't randomized or experimental. Two, it has to be longitudinal; it has to happen over time. Although we've done this for five years, we don't have linked longitudinal designs. If you want, you can come find us and we can chat about that later. The third is it needs to be a theory-based design. Now, we do follow this, which basically means I have to have some idea a priori. So ahead of time, there has to be a reason to believe that these links exist, right? I have to have a good hypothesis suggesting that A causes B. Otherwise, I'm just going fishing in the data and finding something crazy.

So anytime you read the book or any of the DORA State of DevOps Reports, when we say it's prediction, these things have all held. Otherwise, we'll only state correlations. Now, you may notice this as well in your own data. Some of my favorite examples come from spurious correlations. I do love the work of Tyler Vigen. Did you know, for example, that per capita cheese consumption is very highly correlated with the number of people who died by becoming tangled in their bedsheets? So definitely keep your skeptic hat on, right?

Now, I'll mention this also for your data in your own systems. We'll run correlations. We may even run some predictions, but if we don't have an idea ahead of time for believing that A causes B, and then it shows up in the data, we'll tell the story in our heads. We'll link that data. And because we're working with complex, interrelated systems, of course it makes sense that A causes B. The cheese example we know is ridiculous because it's obviously ridiculous.

Also, did you know that the number of movies Nicolas Cage appears in each year is highly correlated with pool drownings? We know it's ridiculous. The problem with the data in our systems is that it makes sense in our heads. So we need to make sure that we don't just believe A causes B because the data shows us A correlates with B; it could actually be that C causes B, or D causes B. So that's kind of some of the background for at least how we run some of our analysis.
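
To make the correlation-versus-causation point concrete, here is a tiny, purely illustrative sketch (not from the talk, and with invented numbers): two series that merely trend in the same direction come out almost perfectly correlated, even though nobody believes one causes the other.

```python
# Illustrative only: the figures below are invented, not real statistics.
import numpy as np

cheese_kg_per_capita = np.array([14.9, 15.2, 15.4, 15.7, 16.0, 16.3, 16.6])  # made up
bedsheet_deaths = np.array([580, 600, 615, 640, 665, 690, 710])               # made up

r = np.corrcoef(cheese_kg_per_capita, bedsheet_deaths)[0, 1]
print(f"Pearson r = {r:.3f}")  # ~0.99, yet nobody thinks cheese causes bedsheet tangles
```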

Firmographics

Now, quick spoiler alert. This is one pull from the firmographics for our book this year. When we take a look at some of the demographics, we cover lots of industries. We primarily pull from things like technology, but then also financial services, retail, telecom, education, government, healthcare, and pharma, and also very large organizations. I'll mention that this year, in the 2018 report, I did an additional level of analysis to look and see if industry mattered, because so often … is anyone here working in a highly regulated company? Yes. So many times people come back to me and they're like, "Nicole, I have this really important question to ask you," so I'll get some time on my calendar. "But I work in" - fill in a highly regulated field - "I need you to run a new research study. I need you to run a state of telecom. I need you to run a state of banking. I need you to run a state of healthcare and pharma, because there's no way we can make this work." But it does.

This year, I ran additional data analysis, like I mentioned, and I controlled for all industries. What we find is that industry doesn't matter. As Jez mentioned, Amazon could do this, and they're subject to compliance as well, right? They have to hold to the same financial regulations that anyone else does. It can be done. You just have to follow a few extra rules.

Software Delivery as a Competitive Advantage

Now, we've found over several years, starting in the 2014 report, all the way through the 2018 report now, software delivery is a competitive advantage. Firms with high performing IT organizations are twice as likely to exceed profitability, market share, and productivity goals. Now, we've also found that this holds for not just commercial things like, you know, what we just said, market share, but also non-commercial goals because, heaven forbid, we care about more than just money. Now, this extends beyond just not-for-profit organizations, right? Because even for-profit organizations have broader goals, things like effectiveness, efficiency, customer satisfaction, and we found this for the last two years.

Now, how is it that we measure IT performance, technology performance, software delivery performance? We're talking about a couple of different things here. And in particular, if we think back to the story that Jez opened with, we're talking about both speed and stability, right? Because, as developers, we want to get as much change into the system as possible. And as operations staff, we want to stop changes as much as possible. So when we measure this, we measure the two things in tension, because we want to be looking at global outcomes that are important for the whole company or the whole organization. So our throughput measures are lead time for changes, from code commit to code deploy, and deploy frequency, how often we push those changes. We also look at our stability measures, right? So time to restore service and change fail rate. Now, one thing that's important to note - I know, Jez, it's your favorite thing - I'll let you break this down. He has an English accent, so it's more fun for him to say this.

Humble: People believe me even when I'm completely lying through my teeth. It's brilliant.

Forsgren: I mean, that's a polite way to say it.

Humble: But this is actually true. As an industry, we're so used to thinking about these two things as a trade-off that if you're going to go fast, you're going to break things. That's not the case. What we find is that actually, high performers are doing better at both speed and stability.

Forsgren: Yes. And I like to have him say it also because ITIL for years has whispered in back alleys that you have to make trade-offs, and so when the dude with the English accent says it, they believe him, because ITIL …

Humble: Although ITIL comes from England, as well. Sorry.

2018 Performance Benchmarks

Forsgren: Yes. So here's what we see. I know this is kind of an eye chart, we will post these slides later. But here's what we see for the 2018 data. Take a look at the highest performers. Again, they're optimizing on both throughput and stability. They're able to deploy multiple times a day on demand. Their lead time for changes is less than an hour. They can get these changes through their pipeline quickly and easily. Time to restore service is also less than an hour. And then that change fail rate is between 0% and 15%.

I'd like to point this out because look at the lead time for changes and time to restore service. So when you have a service incident, anything at all, like an unplanned outage, a service impairment, if anything goes down and you need to push code, you can push code in the same amount of time in an emergency situation that you push code in a normal situation.

When we think about some of our organizations, our infrastructure right now, when we push code in emergency situations, we often end up skipping things like test, which is usually not a great idea. But the highest performers have found ways to be so efficient, and we can talk about how to get there, that their emergency change process looks just like their regular change process. It's just as safe, it's just as stable, and it's just as reliable.

Now, if we compare that to the low performers, they deploy between once a week and once a month. Lead time for changes is between one month and six months. And then their time to restore service is between a week and a month, so you're really scrambling. And their change fail rate is between 46% and 60%. There's a big difference there.
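
For readers who want to see what these four measures look like as numbers rather than survey buckets, here is a minimal sketch - not DORA's instrument, which is survey-based, and using invented records - of computing lead time, deploy frequency, change fail rate, and time to restore from a team's own deployment data.

```python
# A sketch with invented deployment records; the thresholds in the talk come from survey data.
from datetime import datetime, timedelta
from statistics import median

# Each record: (commit_time, deploy_time, caused_failure, time_to_restore)
deploys = [
    (datetime(2018, 11, 1, 9, 0),  datetime(2018, 11, 1, 9, 40),  False, None),
    (datetime(2018, 11, 1, 13, 0), datetime(2018, 11, 1, 13, 35), True,  timedelta(minutes=25)),
    (datetime(2018, 11, 2, 10, 0), datetime(2018, 11, 2, 10, 30), False, None),
]

lead_times = [deploy - commit for commit, deploy, _, _ in deploys]
days_spanned = (deploys[-1][1].date() - deploys[0][1].date()).days + 1

print("Median lead time for changes:", median(lead_times))
print("Deploy frequency (per day):  ", len(deploys) / days_spanned)
print("Change fail rate:            ", sum(failed for _, _, failed, _ in deploys) / len(deploys))
print("Median time to restore:      ", median(ttr for _, _, _, ttr in deploys if ttr))
```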

Elite Performers

Now, one thing I do want to call out is this elite performance group. So the data this year showed the highest performers are kind of a subset of this high-performance group, and, again, they've been optimizing, and we've seen a high-performance group optimizing on speed and stability for five years in a row now. But take a look at this high-performance group; it's 48% of the data that we collected this year. And this should be absolutely encouraging, right? Anyone can achieve this. In the previous years, we've been in the teens, right? The percentages hovered in the teens. This year it's 48%. The group is growing. High performance and excellence is absolutely possible. It takes some work, and it takes some execution, right? We all know that tech is hard, but it's also fun and it's challenging. Anyone can get there. And I think that was one of my favorite findings this year.

Availability

Now, moving on to availability. We added another measure this year to really focus on the end user and the customer and what it means to deliver services. Do you want to take this one?

Humble: Sure. So again, extending it beyond release into actual production. So you've got the whole life cycle here from development through to actually operating services in production. And again, what we find, is this is not a trade-off. High performers in software delivery and release are also high performers in availability, and elite performers are 3.5 times more likely to have strong availability practices, where that means not just they're keeping their services up and highly available, but they're also able to make promises about the availability of their service to their users as well.

So obviously, you want to be a high performer, or preferably an elite performer. And so after finding out, firstly, that software delivery performance matters to your organizational outcomes, we wanted to find out what actually predicts high performing teams. So you won't be able to read this from the back, but Nicole is going to go through some of the key …

Capabilities That Drive High Performance

Forsgren: All right. So this summarizes the first four years of research, and this is in the book. Basically, anytime you see an arrow, that's prediction, and boxes are constructs, things that we've measured. So here are some of the important points to pull out. Software delivery performance, that's the speed and stability metric you can see there. Can you see software delivery performance? It's kind of that middle to the right column. Software delivery performance predicts organizational performance, the commercial measures, and the non-commercial measures. So that's where we're kind of centered.

If we move left from there, we can see, if we want to improve this, if we want to make things better for us, and if we want to understand how to go from low performance to medium performance, to high performance, what types of things can help us get there? Well, the way to read that is, let's invest in things that are predictive of that, right? Let's improve the things that help us get better. Those things will look like lean product development. They'll also look like lean management, and they'll also look like, at the very bottom, those technical practices, which by the way contribute to continuous delivery. At the very top, we also have culture, and we'll talk about that. We'll talk about all of these in a minute, but just so you can kind of get a quick overview. And at the very left is transformational leadership. And I'm sure we've all worked somewhere where we have a great leader. And these great leaders make everything better, right? They seem to make our work stretch farther, do better. Or we have these leaders that just suck the life out of everything. So that's how you can kind of look at this diagram.

So we can see that transformational leadership underlies a bunch of the work. These are all the paths that we've tested through the 2017 report: lean product development, lean management, and technical practices drive software delivery performance. One thing that's interesting about lean product development, notice that it drives software delivery performance as well as organizational performance directly. Good product development practices drive the bottom line directly.

Another thing that's interesting: software delivery performance has an arrow back to lean product development. The faster we can deliver and turn around code quickly and safely, the better we can iterate on our product. Fast experimentation, that fast feedback. Another thing I really, really love: take a look at continuous delivery there on the bottom. See the other arrows coming out of it: it decreases burnout, decreases deployment pain, decreases rework. It makes our work better. We'll dig into this in a minute. That's our quick preview and outline.

Humble: Just one more thing that I want us to point out quickly is culture, which we'll come to a bit later on. But if you want to … culture predicts software delivery performance and organizational performance. If you want to know how to improve culture, those practices on the left, product development, management, and technical practices also predict culture. So by investing in these capabilities, you also improve your organizational culture.

Forsgren: And I'm sure we've seen that, right? If we change the way we do our work, it changes our day, it changes the way we interact with people, which is funny. I'm going to get to this later, but people are always like, "How do I change my culture?" Make smart investments in the tech.

Technical Practices

Forsgren: So that covers through 2017, but if we catch up one step, at least into some technical practices, this is 2018's report. Now, the way to read this, again: lines and arrows are predictive, anything that you see in bold is new for 2018, and anything that is not in bold is a revalidation of previous years' research. So we do revalidate previous years' research, and what we found is additions to continuous delivery, things that are predictive of it: continuous testing, monitoring and observability, the database - Baron is going to be talking to us about DevOps and the database later - and security. We also see cloud infrastructure predicting software delivery performance and availability. And again, we validate that continuous delivery decreases deployment pain and burnout, and, as Jez has pointed out, Westrum, that organizational culture measure, contributes to SDO performance, as well as organizational performance.

Humble: One thing we particularly wanted to discuss was one of the things from previous years, loosely coupled architecture. So you can see a bunch of things on this diagram on the left are the regular continuous delivery stuff, like deployment and test automation, continuous integration, version control. But architecture is something that people spend a lot of time thinking about, and often they think about tools, like, "Are we using Docker or Kubernetes? Are we implementing a microservices architecture?" And not to say those things can't be very important; they can be. But what we've discovered is that more important are the outcomes that those things enable. So I think it was 2017, correct me if I'm wrong …

Forsgren: I think so.

Humble: … that we did this particular thing. We found that architectural outcomes were the strongest predictor of continuous delivery in the second DevOps report.

Forsgren: Disclaimer, disclaimer. [Inaudible]. It's not always the most important thing, but it had a strong beta. Sorry, stats hat.

Humble: No, thank you. This is one of those things where accuracy is really important in the words we use, and I'm not a trained statistician. So thank you, Nicole.

Key Finding: Architectural Outcomes

The architectural outcomes that we care about are whether we can answer yes to these five questions. Can my team make large-scale changes to the design of its system without the permission of someone outside the team, or depending on other teams? Can my team complete its work without needing fine-grained communication and coordination with people outside the team? Can my team deploy and release its product or service on demand, independently of other services the product or service depends upon? Can my team do most of its testing on demand, without requiring an integrated test environment? And then, crucially, can my team perform deployments during normal business hours with negligible downtime? This is what those tools and practices enable, and if you implement the practices and the tools but you don't achieve these outcomes, that's kind of an enormous waste of money. Equally, you can be using mainframes and achieve these outcomes, and we've seen the teams who do it.

Forsgren: Say it again, Jez.

Humble: You can be using mainframes and achieve these outcomes, and there's a cool video on continuousdelivery.com in the case studies of someone running automated tests and batch jobs against their mainframe systems and validating it. So this is the important stuff.

Forsgren: Yes. And we actually found in 2016 that the type of technology stack you're using is not correlated with performance, the performance profile. What's most important is these architectural outcomes.
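
As a concrete illustration of the fourth question above - testing on demand without an integrated test environment - here is a minimal sketch using a test double in place of a downstream dependency, so the team's own tests run anywhere. The OrderService and PaymentGateway names are hypothetical, invented for the example rather than taken from the talk.

```python
# OrderService and PaymentGateway are hypothetical names for this example.
import unittest
from unittest import mock


class OrderService:
    def __init__(self, payment_gateway):
        self.payment_gateway = payment_gateway

    def place_order(self, amount):
        # Only mark the order as placed if the (external) payment succeeds.
        return "PLACED" if self.payment_gateway.charge(amount) else "REJECTED"


class OrderServiceTest(unittest.TestCase):
    def test_rejects_order_when_payment_fails(self):
        gateway = mock.Mock()                 # stand-in for the real payment system
        gateway.charge.return_value = False   # no integrated environment required
        self.assertEqual(OrderService(gateway).place_order(100), "REJECTED")


if __name__ == "__main__":
    unittest.main()
```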

Key Finding: Doing Cloud Right

Humble: Moving on to cloud …

Forsgren: Everybody, did I mention I was a professor? It's class participation time. Who here is in the cloud? Hands up. Keep them up. This is your exercise for the day because we're at a conference. I got nervous for a minute. We sell a product and if it's not in the cloud, we're in trouble.

Humble: I didn't get any downtime, just with it.

Forsgren: Who here in your cloud has on-demand self-service? If you have to fill out a ticket that a human has to touch, you do not get credit for this.

Humble: Yes. You’ve got to be able to self-service VMs as a developer without having to create a ticket or talk to someone.

Forsgren: Also seriously, are none of you in the cloud? Are you not putting your hands up? I'm taking this personally. Fine. Don't play. Number two, broad network access. Can you access the cloud from all sorts of different devices? If you cannot, the hand has to go down. I'm real proud of people right now. Next up, resource pooling.

Humble: So resource pooling means that multiple virtual devices are using one physical device, so physical devices are shared by multiple tenants.

Forsgren: Next up, rapid elasticity.

Humble: So that's the illusion of infinite resources …

Forsgren: It's like magic. It's my favorite one. Last one, measured service. You only pay for what you use. Everyone, a round of applause for the people actually in the cloud. By the way, this all comes from NIST, the National Institute of Standards and Technology. I did not make this up. Here's what we find: only 22% of teams that say they're in the cloud are actually in the cloud. And it matters. I work with so many executives that are, like, "Oh, we went to the cloud, and I didn't see any benefits." You think? Because you're not in the cloud. Someone decided they were going to move to the cloud and then do one of these and get a gold star on the forehead and collect a fat bonus check. It doesn't count. Remember earlier when I said that high performance is possible, you just have to execute? Here's a really great example of what that looks like.
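
As a sketch of what "on-demand self-service" can look like in practice, here is a developer (or a pipeline) provisioning a VM through an API call instead of filing a ticket. This assumes AWS with boto3 and credentials already configured; the AMI ID is a placeholder, and none of it comes from the talk itself.

```python
# Assumes AWS + boto3 with credentials configured; the AMI ID below is a placeholder.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder image
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
)
print("Launched:", response["Instances"][0]["InstanceId"])
```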

Humble: You have got to change your practices. You've got to change the whole way that you treat your operational infrastructure. You can't just buy a cloud and then treat it like a data center. It doesn't work. And for people who are in regulated environments, we actually published a paper last year on how to do cloud infrastructure in the federal government. I used to work at 18F. We helped a bunch of federal agencies adopt cloud in a modern way. And as I like to say, if we can do it in the federal government, we can do it anywhere. So we wrote that up in this white paper that you can find on the research page of our website.

Monitoring and Observability

Forsgren: Monitoring and observability. We found that this was also predictive. So the way we defined it for purposes of our research: monitoring is a tool or technical solution that lets us watch and understand what's happening, with predefined sets of metrics and logs. Observability is a tool or technical solution that lets us debug reactively, right? Looking at properties and patterns that are not defined in advance. Right now this is a pretty hot topic in the industry. We did find that this is predictive of SDO performance, and the teams with comprehensive solutions are 1.3 times more likely to be an elite performer.

Now, if I want to start a fun nerd fight: we did find that monitoring and observability load together. What that means, from a statistical point of view, is that among the people who took the survey, right now these are seen or perceived as basically being the same thing. It's about looking into our systems and understanding what they're doing. Now, in the future this could tease out. It's still a relatively young field in terms of picking out observability. If anyone wants to nerd fight me about this, find me later. But it is super important. We know it's important.
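
To make the two definitions above a bit more tangible, here is a toy contrast (not from the report): monitoring watches a metric and threshold chosen in advance, while observability keeps rich, structured events around so you can ask questions nobody thought of beforehand. All names and numbers here are invented.

```python
import json

# "Monitoring": a metric and an alert threshold chosen in advance.
error_count = 0

def record_error():
    global error_count
    error_count += 1
    if error_count > 10:
        print("ALERT: error_count above threshold")

record_error()  # one error observed; no alert yet

# "Observability": emit rich, structured events now...
events = []

def emit(event):
    events.append(event)
    print(json.dumps(event))

emit({"route": "/checkout", "status": 500, "region": "eu-west", "duration_ms": 912})

# ...so you can slice them later by dimensions nobody predefined.
slow_eu_checkouts = [e for e in events
                     if e["region"] == "eu-west" and e["duration_ms"] > 500]
print(len(slow_eu_checkouts), "slow EU checkout requests")
```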

We also dug into test practices. Now, we've looked at automated testing for the last few years. So, let's do a bit of science at home. When I talk about load, things load together. What does that mean? Here's how we do some of the science. What we need to do is start with a good, solid definition, and then find ways to measure it. But I can't just ask people if they do automated testing, because if I ask each of you whether you do automated testing, or what automated testing is, I'll get two or three different answers for every different person.

It's not like temperature, where I can just take the temperature of a room. The answers will be very, very different. So what we do is we start with a definition, and then I ask several different questions. And then I collect all the data and I run some stats magic, and we see which things all tend to measure the same idea. Think of it almost like a super amazing Venn diagram, and all of the things that have heavy, heavy levels of overlap measure that construct. So in this case, we wanted to see which things measure effective test practices. These are the things that we originally proposed cover automated testing. I want to say three of them didn't load. When I say load, right, they didn't contribute to that overlapping Venn diagram. Here's what we found. The ones in red don't load: acceptance tests are primarily created and maintained by QA, tests are primarily created and maintained by an outsourcing party, and developers create on-demand test environments.

Now, here's what we found on subsequent analysis. So first of all, they don't load. Let's talk about this in a couple of different ways. They don't load because they're fundamentally different, right, especially developers creating on-demand test environments. So it's kind of a different thing. It's more about provisioning. Now, the other thing that we found on subsequent analysis: QA primarily creating and maintaining tests, and tests being created and maintained by an outsourcing party, were very negatively correlated with performance. So they wouldn't load, because if we hand that off to someone else, if we hand it off to a totally different group, we're going to slow down. We're going to be less stable. And so it didn't work very well from a measurement perspective.

Now, when I excluded those items and ran everything else together, we had a very strong, valid, and reliable construct. It measures only what it's supposed to measure, it doesn't measure what it's not supposed to measure, and it's very reliable. So almost everyone - over 30,000 people around the world - consistently read all of these items about the same way. And so then I can test it for our correlations and predictions. And that held … I want to say, four or five years in a row. [Inaudible], there's drama. Wait, drama in tech, that doesn't happen.
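
For anyone curious what "loading together" and a "reliable construct" can look like in code, here is a rough sketch with invented survey responses. The actual analysis behind the report uses proper psychometrics (factor loadings plus validity and reliability tests); this just computes Cronbach's alpha, one common internal-consistency measure, and shows it improving when a poorly fitting item is dropped.

```python
# Invented responses: rows are respondents, columns are survey items on a 1-7 scale.
import numpy as np

responses = np.array([
    [6, 7, 6, 2],
    [5, 5, 6, 7],
    [2, 3, 2, 5],
    [7, 6, 7, 3],
    [3, 3, 4, 6],
], dtype=float)

def cronbach_alpha(items):
    # alpha = k/(k-1) * (1 - sum of item variances / variance of the summed score)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

print("All four items:  alpha =", round(cronbach_alpha(responses), 2))        # ~0.44
print("Dropping item 4: alpha =", round(cronbach_alpha(responses[:, :3]), 2)) # ~0.97
```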

Humble: So we got into a bit of trouble. Basically, a lot of people have this misconception that continuous testing means automating everything and firing all the testers. I want to be really clear: that's absolutely not what it's about. Testing and testers are just as important, if not more important, in continuous delivery than they are without it. So we wanted to make sure we actually modeled this and got some data and research …

Continuous Testing

Forsgren: So also this is what Jez says, so if there's drama in the community and then Jez comes in and is like, "We should test this." And then Nicole with science is like, "I don't know, maybe it doesn't matter at all. Let's test it." And then there's more drama with people who are like, "What do you mean, 'it doesn't matter?'" I'm, like, "Numbers. Science." So we actually tested it.

Humble: And I will say there's been many times in the last five years where I've been really, really nervous in case something that I've been saying for years doesn't work out.

Forsgren: Why couldn't we test things out of continuous delivery?

Humble: Yes. And guess what?

Forsgren: … data. And I'm like, "Maybe it's not going to hold." And he panics for 24 hours.

Humble: Yes. Which means that sometimes I'm wrong, but Nicole is never wrong.

Forsgren: To be clear, I am wrong all the time. But I'm also totally fine with being like, "Data. It's fine. Changed my mind."

Humble: So these are the things that we validated this year, and this is what we found. These practices predict continuous delivery, which in turn predict software delivery performance. So making sure that you're curating your test suites, you're throwing out tests that don't actually ever fail, and you're making sure your tests actually do things that real users will do with your site. It's not things that your software did five years ago, but doesn't do anymore.

Making sure that testers and developers work together throughout the software delivery lifecycle, basically making sure that testing is not a downstream phase that happens after dev complete. Making sure that you're performing all those manual activities throughout the software delivery lifecycle. Doing TDD: writing unit tests before you write the code that makes the tests pass. It turns out it works, even though it's the least popular agile practice. Who here does TDD, writes tests before they write the code that makes the tests pass? That's about 25% of you, which is about average I've found everywhere in the world for the last 10 years I've been talking about this. And then being able to actually get fast feedback from your CI server in a few minutes.
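
As a tiny illustration of the TDD rhythm described above - the tests exist first and fail, then just enough code is written to make them pass - here is a sketch in Python. The change_fail_rate function and its tests are invented for the example, not taken from the talk.

```python
# change_fail_rate and its tests are invented for this illustration.
import unittest

def change_fail_rate(failed_deploys, total_deploys):
    # Written *after* the tests below, and only enough to make them pass.
    if total_deploys == 0:
        return 0.0
    return failed_deploys / total_deploys

class ChangeFailRateTest(unittest.TestCase):
    # In TDD these tests come first and fail until the function above exists.
    def test_fraction_of_failed_deploys(self):
        self.assertAlmostEqual(change_fail_rate(3, 20), 0.15)

    def test_no_deploys_means_no_failures(self):
        self.assertEqual(change_fail_rate(0, 0), 0.0)

if __name__ == "__main__":
    unittest.main()
```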

Lean Management

I want to spend a bit of time … well, we want to spend a good amount of time on culture, so I'm going to race through a couple of slides so we can cover it. So this is what the lean management box looks like. The interesting thing about this is you can't just do one of them. One of the interesting results we found is that work in process limits don't, in fact, correlate with software delivery performance. What you have to do …

Forsgren: It does. It's just a low correlation.

Humble: But very low correlation?

Forsgren: Yes.

Humble: But what actually does predict software delivery performance is doing these three things together: having the WIP limits and the visual displays to monitor quality, productivity, and work in process, and using information from production to inform your business decisions. You have to do those three things together.

Similar deal with lean product management. We find that this thing that was talked about earlier, actually, taking and implementing customer feedback, working in small batches. If you get some work, and it's going to take you more than a few days to do it, send it back and find some way to get some real value delivered to users that only takes a few days. Taking big bits of work, and splitting them into smaller bits that actually create value is key to everything whether it's product development, or organizational transformation, or process improvement work.

All those things benefit from working in small batches, making sure that teams can actually see the flow of work, and know what the state of the flow of work through the organization is, through visual displays of some kind, or electronic displays. And then making sure that teams actually have the authority to change specifications when things won't work out. If you're taking orders from upstream, even if they're on little cards with "as a … so that …" and all that stuff on them, if you can't then change them, if you can't ask questions and say, "Well, this isn't going to work. We should do this instead," it's still not agile, because you're just taking orders from upstream.

Forsgren: You're just doing real fast waterfall. You're doing waterfall in two-week increments.

Culture Impacts Performance

Humble: So we have two minutes to talk about culture. Firstly, as I say, we found that culture impacts both software delivery performance and organizational performance. Our model of culture comes from this guy called Ron Westrum. He studied safety outcomes in healthcare and aviation, domains where, when things go wrong, people die. He looks at six axes. The extent to which people cooperate between departments and across the organization. How do we treat people who bring us bad news? Do we shoot those people? Do we ignore them? Or do we train them so that we can react quickly to things going wrong, rather than waiting until they become catastrophic or cascading failures? How do we deal with responsibilities? Do we ignore them because we know we'll get in trouble if things go wrong? Are they defined narrowly so you know who to fire if things go wrong? Or do we share risks because we know we'll succeed or fail as a team? How do we deal with bridging between different departments?

And then two things that are very intimately connected: how do you deal with failure? Does failure lead to firing people? To justice? Or to actually asking how we improve the system? And if something goes wrong, ask yourself, "If that had been me, with the same information, could I have made the same mistake?" And often, if we're honest, the answer is, "Yes, that could have been me." And so if you fire that person, hire someone new, and the same situation happens, the same mistakes are going to be made.

So we've got to be asking ourselves, "How can we get better information? How can we build better tools so that we can avoid failures, and so that when we make mistakes, we don't end up with catastrophic, cascading failures?" And then novelty. In an organization where people are afraid to take risks or try new things, you will not get any novelty. So our research shows that this is important, and you can change it by implementing the capabilities we've talked about.

Effective Teams

Forsgren: And lots of other people's research has found that culture is very important as well. So Google dug into this, and when they wanted to look at culture, or team performance, they originally thought that the most perfect team would be the perfect mix of skills, like, technical skills. What they found is that doesn't matter at all. There was no statistical significance. What did matter was team dynamics. And far and away, number one was psychological safety.

Can team members feel safe to take risks and be vulnerable in front of each other? Also, dependability. Can you depend on your other team members? By the way, this sounds a lot like the Westrum model we just talked about. The rest are structure and clarity of work, meaning of work, and impact of work. Does what you do matter? Is it important to the organization? Which also sounds a lot like the rest of DevOps. Are we delivering value to the organization? Is it meaningful? Can I see the flow of work? Do I know what I'm contributing to is important?

Climate for Learning

We're going to talk fast. How do I influence culture? As I mentioned before, smart investments in tech and process change the way we do our work, which then changes the culture. John Shook talked about this for years, but I was like, "Well, what else can I do?" Okay, we can influence our climate for learning. We can do retrospectives - like, real retrospectives - which are also called learning reviews; some people call them blameless postmortems.

When we do this, it influences our organizational culture, and also contributes to our climate for learning, which we found in our work within our assessment or [inaudible], is super important for organizations because technology moves so fast. A climate for learning is one in which the organization sees learning as an investment, not an expense. We all embrace learning. We all embrace change because it brings us new opportunities.

Autonomy

Next up, autonomy. What can leaders do? What can we do when we lead our teams? We can give our teams autonomy. If leaders set clear goals and clear directions, and then let their teams decide how to achieve those goals, that does a few things; autonomy directly contributes to performance. It also contributes to voice and trust. Voice is the team feeling safe speaking out about things that aren't working, which can lead to improving the tech. It can lead to improving the process. It can lead to several things. It also leads to greater trust in management, greater trust in your leaders, and then both of those, in turn, contribute to a stronger organizational culture.

Highly Aligned, Loosely Coupled

Humble: And I just want to kind of tie a few threads together by talking about this quite old slide, from the Netflix culture deck. I think it was 2010 when Reed Hastings talked about Netflix culture; he talks about a highly aligned, loosely coupled culture. And I think what you can see here is that the research that we've done around architecture and around autonomy comes together in this slide: being highly aligned, where we define strategic goals clearly but leave it up to the teams to work out how to implement them, which is one of the things we asked about.

And creating loosely coupled organizations where teams are very cross-functional, which, again, we looked at in our research this year: cross-functional teams where everyone who is required in order to experiment, design, code, and deploy works together in the same team. That's really important in creating high performing organizations as well, so we kind of validated some of this stuff as well.

Innovation Culture

I'm going to quickly end with a story from the early days of Amazon. So this story is from 2006 and it talks about something that happened before that, which was a team that Greg Linden was working on, designing the recommendations engine. So he had this idea for making recommendations on checkout. If you've been to a grocery store, maybe you've got kids like me and you're checking out, and there's candy at the checkout aisle, and your kids are like, "I want candy." And you're, like, "It was Halloween last week …"

Forsgren: For yourself?

Humble: "It was Halloween last week. You just stuffed your face with about a pound of candy. You're not having candy." And your kids are, like, "I really want candy." Maybe you're a bad parent like me, you're like, "Fine. Go and have candy. Whatever." Anyway, so he said, "Well, maybe we can give people customized recommendations on checkout based on what other people who bought the same things as you've bought, have also bought." So he makes a prototype, goes up to the VP of product with a prototype and says, "What do you think?" And the VP of product says, "No. You should not build this. It will distract people away from checking out and we will lose sales." So Greg is a bit sad, goes back to his desk, brushes up his prototype, pushes it into production, gathers a bunch of data from production, which shows that actually this will substantially improve conversion. Goes back to the VP who does not fire him, maybe not very happy, and says, "Well, actually, this is clearly really important. Let's build this out." And there's a recommendation engine, which gives you recommendations on checkout today in Amazon.

So first of all, who can actually do that: deploy code to production against the express instructions of executives? So that's actually a decent number of people. Look around you. There are actually people in this room, multiple people, who could do that, and then multiple people who were laughing. So Greg ends his blog post with this quote, "I think building this culture is key to innovation. Creativity must flow from everywhere. Whether you're a summer intern or the CTO, any good idea must be able to seek an objective test, preferably a test that exposes the idea to real customers. Everyone must be able to experiment, learn, and iterate." If you want these slides and a bunch of free stuff, please send an email with the subject DevOps to jezhumble@sendyourslides.com. That's subject DevOps, to jezhumble@sendyourslides.com. Thank you very much for your time. Have a fabulous conference.


Recorded at:

Nov 30, 2018
