
Facilitating the Spread of Knowledge and Innovation in Professional Software Development


Qualitative Analysis for Digital Transformation


Summary

John Willis discusses how Computer Assisted Qualitative Data Analysis (CAQDA) and a QDA approach can be used to analyze group, individual, and leadership interviews to better understand Digital Transformation outcomes.

Bio

John Willis is the Founder of Botchagalupe Technologies. Before this, he was VP of DevOps and Digital Practices at SJ Technologies and, before that, the Director of Ecosystem Development for Docker, which he joined after the company he co-founded (SocketPlane) was acquired by Docker in 2015. He is the author of 7 IBM Redbooks and is co-author of the “DevOps Handbook” along with Gene Kim and Jez Humble.

About the conference

InfoQ Live is a virtual event designed for you, the modern software practitioner. Take part in facilitated sessions with world-class practitioners. Connect, see, and speak with like-minded people. Join us to accelerate your learning, be better informed, and drive innovation.

Transcript

Willis: My name is John Willis, also known as Botchagalupe. You can find me at jwillis@redhat.com these days. This presentation is called Qualitative Data Analysis for Digital Transformation. I joined Red Hat last October, so I'll have been here effectively a year in a couple of weeks. The gentleman on the far left there is Andrew Clay Shafer. He and I were probably two of the first five pioneers originally involved in the whole DevOps conversation. Andrew helped me start the first DevOpsDays in the U.S. To the right of him is Kevin Behr. He is the co-author of "The Phoenix Project." I've known Kevin for many years. That's me, the short guy. That's Jabe Bloom. He's been working with Kevin for years. Jabe is our resident PhD. He has a degree in Transition Design from Carnegie Mellon. A really well-rounded team. We've just finished, I think, about the first 10 years of DevOps. We've done a great job as a community, the four of us, along with a lot of other people who were involved in the conversation over the last 10 years. In 2020, how do we think about the next 10 years? Should we be thinking about doing a better job than we did in the first 10 years? A lot of what we think about is that.

Andrew likes to say we wrote some books. Kevin, as I said, is "The Phoenix Project" co-author. Over the years I've written somewhere in the neighborhood of about 12 books. A lot of them nobody's ever heard of, IBM Redbooks. "The DevOps Handbook" is probably the one I'm most known for, in collaboration with Gene Kim, Patrick Debois, and Jez Humble. I also did a labor of love project with Gene called "Beyond the Phoenix Project." Andrew has written chapters in both the "Web Operations" and Site Reliability books. I had influence on "The Unicorn Project." I wasn't an author, but I worked with Gene. I was an advisor for many years around that.

Then myself: a couple of things of note, I would say, are the handbook and "Beyond the Phoenix Project." I've also worked on some really interesting reference papers over the years. One that I've been really passionate about is DevOps Automated Governance. If you look me up, you can find a fair amount of presentations I've been doing about DevSecOps, Automated Governance, and Automated Cloud Governance. Plus, I've been doing this stuff for a really long time, 40-plus years; I'm only listing the last 10 years here. I also worked at Canonical, actually pre-OpenStack, on the first private cloud at Canonical. Then I sold a company to Docker. I've done about 10 startups over the 40 years as well. I was early at Chef, and sold a company to Dell. Then of course now I'm at Red Hat.

Three Transformation Killers

One of the things in my travels: what are the things that I've learned about transformation? How do we get people to change at scale? I would say it's relatively "easy" to help somebody do some DevOps. There's enough literature, enough books, that if you're a reasonably intelligent person, you can go in and change a couple of value streams, and automate them. The question is, how do you get a company to scale? How do you get a company that's 100,000 people to change? I've found that the brick wall between getting maybe 15% or 20% penetration of these DevOps-y things and getting to 60% or 70% has to do with probably these three things, what I call premature frameworks, an impersonal approach, and mental models.

Frameworks

Something I've been saying over the last couple of years is that you can't Lean, Agile, SAFe, or DevOps your way around a bad organizational culture, or bad organizational behavior. I mean things like value stream mapping, a lot of tools that are very effective. I'm just calling them frameworks, but Agile, all of these things. They are incredibly good tools at the right time. One of the problems I find is if you go in prematurely with these tools, and you don't really find out: what's really going on? What's the gap?

Impersonal

That leads me to the second thing, which is that a lot of times the "experts," the DevOps experts or the digital transformation experts, will come in and say, "I'm smarter than you. Do these five things and everything will be great." I got this quote from Christina Maslach. I've gotten to know her; she's one of the leading burnout researchers. I've done some presentations and research on burnout. I did an interview with her one time, and I took this quote she gave when we talked about burnout. I think it applies here to the wrong approaches that we take, this impersonal, "I know better than you. Do these five things, and let's see what happens." She said, "Whenever we're talking about any kind of change, or improvement, you are counting on a bunch of human beings to change and make this happen. If they haven't been part of figuring out how to do it, the change efforts will be dead on arrival." You have to include the people that you're trying to change in the conversation. This notion that some expert team will come in and basically say, "Do this," and just tell everybody we're doing these things, typically doesn't scale.

Mental Models

Then the third important point, going back to the three things I think explain why organizations can't scale these transformations. We'll go back to Peter Senge, "The Fifth Discipline" author, who some people might call the father of systems thinking, on mental models. He said, "Mental models are deeply held internal images of how the world works, images that limit us to familiar ways of thinking and acting. Very often, we are not consciously aware of these mental models or the effects they have on our behavior." I'll use the example from criminology. You take three witnesses that saw a crime, and one will say the person had red hair, and another person will say they had blonde hair. I've even done this, where I've taken three people on the same service team, put them in three different rooms, and had them map out the service that they managed. You wouldn't be surprised that the maps were significantly different. We have this thing about mental models: how I think about the representation, the beliefs, the premises, even how we operate. What do we think the goal of the organization is? One of the reasons why I've come to this idea of qualitative analysis is that I think it is a reasonable scientific approach to unbundling these three things that usually, I think, get in the way of transformation.

Quantitative

When we talk about qualitative, the first question that comes up is quantitative versus qualitative. There's been a lot of work in our industry on DevOps surveys by a number of organizations and companies, most notably DORA and Accelerate, which has been fabulous work. Just to put it in perspective, a quantitative approach, say a survey or a psychometric survey, starts with a generalized theory and uses correlation to draw specific conclusions. It's deductive. It draws conclusions from general principles. It's mostly numerical: a lot of stats, math, statistical analysis. It is impersonal. The survey questions are your best chance to try to pull the information out. People who are experienced in psychometric surveys are really good at getting quality statistics, using statistical analysis based on the questions. At the end of the day, though, they are impersonal. You are not able to have that human to human conversation regarding the question. They're closed-ended. They're typically, though not always, a score, like strongly disagree all the way to strongly agree.

Industry Doctrine (Quantitative)

The thing is, when you're doing qualitative or quantitative analysis, you typically start with industry doctrine. In a lot of the DevOps surveys, the quantitative approach has been built around these four variables: lead time, deployment frequency, change failure rate, and time to restore. The theory, and I think some of these reports have been around for six or seven years now, is that depending on how you address some of your behaviors and some of your technology choices, high performing organizations will be 100 or 200 times better at lead time, deployment frequency, and time to restore, and lower in change failure rate. That's the theory. You collect a ton of data, and you try to prove it out. If you look at most of the reports, they do identify high, medium, and low performers.

An example: how often do you deploy code? Fewer than once every six months, between once a month and once every six months, between once a week and once a month, you get the point, or on-demand. Again, you get a lot of information. Here's the question: what if it depends? What if I actually have multiple value streams, where one is on-demand and the other one is once a week? What if it's, again, that beautiful or terrible answer of "it depends"? It's closed-ended; you've got to pick one.
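To make that closed-ended constraint concrete, here is a minimal sketch of my own (not from the talk; the answer options and band labels are simplified assumptions, not the actual DORA scoring) of how a survey forces one answer into one bucket:

# Illustrative only: map one closed-ended survey answer to a performance band.
# The options and bands below are simplified assumptions, not DORA's real model.
DEPLOY_FREQUENCY_BANDS = {
    "on_demand": "high",
    "between_once_a_week_and_once_a_month": "medium",
    "between_once_a_month_and_once_every_six_months": "medium",
    "fewer_than_once_every_six_months": "low",
}

def classify_deploy_frequency(answer: str) -> str:
    # One respondent, one choice, one band; there is no room for "it depends."
    return DEPLOY_FREQUENCY_BANDS[answer]

# An organization with one on-demand value stream and one weekly value stream
# still has to pick exactly one option:
print(classify_deploy_frequency("on_demand"))  # -> "high"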

The pros of the quantitative approach: it's easier to administer, and you get a lot more data than with a qualitative approach. It is definitely objective at its core, using the scientific method. The cons, though, are that it's impersonal. You can't ask somebody, "What do you think that means?" Somebody can't ask what a word in the question, say risk, means. I've done a lot of research and work with organizations where some people call risk "governance" or "compliance," and it's a mixed bag. It's certainly closed-ended. It is definitely theoretical, whereas a qualitative approach tends to be empirical. And it's context sensitive: as the person designing the survey, I don't know how you're going to interpret the context of the question.

Qualitative

Now the qualitative approach. I've tried everything, and this is where I've landed in 2020, right or wrong; this approach has been working really well for me. The qualitative approach moves away from the theory driving the data to an approach where the data drives the theory. All of us have industry doctrine. If you go in the door of a large financial bank or something, and you've been doing this long enough, you basically know what problems you're going to find, and you generally have patterns and practices of solutions. What I try to do in a qualitative approach is not make any prejudgment. It's hard, but you try to build it up from the data. It's an inductive, abductive process. It draws general principles from specific instances. You're trying to create categories of information. You're listening to multiple people tell you multiple things. Typically, as opposed to a survey, the preferred instrument in a qualitative approach is the interview: group interviews, individual interviews. It's interpersonal. If I ask you a question, and I see a weird look on your face, I can ask you, "Do you know what I mean?" Or the person answering the question can say, "What do you mean by this?" You can have a human to human interaction to drill down on the interpretation, which, again, creates open-ended conversations. In fact, most times when I'm starting these conversations, I just simply get a group together, and I'll say, what is your company not doing that it should be doing? That just explodes into incredible discussions about governance, and automation, and toil.

Industry Doctrine (Qualitative)

For example, I've been doing these presentations called the 7 Deadly Sins of DevOps. These are just four variants of them. From my approach, the industry doctrine is that most organizations have low levels of visibility. They can't see a lot of the work in the pipelines. There's not a well-documented dependency map, so there's a lot of coupling. My service requires these three services; I don't really know. I can't control the flow of those services, so I'm constantly dealing with this variance. Recently, I had an organization say one of their problems is, we've got these dependencies, other value stream or service dependencies, and we don't know if they're going to return the work in two hours or two weeks. Imagine trying to create any type of flow when you have that variance, and certainly inconsistency. Inconsistent environments, so no application of automation. I try not to be judgmental, but it amazes me in 2020 to see a top 25 bank that still hasn't bought in on infrastructure as code for building infrastructure. And capacity. Remember, I said four: in a quantitative approach, you're going to take those four variables. In a qualitative approach, I'm basically going to say the industry doctrine is that these things exist; let's find out if they're there.

As an example, very simply: what is the audit process like in your organization? A couple of things. One, I do get to interact, so if you want to question me, what audit? Is it external auditors, internal auditors? Again, I get to have an open dialogue about that. More importantly, I get to aggregate the answers. I start hearing: person one says they're terrible. Person two says they wasted about 30 days. Person three says, we don't tell auditors things they don't already know. Not only does it ground up into human to human conversations, but I'm also getting multiple answers to the same question in context, often in one conversation. You also have these beautiful side effects where somebody answers a question, and somebody else says, "Bob, that's not exactly true." Then Bob is like, "You're right." You can't get that dialogue in a survey.

The pros of a qualitative approach are that it is empirical. It's evidence, verifiable evidence. I can ask you and pinpoint or drill in. It's open-ended. As you saw in that last example, it's generally combinatorial: I can have multiple people correcting or adding value to the answer to the question. It's certainly hard to administer. In a quantitative approach, literally, if I get it right, I can survey thousands of people. The largest qualitative engagement that I've done is 350 people; I spent a month at a large bank. It's less data, and it is subjective. Then again, even a quantitative approach is subjective; in other words, you're taking your theory and bending the data into it. In either case, you have this approach of industry doctrine.

Tools for Qualitative Data Analysis

The other thing that I've come to find is there are some fabulous tools for qualitative data analysis. There's a whole category called computer assisted qualitative data analysis. Typically, without tools, I bring 10 color-coded highlighters with me, and I just start taking paper notes and highlighting. This tool is called MAXQDA. There are a couple of them; I happen to really like this one. You literally take the artifacts that you gather, sometimes recordings depending on whether the company allows it, but always transcripts, and then notes. I like to have at least myself catching highlight notes, plus a scribe, so I'm actually using a three-part approach to the artifacts: a scribe who is taking notes in detail, my highlight notes of really important things, and the transcripts. Then you can add all of that into the tool.

Qualitative Data Analysis Process - Grounded Theory

One of the approaches I've taken, just to get a little academic, is what's called grounded theory. This is an approach where, for example, I have this data: I have interviews, I have transcripts, I have notes. Then I go through this computer assisted qualitative data analysis tool, where I actually code things. Then I start applying a methodology: codes to concepts, creating a model and categories. Then ultimately, the data drives the theory, which I try to validate. There's somewhat of a scientific method here.

The approach that I use as part of grounded theory is to work my way through coding. There are different schools of coding, but generally, you're identifying things inside the artifacts, the notes, or the transcripts. Then you're working that up, grounding similar codes up into concepts. These tools also have amazing ways to add context: field notes, general field notes. Then you roll those up into categories, and that is basically what drives your theory. In the end, you have some theory; that's the code, concept, category, theory approach.

Here's an example. A code might be: I highlighted the answer to the question on audit, "Audits take 30 days a year and they consume a lot of time." The concept roll-up is that I've heard this a bunch of times through combinatorial answers: audits are inefficient. The category is risk. The theory might be: you should start thinking about applying automated governance.
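As a rough illustration of that code, concept, category, theory rollup, here is a minimal sketch of my own, with made-up segments and mappings; it is not MAXQDA's actual data model:

from collections import Counter
from dataclasses import dataclass

# Illustrative only: hand-rolled code -> concept -> category rollup.
# The segments and mappings are invented for the example.
@dataclass
class CodedSegment:
    text: str   # highlighted passage from a transcript or note
    code: str   # low-level label the analyst applied

CODE_TO_CONCEPT = {
    "audit_takes_30_days": "audits_are_inefficient",
    "audit_findings_already_known": "audits_are_inefficient",
}
CONCEPT_TO_CATEGORY = {"audits_are_inefficient": "risk"}

segments = [
    CodedSegment("Audits take 30 days a year and consume a lot of time", "audit_takes_30_days"),
    CodedSegment("We don't tell auditors things they don't already know", "audit_findings_already_known"),
]

# Roll segments up to categories and count how often each category appears,
# so the data, not the analyst's hunch, surfaces "risk" as a theme.
category_counts = Counter(CONCEPT_TO_CATEGORY[CODE_TO_CONCEPT[s.code]] for s in segments)
print(category_counts)  # Counter({'risk': 2})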

Industry Doctrine (7 Deadly Sins)

I had my version of an industry doctrine, which I call the 7 Deadly Sins of DevOps. You can see visible work, management system toil, misaligned incentives, knowledge alignment (the Brents, if you've read "The Phoenix Project"), organizational design, understanding complex systems. It all funnels down to security and compliance theater.

Logistics (Assets)

Typically, in the engagements that I do, I wind up interviewing a number of teams, depending. In COVID times, they're all virtual. Pre-COVID, I used to go spend some time in an organization and have all-day meetings. That's where you get real value out of this: a development team on Monday, a development team on Tuesday, an infrastructure team on Wednesday. It's really good when you can do it that way. By the time you get to the infrastructure team, you can ask questions like, are there any issues with software engineers and developers getting resources? The infrastructure team says, "No, why do you ask?" I'm like, "I've heard from the developers that you've cut all their memory and storage requests in half." "Of course we do. They're idiots." In COVID times, I have to take what I can get virtually. Maybe 10 meetings. The largest one I did was 310 minutes, so the artifacts are lots of transcripts, a lot of documents.

Analyze Process

Then the analysis that I go through: I'm constantly updating this in MAXQDA. I go back and listen; if there's audio, I go back and make sure I'm coding and creating categories. You see here, this is an example of one that I did where I have the interview notes, I have recordings in this case, and then I have participant notes. Those are the artifacts here in the document system. Then you see on the left, there's a bunch of codes. Then I've got some categories that I'm building, like automate delivery, outcomes. Then the good tools allow you to do word mapping. Here's an example of highlighting and coding things, and then tagging them with some type of name, so I can say it's an impact, or it's cost, or it's toil.

Qualitative Data Analysis: Single-Case Model (Code Hierarchy)

Then these tools also have really good tooling for this. You've got 20 documents; you have maybe a novel's worth, like 100,000 words of information. The really good tools allow you to tease the data out in multiple ways. The word map is a good way to start: let me find out what the lexicon in here is. See, they use this word quite a bit. Then I can also start seeing what the data looks like graphically. Here's an example, just playing around with risk, consistency, knowledge, alignment. How do all the codes line up? The whole thing is you're trying to force yourself not to make judgments or opinions, and then using these really incredible tools to see if the data is telling you something, as opposed to you trying to make the data fit a certain way.
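A word map like that is essentially a term-frequency view of the corpus. Here is a minimal sketch of the underlying idea, a few lines of my own with invented transcript text; it is not how MAXQDA implements it:

import re
from collections import Counter

# Illustrative only: surface the most common words across interview transcripts
# to get a first feel for the organization's lexicon. Text is invented.
transcripts = [
    "The audit takes thirty days and the toil around evidence gathering is huge.",
    "Risk sign-off is manual toil; consistency between environments is the real risk.",
]

STOP_WORDS = {"the", "and", "is", "a", "an", "of", "to", "around", "between"}

word_counts = Counter(
    word
    for doc in transcripts
    for word in re.findall(r"[a-z']+", doc.lower())
    if word not in STOP_WORDS
)
print(word_counts.most_common(5))  # e.g. [('toil', 2), ('risk', 2), ('audit', 1), ...]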

Qualitative Analysis: Two-Cases Model

Then this is interesting, because a lot of the time even the best transcripts can't capture all the words. This is one way I'll go in and do a quick check, comparing the transcript to the notes to see where the gaps are.

Analysis Phase 2 - Creative Coding

A lot of times the stuff that people really get excited about is, how are we dealing with governance, risk, and compliance? So I start creating some graphs to drill in on: what's InfoSec? What's business risk?

Theory - Top Three Areas of Concern

Then what you're able to do is aggregate that into a report. The true qualitative data analysis scientists will actually get furious at me at this point, because you do get a little bit quantitative here: quite frankly, once I've gone through this ground-up process to the categories, what I wind up doing is looking at how often I find instances in each category. Then I'm able to see that, say, consistency just showed up way more than capacity. There's a subjective nature to it, but what you would hope is that the analyst has a pretty broad spectrum of knowledge of the industry, and so is able to produce a pretty solid subjective analysis, if you will. Here's a case where consistency, funding, and toil just seemed to be top of mind in all the conversations and interviews.
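The counting step itself is simple. A minimal sketch with invented counts follows; in practice these numbers come out of the CAQDAS tool, not a hard-coded dictionary:

# Illustrative only: rank categories by how many coded instances landed in each.
# The counts are made up; in a real engagement they come from the analysis tool.
category_counts = {
    "consistency": 42,
    "funding": 31,
    "toil": 28,
    "visibility": 17,
    "clarity": 14,
    "capacity": 6,
}

total = sum(category_counts.values())
top_three = sorted(category_counts.items(), key=lambda kv: kv[1], reverse=True)[:3]
for name, count in top_three:
    print(f"{name}: {count} instances ({count / total:.0%} of all coded segments)")
# consistency, funding, and toil surface as the top areas of concern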

Thematic Observations

Then this leads you to your theory, your thematic observations. In a lot of cases, there are anti-patterns of DevOps that turn out to be relatively true everywhere: low trust. You have your centralized authorities, your CABs, all that stuff. You're hearing it from the people doing the work: I don't understand why we have to do a manual handoff to go from stage to prod. I can't believe other companies like us, other banks like us, can do this; why can't we? Really figuring out where that comes from, and how you arbitrate risk and policy. Lead time: again, for companies that haven't invested in full automation, why does it take three weeks in these modern organizations to get storage or a VM? I heard one recently where it still takes two weeks to be able to use an Amazon instance. Or too many active projects: we've got all these active projects, and then somebody screams, we need something here, and you pull a bunch of people off, and so you've got all these open projects. Clarity: people just sit there and tell me they wish they'd get more clarity from leadership on exactly what they want them to do, instead of changing the game on them every couple of months. They just feel like they're pawns. That's the beauty of having these conversations with people; they get to tell you, "We can't make sense of this stuff. We start the year off with this grand plan, and four months in, we're sort of, 'Ok, now we're going to do this,'" with no explanation. Then funding: there's this notion of Agile budgeting, which very few companies have really figured out. There's this idea of your yearly budget: you get this much money, and it has a life cycle. The problem is you can be DevOps and Agile all you want, but if you're still waterfall budgeting, there's a definite mismatch.

Economic Impact

I also try to drive into economic impact. I think this gets fun. One of the things you get to do in a qualitative approach is ask impossible questions. Say you just spent the whole day with the team. A lot of times when I'm doing these group meetings they're usually an hour and a half, two hours; they're much better when I'm on-site and I've got a whole day. I find there are people who just have really good input, so I'll ask if I can do a one-on-one with certain people, because they seemed to be very knowledgeable, very transparent. When I get those people in an interview, I'll ask questions like, "Given everything we've talked about, what would you say is the aggregate amount of waste that goes on here, either through service management, or ITIL, or things that don't make sense, or really don't help, in their mind?" You'll get answers like 30% or 40%. They're very subjective, but you're asking leaders in the organization to come up with a number. I always feel gun-shy about asking that question: are they going to cringe? I cringe when I ask it. Then, more often than not, they answer it right away. That's an indication that it's pretty clear to them how much waste there is; they live it every day. You look at a $500 million budget, and this is a medium to medium-small bank. When we're talking about large enterprises and the Global 1000, this is a relatively smaller budget. That's $150 million wasted on general process. Then, I think you can easily aggregate in lost opportunity costs for low or no automation, if you're manually building infrastructure and not using Ansible or Terraform, or whatever.

Then there's this whole discussion that I'd love to learn more about: what is the cost of what I would call negative risk ROI? That means you're doing things that you believe are making you safer from a risk or control standpoint, but you're really not making anything safer. You are then stealing time and opportunity cost away from making yourself safer. That has a scary cost, especially if you're in a regulated business where you can be fined or suspended from areas of business.

Economic Benchmark

I play around sometimes, especially with financial institutions; I've done this a fair number of times with some banks. I round the numbers up and down. It's generally anonymized, but it's basically categorical around those aspects. No bank on this slide is a real bank; they're within range of banks that I've worked with. I looked at what things you would want to compare from an economic benchmark standpoint. What's your revenue? How many employees? Asset holdings could be interesting, but also number of apps. Then the question was, if you've gone through some transformation improvement, what was your waste before and what's your waste after? Then it's math.
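That math is straightforward. Here is a minimal sketch using the same kind of ballpark figures quoted above (a roughly $500 million budget and a 30% waste estimate from interviews); the post-transformation figure is purely an assumed target for illustration:

# Illustrative only: estimate economic impact from interview-sourced waste estimates.
# Numbers are ballpark figures of the kind quoted above, not real bank data.
it_budget = 500_000_000   # annual budget, medium to medium-small bank
waste_before = 0.30       # aggregate waste estimate from interviewees
waste_after = 0.15        # assumed post-transformation target

wasted_now = it_budget * waste_before                         # roughly $150M lost to process waste
potential_recovery = it_budget * (waste_before - waste_after)

print(f"Current waste:      ${wasted_now:,.0f}")
print(f"Potential recovery: ${potential_recovery:,.0f} per year")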

Modern Operations

Then what you want to do is apply generalized and specific theory. We have some industry doctrine, and we've gone through the qualitative approach. Now here are the things that are out there, and these are the ones that match what I heard. One of the things I like about this approach is that when you report back to the CIO, I'm not telling you, company X, that you need to do these things. I'm telling you that, from what I heard, the people who work for you told me these are the things you need to do. The usual suspects: Site Reliability Engineering. I believe DevOps Dojos are the right way to do things. Increase your automation, that's just DevOps 101. I do believe, in 2020, the evidence is clear that platforms are the way to go, containerized cluster management. Not just because I work for a company that has a platform, OpenShift, which is [inaudible 00:32:05] Kubernetes. You'd be surprised how many banks understand that. I've had a leader in a large bank tell me that in banks, chaos engineering is absolutely critical, so you need to understand how it makes you stronger. Then be able to understand your skill set. We talk about I-shaped, T-shaped, E-shaped, if you've heard those conversations. What's the liquidity if you're moving people around, if you're picking up new technology, or you acquire another company, or you get acquired?

DevSecOps

Then DevSecOps: what is your pipeline? What is your DevSecOps reference architecture for security? Do you really have security embedded in all stages of your delivery, from an IDE plugin that identifies malware libraries or vulnerabilities, all the way through your build, your scanning, your DAST, your SAST, your software composition analysis? I would point you to, just Google me and Automated Governance, DevOps Automated Governance. I've done a ton of work here. We've got some reference architectures in the slide deck. Then more recently, I've been working on Automated Cloud Governance, which is a really interesting offshoot of that.

Design Leadership

Then, design leadership. I think people hate the term soft skills, but it's: how do you think in your organization about leadership, the cultural, the behavioral? Some of the things that we've been doing in GTO, our team here with Andrew, Kevin, myself, and Jabe: we have this idea of a five elements assessment. Typically, people talk about Dev; prior to the DevOps conversation there was Dev, a monarchical dev focus. Then DevOps created a Dev and Ops conversation. The question here in 2020 is, where are your architects? Are they in the conversation? Where's leadership? Where's product? We look at all five elements, Dev, Ops, product, architecture, and leadership, like the five elements in Chinese medicine. If you're out of balance in that holistic system, then you're generally out of balance.

Value stream mapping is one of those framework conversations, an incredible tool that you must use, but I don't like to use it prematurely. I like to understand the gaps through a qualitative approach first, where your behaviors are, what the things are that you hear the most. Then look at value stream mapping or value chain mapping, which is Wardley mapping. Three economies: we've got some good work on looking at platform engineering broken up into three economies. Again, more on that; you can Google it. Anything you have questions on, look up Botchagalupe and I can point you to really good literature. I'm a huge fan of the "Team Topologies" book, which came out from IT Revolution. In fact, I've been using it for a few years; I got an early copy of it a few years ago. Some of the stuff they talk about: team boundaries, cognitive load on teams, norming and storming. It's just a great book that has really meaty research. They're telling you stuff and they point to the research. There's the idea of team APIs, how teams communicate, not just technically, but socio-technically. Then safe to fail, psychological safety, all those things.

Areas of Concern (Categories)

I talked about how you roll up data from codes and get to categories, and you find that there are certain categories. These are instances of categories. Remember earlier: if I find that there are more things in consistency than anything else, then that might be the first place to look for solutions. Funding, different financing. Then toil; obviously, toil is usually everywhere, like bottlenecks and whatnot. Visibility, and then communication. Clarity. How does risk get translated to the modernization efforts? The frozen middle. Risk capacity.

Transformation Opportunities

Start with a notional industry doctrine. In general, if you're going to do the interviews, you have in the back of your head the things you want to tease out. I typically think about the 7 Deadly Sins of DevOps, these patterns that seem to be universal, as my frame of reference for how I'm going to have qualitative interview conversations with groups and teams and individuals. Then you walk through the data to ground it up; it tells you what the categories are. Then I have industry doctrine in terms of how you solve problems for organizations that are trying to do transformation. I find that one of the areas where companies struggle is just taxonomy. Go back to that open-ended, human to human conversation: I can get a sense of whether different groups or individuals within an organization are using different words to describe the same thing. There's the criminology example, red hair versus blonde hair. I can home in on that: maybe one of the first things, or the easy things, to get a hold of is, let's create some common taxonomy. We know the words we're going to use.

In fact, years ago, I remember Eric Ries' Lean Startup had a conference, and he was doing a fireside chat with Beth Comstock, who was the CMO of GE. GE had bought in to lean startup in a big way. I remember seeing that, and about a month later, I was with one of the startups that I worked with; we were helping that organization implement a private cloud infrastructure. I remember that all the people at the edge, the people doing the work, were using the same terminology. They were using build, measure, learn, MVP, pivot. I thought, what a beautiful thing for an organization. I forget how many employees they had, 80,000, 90,000, 100,000 people, but imagine you get everybody using the same words and having the same context for those words.

I talked about team topologies and models, roles and responsibilities, really understanding how you want to transform to a platform. It gets a little technical there. I think you have to be living under a rock to not realize that containers and Kubernetes, or cluster management, is the right way to go; the cloud titans figured that out years ago. Outcome-based metrics: I'm a big fan of flow metrics, if you haven't seen those. Of course, automation. Then really putting one-seventh of your emphasis on skills liquidity. Not just skills updates, not just onboarding. I've seen really good examples of gamification: maybe a guild, where how often you've presented to the guild, what you do, and the Net Promoter Score of those things actually go into your bonus. You want to achieve skills liquidity, but you have to put things in motion that are out of the box and a little different than normal. Then, the whole safe to fail model is really important to understand.

Transformation Resources

There are a ton of really good reference papers, forum papers that I've been working on with Gene Kim and IT Revolution for many years, for each one of these areas, the seven. Here are the five elements; I talked about the Chinese medicine idea of balancing product and development, architecture, operations, and leadership. There are some good papers from IT Revolution. The flow metrics: Mik Kersten, his "Project to Product." I would say it is even product to service. Then, platform as interface. What you're trying to do is collapse operations and infrastructure, so right to left: SRE and then three economies. Then you're moving from product focus to model focus. The platform is that thing in the middle, a clutch. We call the three economies a scope economy in the middle of a differentiation economy and a scale economy. Here's what I was talking about, the three economies model. Typically, we think about the world as a two-economy model. Let developers just do developer things; move all that other stuff into the scope economy. Let operations move the infrastructure scale things into the scope economy.

We think a lot about a platform as an interface, as opposed to a platform as a service. Change management. Then here are some great papers, forum papers; there are 60 or 70 forum papers out there. Dominica DeGrandis, "Making Work Visible," the five thieves of time. Mik's "Project to Product." Metrics: I talked about flow metrics. I think you have to have the common four that we talked about: lead time, deployment frequency, MTTR or mean time to restore, and change success rate. Those, I would say, are latent; in other words, you don't really get to decouple them. Flow metrics actually include wait time, and you get to decouple things like change failure rate by team or by type of work. Then here are some really good papers and books on automation. I talked about the trusted software supply chain. This is an example: here's the DevOps Automated Governance reference architecture guide, some of the papers, and the Automated Cloud Governance paper. Skills liquidity. Dojos, continuous learning. Safe to fail, chaos engineering. There's this notion of continuous verification, a variant of chaos engineering. There's the "Chaos Engineering" book.

 


 

Recorded at:

Feb 11, 2021
