
Thinking Methods - Systems Thinking at Work and Play



Wil Wade takes an introductory look at identifying and understanding systems in companies, projects, and software. The goal is to expand one's view of the entire structure and its relationships in order to better understand behaviors and limits.


Wil Wade is a senior developer and lead at Carbon Five - a strategic digital products agency. While at Carbon Five, he has worked on projects covering a multitude of industries, including finance, insurance, logistics, and education, at companies ranging from small startup teams to the Fortune 500.

About the conference

Software is changing the world. QCon empowers software development by facilitating the spread of knowledge and innovation in the developer community. A practitioner-driven conference, QCon is designed for technical team leads, architects, engineering directors, and project managers who influence innovation in their teams.


Wade: My name is Wil Wade, and I'm a software developer at Carbon Five. In our consulting, we're known for doing things such as applying Agile pragmatically. That's the big thing that we get known for. I'm here to say that Agile's not the whole story, that there's something behind, or bigger than, Agile.

One way to think about this is systems thinking, the holistic view that we're going to dive into. First off, what are systems? They're all around us. We love to teach our kids about some of the mechanical systems. This is a Walt Disney book from the late '70s. Besides perhaps stereotypical gender roles, it has a lot of great systems. We've got: what's inside the floors? What's the electrical system? How does the refrigerator work? How does a rotary phone work? Things that kids need to know these days, these important pieces. We love to teach our kids about these human systems that we build.

We also love to teach about natural systems. How many people learned about the water cycle from The Magic School Bus? It's a few of us. We love to teach our kids about this. We love to build these systems, and we even love to build imaginary systems, going very deep sometimes, whether it be in science fiction or Game of Thrones - all these wonderful, imaginary systems that we keep diving into piece by piece.

There's a lot of different systems in the world. Often, systems themselves are composed of systems. In fact, I would say it's systems all the way down. One of my favorite things to do - and the kids always laugh at me when I take them to the grocery store - is to stop for a second and think, and I encourage you to do this. You stand in the grocery store and you say, "In this one store we have stuff from all around the world, produce that was literally on the other side of the world perhaps just days ago, that is fresh and ready for me - all these different systems that are required to get my food to me. And I don't even have to think about it, because I can just walk into the store, use another economic system, acquire those items, and walk right back out."

Systems thinking though does kind of tend towards a more generalist view. I love this quote from Tyler Cowen, at George Mason, "Sometimes generalism gets you into trouble." You end up being able to do nothing but make observations and nothing else. Hopefully, we're going to do more than just make some observations, because it's systems all the way down. Even if you tend towards being more specialized, or tend towards being generalized, there is much in systems thinking that can help you understand what you're working with, and what you're interacting with. Before we can get to any of that, we got to talk about language.


Especially with systems thinking, words tend to be reused a lot; we tend to take natural words that we might use in conversation and mean much more technical things by them. In fact, there's a whole corpus of wonderful jargon that people have thought up, and these are great words with deep meaning: macro system, cybernetics, sub-optimization. These are words that are fun to use in conversation, but we're going to ignore as many of them as possible, and we're going to break a system down into this. This is my definition: "A system is a structure of things that is connected by relationships that produces behaviors." We have three key pieces: we have things, we have relationships, and we have behaviors. Systems thinking tends to look primarily at relationships, those hidden pieces.

I have something here - this is a picture of our living room floor one day. For all of the vendors giving away Lego, I have a wonderful story to tell you. It's not a system. I know it says on the box, "The Lego System." It's not a system. Why isn't it a system? Because there are no relationships between these bricks yet. They're merely a pile of bricks. They have the potential for relationships, they have the potential to build things, they have the inevitable potential of a foot coming into relationship with the pieces on the floor. Until they come together, until they have relationships between each other - that is when we start to see behaviors. Behaviors perhaps of spaceships and rockets, cars and houses, all these wonderful complex relationships that my kids love to build. Those don't come into actuality until those relationships exist.

I'm a big lover of games, so let's stay on a brick-game tangent here. Let's look at the classic Soviet game, Tetris. We have falling bricks. How many people have played Tetris? For the few of you who haven't, this is a simple diagram of Tetris. Blocks appear at the top of the screen, they fall down, they collect at the bottom, and if you get a complete row at the bottom, it disappears and is cleared. We can start to build up some relationships between these things. We have the speed a new block falls at: the faster blocks fall, the higher the stack is going to get, and the higher the stack gets, the less time each block takes to land, because it has less distance to fall.

We have how well you place these blocks - for me, not very well; for my wife, really well - and how many rows you get cleared depends on it. We can build out these systems. We will play around with a couple of these today, where we step back and say, "There's more to it. There's always more to it," and we'll have to draw some boundaries. You might think of the amount of time you get per block in Tetris; the falling speed has a relationship to that. A relationship can be positive or negative: the faster the game goes, the worse you place those blocks.

We can look at a game over time. We have in blue the speed of the falling blocks. You can see the levels of the game, they get faster, or the speed gets faster as the game goes on. We have a bit of an inflection point in the height of the falling blocks right about there where suddenly the game starts to get a whole lot harder and my placement quality gets worse. One of the other pieces that we'll dive into is time. Relationships between things evolve over time.
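Those Tetris relationships can be sketched as a toy simulation. This is an illustrative model with invented coefficients, not anything from the talk; it just shows the reinforcing loop between speed, placement quality, and stack height:

```python
# A toy model (made-up numbers) of the Tetris loop described above:
# speed rises over time, placement quality degrades as speed rises,
# and poor placement grows the stack while good placement clears rows.

def simulate(ticks):
    height = 0.0
    heights = []
    for tick in range(ticks):
        speed = 1.0 + tick / 10.0              # blocks fall faster each level
        quality = max(0.0, 1.0 - speed / 5.0)  # placement worsens with speed
        # Bad placement (1 - quality) adds to the stack; good placement
        # clears rows and shrinks it. Height never goes below zero.
        height = max(0.0, height + (1.0 - quality) * 0.5 - quality * 0.4)
        heights.append(height)
    return heights

heights = simulate(50)
# Early on, good placement keeps the stack flat; past an inflection
# point, the loop runs away and the stack only grows.
```

With these particular numbers, the inflection point the talk describes shows up around the middle of the run, where quality drops low enough that the stack starts compounding.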


We got some core language, we're feeling pretty good about this. Let's dig into something that Tetris is not. Tetris is very linear, it's got a start, it's got an end, but most of the systems that we deal with, most of the things we talk about, they are nonlinear. They don't have pretty starts, they don't have pretty ends, they go through iterations. Here's a game that doesn't tend to have as much of a start and end - it has a starting condition I guess.

How many people are familiar with Conway's Game of Life? A few of you had to program it in a CS class. For those of you who aren't, that's fine. Conway's Game of Life is very simple. It's played out on a grid, and there are two states: blocks can either be alive or dead. There are a few simple rules for how things transition between being alive and being dead, and it plays out over time. It's nice, it's cyclical, it's a closed system. There aren't really any interactions outside of it, and so it builds up this nice little setup for us. We have some simple things. As opposed to rules, we have some simple relationships, and we end up with a multitude of behaviors.
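As a sketch of how little machinery those rules need, here's a minimal implementation (an illustrative version, not anything shown in the talk):

```python
from collections import Counter

# A minimal sketch of Conway's Game of Life: simple things (cells),
# simple relationships (each cell's eight neighbors), complex behaviors.

def step(live_cells):
    """Advance one generation. `live_cells` is a set of (x, y) tuples."""
    # Count live neighbors for every cell adjacent to a live cell.
    neighbor_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live_cells
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A live cell survives with 2 or 3 live neighbors; a dead cell
    # becomes alive with exactly 3. Everything else dies or stays dead.
    return {
        cell
        for cell, count in neighbor_counts.items()
        if count == 3 or (count == 2 and cell in live_cells)
    }

# A "blinker": three cells in a row flip between horizontal and vertical
# forever - a behavior nobody wrote down anywhere in the rules.
blinker = {(0, 1), (1, 1), (2, 1)}
```

Simple things, one relationship rule, and yet oscillators, gliders, and whole self-sustaining patterns emerge from nothing more than this.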

Not all systems are made up of simple rules - we're often dealing with complex systems, which have vast complexity. But even something with simple rules, simple relationships, can build up vastly complex behaviors. Enough play, let's talk about work.

This is what I'm going to say Agile is today. These are some of the core tenets of Agile. We've got small steps, we've got feedback loops, change is good - some of these nice, classic Agile things. We can also diagram Agile; here's a really simple Agile chart as a system. Remember, we're talking about a structure of things connected by relationships that produces behaviors. Let's add in a whole bunch of relationships here.

Let's look at features, the amount of features you produce, those are things you produce. Sickness tends to reduce the number, that's a negative relationship there. Learning, what you're doing right now, currently you're not doing work. Learning is something that tends to reduce your feature count now, but when you get back, you might have new ideas, great new things to do, and so it might increase it later. Hopefully, everyone's taking vacations at times, and when you're on vacation you're not working, but when you get back, maybe then you're actually well-rested, unless you have kids, in which case you're happy to be back at work and resting.

Those are things that will impact the number of features. We can build up these kinds of relationship concepts; we can look for these relationships in Agile. Critically, just like Conway's Game of Life, in Agile there's no real start and end to this system. We have new features that kind of come out of the ether, but in the end, we're just iterating on a process. We're building new things, we're fixing new bugs, we're fixing old bugs, and we're continuing on this process that just carries itself on in a nice, nonlinear fashion.

Three Keys to Understanding Systems

Next up, we have what I'm going to call the three keys to understanding systems. Really, these aren't just for understanding systems, but I'm going to claim them for that today. At the end of this, though, there's a pop quiz. I know you didn't expect a pop quiz, but there's going to be one.

Here's my first key, right versus wrong. This is a wonderful paradigm that humans really like to talk about, "I'm right, you're wrong." We talk about this historically, "They're right, we're wrong." Us versus them. We talk about it politically. There's all this talk about right versus wrong. Even in technology, we do a lot of right versus wrong from the very beginning. Some people right now are actually thinking, "He's wrong because he has Tesla up there instead of Westinghouse." There's lots of right and wrong. We love to talk about this, we even forget at times about some of the right and wrong talks we used to have. One of the things that I found is that the more experience you get, the less you talk about right and wrong, at least for the most part.

I've now broken every single one of the Thanksgiving table rules. I've talked about politics, and I've talked about religion. We have this concept of right versus wrong. We tend to think in this concept, but when we get to systems, I'm going to say that right versus wrong is entirely the wrong paradigm. That is not a paradigm we can work with. I propose, instead of right versus wrong, a paradigm of being useful, or as my kids would say, really useful - it's the important thing to be for Thomas. When we switch from a paradigm of right versus wrong to a paradigm of usefulness, we end up in a place where we can do collaboration. As long as you're wrong and I'm right, you can't collaborate at that level.

When we remember that our mental models are all simplifications - they're all wrong in that way - hopefully I at least have a mental model, a systems model, that is useful. If I have a useful model and you have a useful model, hopefully your useful model is different from my useful model. That's the point of getting more than one person in the room. If everyone thought the exact same way, had the same viewpoint, that's not going to help you. That's where you want to gather thought diversity, thinking diversity, and model diversity, and you find out that your model's useful in one way, my model's useful in another way. Together, we can build a better model - not a right model, we don't care about right versus wrong - a better model, a more useful model.

Some of this may be passed off a bit as utilitarian, it may be passed off as pragmatic. I think it's that core to collaboration, of where you're bringing something to the table, I'm bringing something to the table. Together, we can take the systems that we've thought up in our heads, and we can create something that is really useful.

Number two, discovery versus creation. When we're building systems, there tend to be two kinds of systems that we're building or thinking about. One is discovery, where you're trying to dig in and see: what's the system that was already there? How does it interact? How am I building a digital version of some older paper version? That's a lot of what we end up doing.

On the other side, you can have creation, and there are two classic companies that represent these two. IBM started out not really changing how the Social Security system or the census actually worked; it was merely adding machines. As IBM was getting started, Hollerith had to sit there and figure out: what exactly is the system that's there now? What are my technical limitations? How can I enhance the existing system to provide value through what were effectively calculators, nothing more? We weren't really at computers at that point.

On the other side, the more creation side, where you're building entirely new systems, classically we have Xerox PARC, the research center famous for literally half the things that you use on your desk. There, you're building entirely new systems that still have to interact with the old systems that are out there. How do we take paper and change it from paper into digital? How do we work with it in this new digital framework? Let's create things like a pointer and a mouse - simple pieces, but entirely new systems and abstractions working with these technologies.

Number three is the other side of right versus wrong. As opposed to judging things as right versus wrong, you have to internalize that your own mental model is wrong. As you're building up that mental model, that system view, you're holding, shall we say, weak opinions weakly held. Maybe you're digging into a new domain - let's say a legacy codebase. You don't know what all the pieces do, but you're like, "This piece over there, I think it does this. I don't care, I'm just going to shelve that. That's the authentication side of the app. I don't care about that right now."

As long as we remember that it's a temporary assumption, that we may find out we were wrong later, we can start with this fuzzy picture. We're wrong, and we slowly clarify over time - maybe not all the way down, but we clarify over time how each of the little pieces of the system works, quickly leaving behind any assumption that turns out to be wrong.

There we go, three keys to understanding systems. What was number one?

Participant 1: Right versus wrong.

Wade: Wrong. Yes, that was the whole point of that, right versus wrong. No, it's usefulness. We want to talk about getting rid of right versus wrong, talking about usefulness. Number two - nobody ever remembers number two.

Participant 2: Discovery.

Wade: Discovery versus creation. As you're building a new system, which, thinking about, are you creating that new thing? Are you looking for something that was there before? What was number three?

Participant 3: Temporary assumptions.

Wade: Temporary assumptions. Hopefully building up correct assumptions but being able to drop them as we go through.

Modeling: The Very Short Version

Now we got some core principles. Let's talk about the very short version of modeling. First off, what are you going to do? You're going to look for some things. What are some things?

Participant 4: Dice.

Wade: Dice? There you go. Dice are things.

Participant 5: Chairs.

Wade: Chairs. What's sitting in chairs? People - there we go, you have a relationship to the chair right now. Hopefully, it's a strong relationship. We have lots of other things: we have computers, and we have sub-systems, which could be viewed as things. We have all these different things to look for as you're building out, whether in your mind or on actual paper - hopefully you're writing these things down. Then, of course, you're going to look for some relationships. Yell out some relationships for me. What do we have?

Participant 6: Above.

Wade: Above. That's your relationship to the chair right now, which is a good relationship to have. Hopefully, you're not underneath the chair. Let's think about connections - what about in the server world? What's a relationship for your servers? Everybody: fails, yes, 101. It's HTTP currently, mostly, TCP/IP, UDP.

We have all these nice connections between servers, and we can view those in this relationship manner. Some of those relationships can be positive and trigger other things. They can be negative if you're getting DDoSed. We also have things such as rules. When you think about the corporate environment, whatever rules the corporation has that say you have to dress a certain way - that's a relationship you have to the company. All of those are relationships.

Now we've got some things and some relationships, and we ask ourselves: can we describe some of the behaviors we're seeing? That's the power of systems thinking: when you can take the things you've found and some relationships between them, you can say, "That helps me understand the behaviors I'm seeing."

The idea behind systems thinking is that behaviors stem from those things and those relationships. The interactions between things create behaviors; we don't just see behaviors randomly. Of course, just like in Tetris, where we eventually had the end of the game, time plays an important part in systems. As time goes on, systems change - systems aren't static, humans aren't static. A relationship that was previously positive can flip. Let's say, in Tetris, you're thinking about the next block - sometimes in Tetris you'll know what block is coming next. As the game speeds up, that could actually become a negative relationship, where you're paying attention to what block is next, and by the time you've seen it, you've already lost the chance to place the current block.

This is one of my favorite ones: incentives matter in systems. They tend to be one of the more hidden relationships, so it helps to look for incentives.

Here's a story. Once upon a time, as a consultant, we were working with a company, and they brought in another consulting company as well, so there were two different contracting firms. We were doing Agile, we had retrospectives, we had stories, and we found that the new contracting company was having some trouble. They were opening a lot of pull requests and then just leaving them open and moving on to the next story. People would leave feedback on them, and it would take them a while to get back to it; they'd sometimes even start a new story while they had pending feedback on a pull request. In the Agile way, we brought it up at retrospective, and we said, "We have too many stories in flight. We should try to keep the number of stories in flight down." For a little while it worked, and then it slowly trended back towards having too many.

We were trying to understand: why is this? Eventually, we found out that they had incentives to do this. The contracting company had decided that they would have metrics on their employees. Nothing terribly wrong with that. One of the metrics was how many stories they completed. That's good - we want to complete stories, that's the point of shipping features. The problem was that their definition of finished was different from our definition of finished. Our definition of finished was when the pull request was merged and the story was accepted. Their definition of finished was when a pull request was opened.

If you tell someone, "Your job depends on you opening pull requests," then, no matter how many times in retro you go back and you say, "We should have fewer stories in flight." Eventually, when their job depends on it, whether they want to or not, that incentive is going to empower them to say, "We should actually just open up another pull request. I'm going to wait on that email. Let's go start a new story."

Incentives are one of those hidden, or more hidden relationships as you're looking for relationships, that can impact greatly the behaviors that you see. Sometimes, those simple Agile concepts, which work really well - retrospectives are a great thing for producing iterative change - but those don't always work if the incentives don't allow for it.

Close to incentives is your goals. I'm not talking about the U.S. women's soccer team, who is very good at goals. We have goals, and in Carbon Five, we tend to talk about this as being product-focused. We want to focus on the product, we want to talk about being user-centric design, but if your goals aren't necessarily in alignment, you're going to have some friction there. As you think about goals, you can think about how to make sure those goals are explicit, and that everyone agrees on them.

In addition, this more goal-centric thinking that systems thinking hopefully leads you to means you tend to disconnect from how, and talk more about what. In the end, if you have a goal, there are probably lots of different ways to get to it. Sometimes you'll think, "There are wrong ways to get to this goal and right ways to get to this goal," but we've already gone through that: right versus wrong is the wrong paradigm. If we think about goals, we can start talking about: what's a useful way to get to this goal? How are we going to let go of what used to work and talk about what we actually need to do to meet the goal, to make the product?

We see this in a lot of innovation, I think. I'm going to talk about some innovation that's not well talked about. How many people have seen a sewing machine? How many people have seen the result of a sewing machine? If you haven't, you can look down - it's right there. Traditionally, when we think about hand sewing, you have a needle and one thread, and you're going up and down, in and out. The switch to mechanical sewing came when they got rid of that paradigm of caring about how we sew, and thought about the goal instead. The goal is to put two pieces of fabric together. Most sewing machines now actually use two threads or more - primarily two threads in your modern machine - and no longer does the needle go all the way in and all the way out; it just goes up and down. That creates - I say modern, but it's been around for well over a century - this modern way of putting clothes together, one we would never have gotten to if we'd stayed focused on how we were doing it instead of asking, "What's the goal?"

We mentioned this one earlier. Since systems are composed of systems, and interact with lots of other systems, it tends to get messy. You have to pick: where's the edge? Where's the boundary of the system we're dealing with? Once, working at a different company, we were part of a department that was sending out text messages. That's a good thing to do. We'd send out a text message to the customers, the customers would reply yes or no, or a few other options, and we'd parse that reply. All great. If we couldn't parse the reply - humans tend not to send very consistent replies - we'd send back a message that said, "We don't know what you're talking about, but it sounds like you need some help. Call this telephone number instead." That's all great, except there was another system out there from another company, and they also sent out text messages. They'd send out a text message, and if you replied to it, instead of trying to parse it, they'd just tell you, "We don't deal with replies. Call this phone number instead."

Some of you are already thinking about this, and it's true: eventually the inevitable happened. The telephone number for our system got into the other system's list. That other system eventually sent a message to our system, and we said, "I don't know what you said. Call this telephone number." To which the other system replied, "We don't care about your reply. Call this telephone number." To which we replied, "I don't know what you said. Call this telephone number." Fortunately, text messaging is fairly slow, so it was only several thousand Twilio dollars later that we discovered that these two systems were interacting on their own. We decided to redraw our boundary of who our user was, from just the real people we were sending messages to, to include the possibility of other systems interacting with our text messaging.
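The loop is easy to reproduce in miniature. Here's a hypothetical sketch (the function names, messages, and phone numbers are all invented for illustration) of two auto-responders each treating the other's reply as an unparseable human:

```python
# Two auto-responders, neither aware the other is a machine.
# (Illustrative only; 555 numbers are fictional placeholders.)

def our_system(incoming):
    """Parse a reply; auto-respond when we can't understand it."""
    if incoming in ("yes", "no"):
        return None  # parsed successfully, no auto-reply needed
    return "We don't know what you said. Call 555-0100."

def their_system(incoming):
    """The other company's system never parses replies at all."""
    return "We don't deal with replies. Call 555-0200."

def count_messages(first_message, max_rounds=10):
    """Bounce a message between the two systems, counting texts sent."""
    sent = 0
    message = first_message
    for _ in range(max_rounds):
        reply = their_system(message)   # they always auto-respond
        sent += 1
        message = our_system(reply)     # we can't parse their response
        sent += 1
        if message is None:
            break
    return sent

# Neither side ever stops replying, so the only limit is the round cap
# here - or, in real life, the Twilio bill.
```

The fix in the story was a boundary change, not a code change: once "users" included other machines, the design question became how to detect and break the loop.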

That's the very short version. There's lots more pieces to building a model, but there's the very short version. Eventually, though, you're going to want to - that's one of the reasons you're here - you're going to maybe find some change, you're going to find some bugs. Let's talk through some common system problems.

Common System Problems

Here's my favorite common system problem. I see this in Agile, probably more than anything else - delayed feedback. You start a story, you finish the story, you push it to whoever's going to do the acceptance, and you start the next story in the chain. That person may be on vacation, so you start the story after that, and the story after that. The person comes back from vacation, looks at the first story, and says, "It's entirely wrong. What were you thinking?" You're like, "Oh, shoot. Now I've got all those other stories to go fix."

Part of the point of the Agile principles - shortening feedback loops, taking small, iterable steps - is that delay disconnects the cause from the effect. If you don't know that something happened in the past, it's hard to say that the new thing happening now is a direct result of it. The closer you can connect those events, the better. This tends to be one of those classes of system problems that you might see.

There's another one. I'll tell you a secret: there are always limits. You might not know what they are - that's a problem of its own - but limits always exist. Maybe it's the number of developers you have, maybe it's that you have too many developers, maybe you don't have enough features, maybe you don't have enough users. There's always some limit. From the keynote this morning: Moore's Law has a limit. It has a physical, atom-level limit at some point.

These limits always exist, and they tend to inform our products; they shape us and form us. Here is a battleship going through the Panama Canal. The Panama Canal is 110 feet wide. When they built the battleship, they had to know how wide the Panama Canal was in order to build the ship - 109 feet wide, I guess, slightly less. We even have a term for this: cargo ships that are called Panamax. That's the maximum size they can be and still fit through the now slightly expanded canal - it's not 110 feet anymore. When you're looking for a systems problem, or coming up against one, think about: what's the limit? Where's my constraint? Make sure those are known, so that when you go change them, you'll know to go and look for a new one, because there's always going to be a new one. There's still a width to the Panama Canal. They aren't going to just chop off all of Panama and ship it out - and then you'd still have the width of Panama.

As you're changing these systems, you're going to find out that systems are resilient. It's kind of a survivorship bias: if a system has been around for a long time, that means it's been resistant to a lot of things, so it's probably pretty hard to change. The longer it's been around, the harder it can be to change. Sometimes, systems get brittle - they're resilient until they're not. When they get brittle, one of the signs to look for is a lack of give. Most systems that are very resilient will give a little when you push. With the contractors and those incentives they had, we pushed against them, and they'd change their ways for a little while and then slowly work their way back.

If you're making change within systems, it's important to think about it at that level: is this something where I'm actually going to make a real change? Is the system going to reach a new equilibrium, or is it going to slowly flow back to the previous equilibrium?

Then, any time you're pushing against a web of systems, you're going to get some unexpected side effects. We see this a lot in technology. I feel like technology has more than its fair share of unintended side effects - not just at the low level, where you say, "Let's add some additional capacity here," and it suddenly overloads you somewhere else, but at the human level as well.

We even see it in the word we use: these are startups that are disrupting things. Disrupting means disrupting a system that exists, and that can cascade into multiple effects. When someone starts Facebook, they don't think about all the different interactions Facebook is going to have. They don't think about whether it's going to impact how elections work; all they're building is a system.

As you're building a system, as you're interacting with systems, when I change something in that system, when I make something new in the system, what are the ripple effects of that? Looking at those relationships, how does it affect the near relationships? How does it affect the far relationships? You can end up with a lot of unintended side effects.

Here's a story about a nice, unintended side effect, where we built something that people liked. I worked for a college, and we disliked our application form. We had an online application, and it was slow, it was clunky; it was taking people one, maybe two hours to apply to the college. We were at the point of, "You can do it faster on paper than you can online." That's a bad sign. We worked hard and built an entirely new application process. We got all these different departments on board - "Can we strike that question? Can we use these questions instead?" - all the multitude of stakeholders that you have in a university setting. We got it done, and it was great. People could finish it in 10 to 15 minutes, which is a pretty good improvement over before, and, as expected, more people finished the application.

That's great, it's a good thing, but there was one piece we forgot about: the previous model for estimating the number of people likely to matriculate and actually come to the college was based on one parameter - how many applications were filled out. The number of applications had increased by a factor of four, so the model the administration was working from said the number of people we should expect to come would increase by a factor of four. Of course, it didn't. We had made something easier; we had changed the system. The system was no longer the same system, so it no longer behaved as previously expected, and we saw approximately the same enrollment - a small increase, a more normal, more expected level. We had just made something a whole lot easier and less of a pain point. The unintended side effect was a whole lot of people being really scared that they were going to need more beds in the dormitories for this massive number of students.
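The broken model is worth making concrete. With invented numbers (the talk gives none), the one-parameter estimate looks like this:

```python
# An illustrative sketch (made-up figures) of the admissions model:
# expected enrollment was a fixed yield applied to application count.

def predicted_enrollment(applications, yield_rate=0.25):
    """The old one-parameter model: enrollment scales with applications."""
    return applications * yield_rate

before = predicted_enrollment(1000)  # the model expects 250 students
after = predicted_enrollment(4000)   # applications quadruple...
# ...so the model predicts 1000 students. But the only real change was
# that applying got easier - the yield rate itself had shifted, and a
# parameter calibrated against the old system no longer applied.
```

The model wasn't wrong about the old system; it was a model of a system that no longer existed.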

Those are a few of the common system problems; there are many more classes of them. In the end, I keep coming back to the structure: things connected by relationships that produce behaviors. This is the thing I'm harping on, because it's the core of what I view as systems thinking: thinking not just about the individual things, but about the connections between the things, and then trying to use those to understand behaviors. Always being wrong, but always getting better at understanding those behaviors, and watching for all those changes, side effects, and individual pieces.

Before we get to questions, there are a lot of great resources out there. Donella Meadows' book, "Thinking in Systems: A Primer," is a great one in a very general sense, with a lot of great examples, graphs, and charts. I think I prefer the second resource here, "Once upon a complex time," even though the actual content's not very good. One of the things we don't talk about very much with systems thinking is that it's very close to a story. Things connected by relationships that produce behaviors - that's a story right there. The power of stories to communicate systems is, I think, underrated, because stories tend to talk about a lot of relationships. They tend, hopefully, to surface those underlying pieces that we often miss when we're just putting out a list of requirements. That's somewhat of what an Agile story, if you're writing Agile, is trying to encompass: you write a user story to encapsulate and capture a story. Any time you're trying to think through a system, I suggest you start telling yourself its story, and that will tend to help you look at more of those relationships and pieces.

Then I suggest "How it works in the home." There's a whole series of "How it works" books that Disney did back in the '70s and early '80s. They're great for kids, they're great for adults. You get to see how all kinds of things work in a very simplified manner, and they show some of those relationships as well. My kids have worn out two sets of them so far. There may not be many left on the market by the time my kids are done with them, but that's a great resource as well.

Questions and Answers

Participant 7: What do you think is the hardest thing for managers when they're trying to switch over to this kind of thinking or adopt it if they haven't really considered this before?

Wade: I would say breaking free of a lot of those paradigms of right and wrong is one of the really core things. I know I'm harping on it again, but it's really hard to view yourself as wrong. It's easy to view other people as wrong, and that tends to infect your thinking really badly. You don't develop new ideas, and you don't understand alternatives as well.

That's one key. The other key I would focus on is relationships. At first glance, a lot of relationships will seem very surface-level and very easy to understand, but it's that second level of digging a little bit deeper into incentives, understanding what the goals are. Sometimes it's that collaboration aspect: you don't know everything. You've got to get other people in the room to say, "What are your goals?" and hopefully find out what some of the unstated goals are. There are some great talks I've seen here at QCon that work through that. Those are the keys: making sure you stop thinking in right and wrong, and then getting other people in the room so that you can uncover those relationships better.

Moderator: Also, one additional resource for managers that I would recommend is Will Larson's blog. He actually wrote a little library where he takes some management processes, and then he models them using software to kind of understand the different tensions. Will Larson, I think it's

Participant 8: Actually, you partially answered me. My question was, in addition to analyzing dependency graphs and maybe some communication metrics, what are the ways to quantify systems and analyze systems?

Wade: I think - and Will Larson's blog does a great job at this - it's finding patterns. The human brain is great at pattern matching, and you can leverage that when you learn about common system problems. That's a place where you're pattern matching these more generic ideas; you're getting exposure to them so you can ask, "How does this map onto what I'm actually working on now?"

Building up that library and that experience is almost as important as, or perhaps even more important than, actually manually graphing it yourself. It's that level of getting exposure to different types of systems. It's hard to get rid of right and wrong, but it's sometimes even harder to expose yourself to more systems and to more ideas. You're doing a great job because you're here at QCon, hearing lots of great talks, which is part of that exposure. But allow your brain to work through the pattern matching on its own sometimes - even take a break and take a step back before writing those dependency graphs.

Participant 9: I'm curious about the incentive portion that you brought up. Have you seen or do you know of good resources that talk about incentives as is applicable to software teams?

Wade: Economics is really big on incentives, but it can be a fairly dense subject. It tends to talk a ton about incentives, and how pricing works with incentives. As far as software goes, I think you're going to probably find more resources on the side of actually talking to direct managers, not just high-level managers, but direct managers of the software engineers and say, "What are your software engineers responding to?"

Different people respond to different incentives. Some people respond to more time off, maybe, or to more or different challenges. You get into that very difficult territory where incentives are extremely personalized. We talk about them a little less when they're not those overarching, company-level incentives that the company creates, but each individual person's incentives are a little bit harder to dig into. That's where it takes a personal relationship and personal connection, which you can have with every one of your engineers, or whoever actually holds that management position can. That's where you have to go to start talking about their incentives at a more personal level.

Moderator: Just to add a book recommendation, as a manager, "Drive" by Daniel Pink talks a lot about mastery, purpose, and autonomy as a lot of the things that drive and ultimately add to motivation of people.

Participant 10: I just wanted to chime in with Charlie Munger, who talks a lot about mental models, he writes in a blog in Farnam Street. His quote is, "Show me the incentive, and I'll tell you the behavior." He writes a lot of interesting information about how to deal with, how to understand incentives and try to figure out what the incentives are that are going on. That's everything from your compensation system, to how people get promoted, to how the metrics that are being used to track all those things are driving behaviors.

Moderator: Also, Greg mentioned Farnam Street. It's a blog by Shane Parrish, who is one of the people leading the movement to catalog different mental models on his website. He recently also came out with a book, as well as another book that was published in the last month by Gabriel Weinberg, the founder of DuckDuckGo, so, lots of material coming there.

Charlie Munger's original thesis, which is really why I wanted to have this talk, is about, as you navigate the world, you add to your collections of mental models, and you find the particular problem, and you're, "This mental model kind of maps nicely to this problem," and that sort of helps you reason better about the world.

Participant 11: I wanted to get your reaction to something. You talked about right and wrong and getting people started. One of the things I seem to find when people start thinking about systems thinking, they let the perfect be the enemy of the good, and that comes from, I think, right and wrong thinking. Have you seen a lot of that?

Wade: At Carbon Five, we tend to be extremely pragmatic and product-focused. That helps to remove a lot of that concept of perfection. We've worked with a lot of startups as well. When you have a runway, there's never an end to perfect, especially if you have a very short runway.

Participant 11: In larger companies, as was asked about how do you get started with system thinking, the fear of being wrong often leads to that type of thinking.

Wade: I would agree with that. It tends to be a lot of that. That's where I fall back on more Agile principles: what's our goal here? What's the small goal? Where can we come together to say, "This is the next step"? Then you can stop worrying as much about how we're going to get there, because the how is merely a discussion; what we're actually doing tends to stay rather static.

Participant 12: One of the things you said, weak opinions weakly held, that kind of resonated with me. I'm team lead on a team. Sometimes people will talk about, "How do we do this little thing?" You're, "I don't really have strong feelings on it, so however you want to do it." Can you talk to that? Is that a good way to think about things like this? Whether there are two ways to do it and they're very similar, and it's just, "Whoever does it, just pick one." Is that healthy?

Wade: I think it can be. There are two levels to it. There are times where it really doesn't matter, and letting the person pick whatever and carrying on is fine. There's also - and this is where I think systems thinking really excels, in thinking more holistically - the fact that you're saying, "This is the decision now," and that decision could stick around for a long time. We're going to forget why we made the decision; decisions tend to stick around and reasons tend not to. When you're thinking more holistically, you're going to say, "That's great for now, but where are we trying to go eventually?" Maybe there's some level of difference there.

There's also a level of, how quickly can we change directions? If we get to a point where the system's getting brittle because of that decision, can we replace it with something less brittle so that we can go forward? Maybe it's not perfect, but is it good enough for now? We'll learn more as we go. We're always expanding our system and always expanding our understanding of it, and on top of all that, it's changing underneath us. Which means, for your assumption now that "this is definitely the better way to go," you need to make sure you understand: is that assumption really going to hold in the long term?

That's not an answer for you on what to actually do, but that's the key to collaboration. You're going to put those mental models together, hopefully come up with a solution, and then eventually someone - I generally prefer the person who's actually doing the work - picks one, if you're literally at 50-50, and carries on; you can't have everything perfect.



Recorded at:

Sep 18, 2019