Who Broke Prod? - Growing Teams Who Can Fail without Fear


Summary

Emma Button explores the factors that contribute to psychological safety on a delivery team. She looks at some of the steps that leaders and team members can take to foster a culture of blameless failure that encourages innovation and collaboration, covering the soft skills, CI/CD practices, retrospective processes and tooling that can help build a culture of trust and ownership within a team.

Bio

Emma Button is the COO and Co-founder at nubeGO Cloud Consulting. She leads engineering teams through cultural change, inspiring team members to transition from traditional working methods, through Agile and Lean practices, and into the DevOps mindset.

About the conference

Software is changing the world. QCon empowers software development by facilitating the spread of knowledge and innovation in the developer community. A practitioner-driven conference, QCon is designed for technical team leads, architects, engineering directors, and project managers who influence innovation in their teams.

Transcript

Button: The title of my talk is quite a personal one to me. It comes from a couple of years ago when it was a bank holiday weekend and because it was a bank holiday, I really couldn't find anybody else on my team who wanted to be the developer on call. So, I took the phone myself. It also happened to be my daughter's fifth birthday. I had to, at one point, answer the phone, deal with a production outage while simultaneously handling a group of about sixteen 5-year-olds in a bowling alley. That takes a certain kind of skill, it's a parenting skill. It's one that I have available to me, thankfully. Anyway, I spent my whole bank holiday weekend either on Slack or on the phone trying to diagnose an issue.

As a manager, I'm kind of hands-off these days, so I wasn't the best person to be fixing it, if I'm very honest. But we got through it and fixed the problem; there wasn't a big issue in the end. But I walked into the office on the Tuesday after the bank holiday, tired and stressed, and the very first thing anybody said to me as I walked in through the door was, "Hey, Emma, saw that you had a fun weekend - who broke prod?" And my shoulders sank, because through that whole weekend, not once had I stopped to think about who had broken prod. In fact, I wasn't that worried about what had broken prod. I was heads-down trying to work out how to fix it and get things back online.

Before we move on, let me introduce myself properly. I started life as a Java developer. These days I only play at development; I like writing Alexa skills and I teach AWS, but really my sweet spot is helping development teams transition into a DevOps mindset and embrace agile and lean ways of working - certainly from the people and process side, a little bit of the technology side, but mainly the cultural side of how we optimize the way that we work, and that's really important for DevOps.

Bad Stuff Happens

Let's face the facts: bad stuff happens, and you heard James talk about it this morning if you were on the DevOps and DevEx track. You can't avoid it; bad stuff just happens. It doesn't matter whether you've got the most beautifully architected environment in place, it doesn't matter if you built for resiliency, it doesn't matter if you built for high availability, you can put in all the processes around stage gates and change control, but eventually, bad stuff happens. And the sooner we get used to it, the sooner we can shift our focus and start preparing for failure: learning how to deal with it and how to prepare ourselves, both from a technology point of view and personally. How do we deal with failure?

Respond Positively to Feedback

Why does it feel so rubbish when something that we've been working on breaks? Why does it hurt? That day when the production system broke and I was on call, not only was I stressed because I missed out on birthday cake, I was depressed; I was sad that the system we'd been working on for the last 18 months had broken, and it felt very personal. The reason that happens is because we're human, shaped by millions of years of evolution, and failure is a form of feedback. If somebody gives you rubbish feedback, it hurts personally. If somebody makes a reasonably negative comment about a website that you've created, it makes you hurt inside, even if you don't show it. So, we need to teach ourselves to try and respond more positively to feedback. Now, I'm not the greatest person for advice on this, because I'm really rubbish at dealing with negative feedback.

A couple of years ago, I was offered the services of a career coach, and that career coach, in our first one-to-one meeting, said to me that he thought I was somebody who wasn't very good at handling negative feedback. And the very first thing that I did was burst into tears. That's my emotional response to negative feedback. I've seen developers on teams that I've worked with whose response is sometimes anger, deflecting that feeling by getting angry. Most commonly though, people curl up into a little armadillo ball and go a bit quiet. And that's one of the signs you can spot when people haven't responded positively to that feedback.

Can you see that? When your production system goes boom, or maybe when somebody has reported a particularly embarrassing defect in your system, people go into their little armadillo ball. They curl up, they put up their emotional barriers, and they stop communicating. And when you are investigating a production issue, the moment people curl up and stop communicating is when a small problem becomes a big problem. We'll talk a little bit later in this talk about real things that you can do, using the tools that you've probably got available to you already, to help you learn to respond more positively to feedback.

Some of the things that I'll talk about are quite soft, touchy-feely, but some of them are tools that you've probably got.

Stop Seeking Blame

But before I move on and start showing you some of these little things that you can practice, let's talk about blame. The real theme of today's talk is blame. And it is human; we naturally look to blame others. It's a survival instinct. If you think back to the days when we were hunter-gatherers, if we failed to catch the week's kill, then others would look on us not very kindly, would they? And you start making excuses for yourself, you start trying to deflect the blame, because it's your way of trying not to get punished. That's fine. We all do it, it is a human thing to do, and if we want to learn how not to seek blame, we have to practice. It's hard.

Responding to Failure

What I'm going to talk to you about today is little improvement katas. Now, a kata is a martial arts term; you see it quite a lot in the lean teachings as well. A kata is a little thing that you practice to try and turn it into a habit. You practice it every day until you don't have to think about it anymore; it just becomes habit. So, lots of words - the slides will be available. These are the improvement katas for learning how to become more resilient. The first one is sticky notes. Now, it's quite unusual for me not to have sticky notes with me. I live by sticky notes, but I have no pockets, so no sticky notes today. After the "Who broke prod?" incident, I actually wrote myself some sticky notes to try and help me improve the way that I would deal with that sort of situation the next time it happened. And I stuck them on my monitor alongside all the other sticky notes that I have. Try it. Write down how you wanted to react. So, if somebody said, "Why did you make that mistake? Why was that log file not in the correct format?" - write it down, write down how you felt, and write down how you want to react next time.

The next one is just to say thank you for the feedback. Someone gives you crap feedback about something - just nod, say thank you, and move on. You can go and brood on it in your own time; don't try and fight it, don't try and defend it there and then, just say thank you for the feedback and move on. How do we stop ourselves from trying to seek blame? How do we stop ourselves from trying to single out that point of failure? We have to practice. So, you can consciously correct others around you. When somebody asks you, "Who broke prod?" turn the question around, play it back to them: "What are the factors which contributed to the problems we had in production?" We'll talk a bit more about that later when we talk about postmortems, but this is just a little thing about replaying the question.

And this one is a cliché, and I know it's a cliché: "It is okay to fail." It's a cliché, fine, but it's about messaging. We have to be able to teach others that it is okay to fail. This is a real thing, it works. I'll give you an example. This time last year, I was working on a project trying to get a piece of work GDPR compliant in time for the GDPR deadline, and the team had underestimated the complexity of the work, and we had to go and tell our senior managers that we weren't going to be finished in time. A scary thing to do. Now, my manager at the time asked me to communicate out to the teams that that wasn't okay: "Why did you miss the deadline? That's not okay. It can't happen again." But it will happen again. We all know it will happen again; people miss deadlines, it happens. So, instead of doing that, I went in and helped lead the retrospective, where instead of sending out the message that it can't happen again, we talked about the things which had happened to cause the deadline to be missed, and we took some real, tangible actions, so that we could learn the lessons and take them forward with us. The messaging is important, and it's okay to question your own boss on the messaging.

Brutal Transparency

That reaction, the whole little armadillo ball thing, is most apparent when you come into a major incident. The "Who broke prod?" situation was a major outage; it was impacting end users of the system. That's why it was so important that we had a developer on call that weekend. And if you put up your barriers and you stop communicating, people start thinking that you don't know what you're doing, people start thinking that there's nothing happening, and then they worry about the customers thinking that nothing is happening. So, I like to use the term brutal transparency. When you're working through an incident or major outage, or even if you're working through a serious customer-reported issue on a non-hosted system, use brutal transparency. That term comes from a little flash card that I have on my desk. On the back of that flash card, it says, "The only way that you can build trust is by being honest in all circumstances."

If you are investigating a production outage, you have to be honest about the scale of the failure. Be brutally honest, share all of the detail you have - in fact, over-share it. "Gush" about the scale of the problem. If you're honest with yourself and you're honest with those around you, then you can get help from people.

Collaboration and Shared Accountability

We'll talk about some ways that we can improve our ability to be that transparent about a major incident, in terms of collaboration. Lots of us have got collaboration tools, but just having a collaboration tool doesn't make you collaborative. You have to force collaboration; you have to go and set an example to those around you. Go collaborate with your peers. It'll feel uncomfortable to start off with, but consciously do it. Get up out of your seat and go and speak to somebody that you've never spoken to before.

In my example, the developer on call example, I was on the phone for nearly 48 hours solid with people I'd never worked with before - members of the support team, members of the operations team, people whose names I knew but whom I'd never spoken to face-to-face. Bringing us together that weekend was a chance for me to learn about their ways of working and a chance for them to learn a bit more about the software's workings under the covers. The group of us - probably five or six people who worked solidly through that weekend - learned a lot and felt invested in the product. And after that, I went and said hi to them in the office. When I went back on the Tuesday, I said, "Hi, I've never spoken to you before this weekend, but hey, let's talk more."

Incident Response

The improvement katas, once again. These are all available in the slides. (I know there's lots of words.) How do we increase our transparency? How many people are using collaboration tools such as Slack or Microsoft Teams already in their dev team? Pretty much all of you, right? If you have a really serious issue or a production outage, go and practice this: set up a dedicated channel or a dedicated chat specifically for all the content related to that incident. That's reasonably standard, and you can use that information later when you're doing a postmortem. But when you're in that Slack channel or that Teams channel, over-share. Write down all of the steps that you're taking, all of the assumptions that you're making. If you're investigating an issue, write down your thought process - basically, all of the things that you're doing.

That way, you're increasing the visibility so other people can learn from you, but you've also got a timeline of the thought process you went through, so somebody can learn from it. Next time, maybe you won't be the person on call; somebody else could walk through the steps that you took. But don't do what one of my teams did. They created a separate channel for managers, alongside the one for the people actually working the incident. As a middle manager, I had the dubious privilege of being invited to both of them: the one which had the real problems in it and the real chat, with probably slightly fruitier language than our executives really wanted to see, but it had the details; and then the manager channel, which went quiet for about two hours at a time, until the CEO got on the phone to me and said, "What's happening? Nothing's happened for two hours." "But it has, you just can't see it." There were just lots of people swearing in the other channel. So don't do that. Just be open and honest. People don't mind swearing and GIFs, they really don't mind.
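As a concrete illustration of that dedicated incident channel, here is a minimal sketch of posting a running, timestamped timeline into it using a Slack incoming webhook. The webhook URL, the helper name, and the example updates are placeholders for illustration; they are not details from the talk.

```python
# Minimal sketch: post timestamped incident-timeline updates into a
# dedicated Slack channel via an incoming webhook. The webhook URL and
# the example messages are placeholders.
import json
import urllib.request
from datetime import datetime, timezone

INCIDENT_WEBHOOK_URL = "https://hooks.slack.com/services/XXX/YYY/ZZZ"  # placeholder

def post_update(message: str) -> None:
    """Send one timestamped line of the incident timeline to the channel."""
    stamp = datetime.now(timezone.utc).strftime("%H:%M:%S UTC")
    payload = {"text": f"[{stamp}] {message}"}
    req = urllib.request.Request(
        INCIDENT_WEBHOOK_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)  # Slack responds with "ok" on success

# Over-share: record every step and assumption as you take it.
post_update("Assuming the backlog started around 09:40 UTC; checking broker logs.")
post_update("Restarted the stuck consumer; watching error rates for the next 10 minutes.")
```

The same pattern works with most chat tools that support incoming webhooks; only the payload shape changes.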

Increasing collaboration. I love this, pair incident management, a bit like pair programming. So, in pair programming, you have two people sat right next to each other, two pairs of eyes, one to write, one to guide. You can do that when you're investigating a defect, you can do that if you're on the phone trying to work through an issue. At one place where I worked, I can remember going through an issue. It wasn't a really scary production outage, but going through an issue with a support engineer sat right next to me. I was searching through log files because I knew my way around the code. I knew what I wanted to look for in the log files and he was writing notes on a wiki, there and then, watching me work and typing it up, and we instantly had a record of what somebody would have to do next time.

Pair incident management really helps with communications. You've got two people checking each other's communications, checking for the swear words and the GIFs, or checking to make sure that the messaging out to your end users is clear, relevant, and suitable for external parties to read. The other benefit you get from pair programming is a level of quality assurance and instant review there and then, and you can have that when you're investigating a production outage as well. In fact, probably the most important place for you to be pair reviewing each other's work is when you're making changes in the heat of the moment.

The last improvement kata here on this slide is the most important thing from the whole talk and it is the most mushy and the most soft, but it's the most important thing. I've been practicing this for probably six years now, and it's still not built into me as a habit. Try getting into that habit of using the word “we”. “We” is inclusive, “we” is empowering, and “we” builds trust. So, as a manager, I've had to build trust with a new team. Every time I go on a new project, I have to build trust. If you've got new people joining your dev team, you have to build trust with them. Use the word “we” to mean you, to mean your team, to mean your colleagues, maybe everybody in your company, maybe your company is “we”. It's so hard to do, and you will have to check yourself, but if you take nothing else away from this talk, please try this one because I think you will like it.

Blameless Post-Mortem

The blameless post-mortem. When people hear me talk about blameless failure, most people go, "Ah, blameless post-mortem, I know about that." Well, I don't really talk much about the blameless post-mortem, because if the only place where you're looking to eliminate blame is after an incident has happened, then you've got it all wrong. Blameless culture starts right from the very beginning; you need to be trying to eliminate that finger pointing right from the start of your development cycle. That said, blameless post-mortems are one of the most painful things you will ever have to experience. Whether it's a major incident review after a production outage or a sprint retrospective after a pretty disastrous sprint, you conduct a post-mortem to try and learn from your failings, which means it has to be honest and it has to bring out all of the failures that happened.

Now, beyond that, I don't talk a lot about blameless post-mortems, mainly because I haven't been in an awful lot of major incident reviews. But there's a gentleman named Dave Zwieback who has written a little book called "Beyond Blame", and in it he talks about the blameless post-mortem really being a learning review, and simply changing its name to something a bit less deathly helps us feel a lot more cheerful about sharing an honest account during a post-mortem. I want to move swiftly on; basically, my improvement katas for blameless post-mortems are, A, do them, and B, do your very best to help others remain blameless during your retrospectives. Let's go on to the good stuff.

Make Failure Visible

One of the ways that we learn to deal with failure is by making failure visible and not hiding from it, and in most cases we already have the tools we need to do that. If you make the failure of your software visible, then you get used to failing. So, if you've got a big health monitor on your wall showing the status of your system and it suddenly goes red, it feels crap. It really does feel crap, but you get used to failing in public, and the more that you practice failing in public, the less worried you are about it. That's the first one, and I'll talk about some tools that I've used to try and make failure visible.

And the other thing is to know what normal looks like in your system. If you're running a hosted system or an on-premise system, you need to understand what the normal patterns of data are. If you're running on-premise, or, as at the place I've just moved on from, with no visibility whatsoever of your clients' installations, trying to understand what normal usage of your system looks like is hard. So, you use metrics, and I'll show you a little bit about that in a minute, but the secret here is understanding what data comes into your system, who's using your system, and what parts of the system they're going to. As long as you understand what normal is, you can start looking for failure. If you look for failure first, you probably won't spot it early enough.

Visualizing failure. How many people in this room already have a big screen like an information radiator which shows either your system health or your CI pipeline? Probably about half I would say. If you haven't already got one, and this feels a bit overwhelming, don't panic. Most CI/CD tools, such as Jenkins or K Pipeline, provide dashboards for you, go use them. Start with one screen, maybe in the corner of the room, put it on the wall and put it somewhere where everybody can see it, not just your developers. That way, when the beautiful green screens change to screens of red doom, you can get used to failing.

One place I worked with, we actually put the build monitor in the kitchen and then every time you go into the kitchen, you go, "That's green, that's good." Or, "Wow, that built in 15 minutes, it shouldn't be that long." And people start having a conversation about it. And then a remarkable thing happened. One day I was in the kitchen, making a cup of tea, and there was a member of the finance team next to me and she said, "Emma, is it supposed to look like that?" "No." And there was a little red blip in the corner of the screen and she had noticed it because she went in to get a cup of tea every day, and she saw that green screen, and one day it wasn't green anymore and she was the first person to highlight it to us. So, go visualize your CI pipeline, go visualize the state of your system, and put it somewhere really public.
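If you want to roll your own radiator rather than use a dashboard plugin, the underlying mechanics are small. Below is a minimal sketch that polls the Jenkins JSON API for a handful of jobs and prints a green/red line for each; the Jenkins URL and job names are placeholders, and a real instance would typically need API-token authentication as well.

```python
# Minimal sketch of a wall-screen build radiator: poll the Jenkins JSON
# API and print a green/red status per job. URL and job names are
# placeholders for illustration.
import json
import time
import urllib.request

JENKINS_URL = "https://jenkins.example.com"     # placeholder
JOBS = ["checkout-service", "payment-service"]  # placeholder job names

def last_build_status(job: str) -> str:
    """Return SUCCESS, FAILURE, UNSTABLE, BUILDING or UNKNOWN for a job."""
    url = f"{JENKINS_URL}/job/{job}/lastBuild/api/json"
    with urllib.request.urlopen(url) as resp:
        build = json.load(resp)
    if build.get("building"):
        return "BUILDING"
    return build.get("result") or "UNKNOWN"

while True:
    for job in JOBS:
        status = last_build_status(job)
        colour = "green" if status == "SUCCESS" else "red"
        print(f"{job:20s} {status:10s} ({colour})")
    time.sleep(60)  # refresh the kitchen screen once a minute
```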

But on one of the projects I worked on, we actually made everything visible, not just our Jenkins pipeline. We put up our Kanban board on great big walls, and we put up a data flow diagram which we'd drawn based on what we understood about the system - in fact, you can see him drawing it here. We drew it on the wall so that we had a reference point. If somebody from the support team came over and said, "Hey, I've got a problem, I think the data might be stuck somewhere," we could talk it through, working through the diagram on the wall.

But it wasn't just the techies who drew stuff on the wall, or just the development team who put up their screen; it was the whole business. We got the sales guys to put up their sales targets and the graphs tracking where they'd got to, and the marketing team put up their pipeline and all the events that they had lined up. That way, if the sales guys missed their targets for the month, everybody could see it; they were learning to fail in public, too. And if one of our team broke a test, the sales guys could see that. So, make it really, really visible, and you'd be surprised: the people who were pointing fingers and saying, "Hey, you broke prod," stopped pointing fingers when it's really obvious and there for everybody to see. It's not as fun to point and blame then.

Know Your Normal

And then know your normal. This diagram is a reasonably small one, but I've worked on a project which had something like 162 microservices; trying to draw that as a dataflow diagram got a bit scary. I think we probably spent a week interviewing end users and working with the product owner and our subject matter experts to really understand what normal looked like in our system. And we were surprised: as developers, we'd gone and built a system on an assumption about data coming in at certain times of day and in certain patterns, and when we actually went to work out what a normal day looked like, we'd probably built the wrong thing, because the pattern of data wasn't anything like we'd expected. So, go visualize it.

The tool on the left is AppDynamics. Now, if your system is way too complicated to go and draw it all out on a pretty wall, then you can go use the tools that you have available. There are lots of metrics tools available these days, which use agents on each of your individual services. And just by having that data reported, you can start to understand the way data flows through your system. AppDynamics is pretty clever in that it learns what normal is. So, it will track the patterns of data for you and will start adapting to know what normal is, and that way it can alert you when stuff is no longer normal.
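If you don't have a commercial tool learning "normal" for you, the core idea can be prototyped with very little code. Here is a minimal, generic sketch that builds a per-hour baseline from historical request counts and flags readings that drift far from it; it illustrates the concept only, it is not the AppDynamics algorithm, and the data shape, names, and threshold are assumptions.

```python
# Minimal, generic sketch of "know your normal": learn a per-hour
# baseline of request counts from historical data, then flag readings
# that drift well outside it. Field names and threshold are assumptions.
from collections import defaultdict
from statistics import mean, stdev

def build_baseline(history):
    """history: iterable of (hour_of_day, request_count) pairs from normal weeks."""
    by_hour = defaultdict(list)
    for hour, count in history:
        by_hour[hour].append(count)
    return {h: (mean(v), stdev(v)) for h, v in by_hour.items() if len(v) > 1}

def is_abnormal(baseline, hour, count, sigmas=3.0):
    """True if a reading sits more than `sigmas` standard deviations from normal."""
    mu, sd = baseline.get(hour, (None, None))
    if mu is None or sd == 0:
        return False  # no baseline yet; don't alert on what you can't judge
    return abs(count - mu) > sigmas * sd

# Usage sketch:
# baseline = build_baseline(last_month_samples)
# if is_abnormal(baseline, hour_of_day, current_count):
#     alert_the_team()  # hypothetical alerting hook
```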

Make Time for Experiments

Somebody told me, after the first time I did a speaking engagement, that we've stopped learning how to experiment. One of the reasons why we're not very good at responding to production outages, is because we've lost that skill of experimenting under pressure. Now, maybe there's some truth in that. We're all taught to build highly available, resilient, redundant systems these days, we're taught to put in place practices which should prevent us from having to pick up the phone when we're on call. So, maybe we are forgetting how to fix issues under pressure. The most effective developers on call I've seen are the ones that just try stuff, take risks under pressure. And if it's been drummed into you that it's not okay to fail, and that you've got to go through a million stage gates before you can get approval to make a change, then you're not going to try stuff out in the heat of the moment. You're just going to do the safe stuff.

How do we breed a culture in which it's fun and you've got the time and space to experiment? As a dev manager, I would always allocate time for my teams to go and experiment; just a couple of hours a week is all it took. At my last place, I gave all of the teams an Amazon Echo: go build stuff. It had nothing to do with what we were working on in the normal day job, but it was a chance to experiment, a chance to try stuff you don't already know. So, go ask your team leaders, go ask your managers for that time to experiment with new tech. Practice trying stuff you don't know, because when production goes boom, by definition, you don't know how to fix it - because if you knew, it wouldn't have broken in the first place. So, try stuff you don't know how to do.

And then one of the ways that we can practice experimenting under pressure is to host a game day. If you've not done a game day before: basically, somebody switches off the plug, everything goes boom, and you have to learn how to deal with it. It's a way of testing whether your high availability, your redundancy, your resilience - whether everything that you've put in place is going to work properly. But it also helps you practice the people element; it helps you practice the way that you respond to pressure, and maybe you'll spot patterns where communication between the developers and the ops engineers wasn't optimal. You can fix that for next time, and you'll practice seeing failure in a safe environment. Unless, of course, you're practicing in production, in which case, God help you.
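If you want to run a lightweight game day without dedicated chaos tooling, a facilitator can break things with a few lines of script. Below is a minimal sketch that kills one random container in a Docker Compose stack so the team can practice detection and recovery; the project name is a placeholder, and this assumes a rehearsal environment, never production.

```python
# Minimal game-day sketch: pick one container in a *non-production*
# Docker Compose stack at random and kill it, so the team can practice
# detecting and recovering. The project name is a placeholder.
import random
import subprocess

COMPOSE_PROJECT = "staging-stack"  # placeholder; assumed rehearsal environment

def running_containers(project: str) -> list:
    """List container names belonging to the given Compose project."""
    out = subprocess.run(
        ["docker", "ps",
         "--filter", f"label=com.docker.compose.project={project}",
         "--format", "{{.Names}}"],
        capture_output=True, text=True, check=True,
    )
    return [name for name in out.stdout.splitlines() if name]

def break_something(project: str) -> str:
    """Kill one container at random and return its name for the facilitator."""
    victim = random.choice(running_containers(project))
    subprocess.run(["docker", "kill", victim], check=True)
    return victim

if __name__ == "__main__":
    print("Game day: the facilitator just killed", break_something(COMPOSE_PROJECT))
```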

Reward, Don’t Punish

We're approaching the end, and this is my favorite part of the talk: the good stuff. We should never punish people for trying stuff out. Even if you've been stuck for 48 hours trying to solve a blooming issue in production, even if you've got an angry customer on the phone, never punish people for giving it a try. In fact, the manager I told you about wanted me to say it wasn't okay to have failed. No - you shouldn't punish people for that. Celebrate somebody having tried to fix the problem, celebrate people who took action, reward the positives. Rather than punishing the mistakes - fine, mistakes happen - go reward the positive behaviors. And that feels like a manager-y thing for me to say, but we can all do this. We can do it with our peers; you don't need to be a manager to reward your peers. And this one - change starts with us - is true. Every single one of us in this room can make a change culturally. So, whether you're a DevOps engineer or a team leader, make one of these changes and make people see it, and maybe the ball will start rolling.

Lots of words on this one. The improvement katas for how we reward when we're not necessarily in a place to hand out cash: we reinforce all of the good behaviors. So, at least twice a week, go and make the effort of publicly singing somebody's praises. Now, I can tell you from experience as a software developer, that's a really hard thing to do. We don't like gushing about our peers in public. You have to teach yourself to do it; try it, and you'll feel stupid the first few times. If you can, do it in a public environment, perhaps by email, copying in somebody's boss: "Hey, I liked what you did during that production issue. I'd like to see more people doing it - great stuff." Somebody that's tried experimenting with a new thing, maybe somebody that used the Amazon Alexa rather than leaving it sat in the corner - celebrate that publicly.

Something we tried was having a big thank you wall - my love for sticky notes goes everywhere. A big thank you wall where, if somebody has done something you really appreciate, something which demonstrates good behaviors, you write a thank you to them and stick it on a wall in a public place. To start off with, you'll be the only person that does it, but after a while people will stop thinking that you're a weirdo and will start doing it themselves, and eventually it will become habit. The more people that join in, the more you celebrate the good stuff.

We talked a little bit earlier about retrospectives and about blameless post-mortems, and they really are the hardest thing to do. I've been in lots of sprint retrospectives with teams where I felt that the tough subjects weren't tackled because everybody was being too nice. You can't learn if you don't tackle the tough issues. So, if you've had a retrospective which felt awful because people were raising difficult topics, afterwards go out for lunch together, go out for a drink together - go celebrate the fact that you survived the difficult retrospective and that people were honest and shared their account. It's weird, it's a bit like Pavlov's dog: if you celebrate after something rubbish, then eventually your brain learns that if I do the hard stuff, I'll get a reward at the end of it.

The next one really is tough: saying thank you. I definitely haven't mastered this one yet, but I can remember my first major incident review, in which I instinctively corrected my COO when he was trying to point blame at members of the cloud engineering team; I butted in and said, "Whoa, we can't go there." At the time, I felt really, really weird because I had interrupted my COO, but it reset the whole mood, and afterwards he did something which I'd never seen him do before. He said thank you to one of the guys who'd been on support during the incident. And that thank you lifted the mood of the whole room, and you saw the smile creep up on the guy's face. He was thanked because he had given an honest account of what had gone wrong during a major incident. So, try it. Go and say, "Well done" to somebody that's trying to experiment, or go and say, "Thank you, your honesty was appreciated."

The last thing that we can practice is gifts - bribery works best here. So, if somebody goes to a sprint retrospective, if they've been to a blameless post-mortem, go and say thank you with a little token. Again, it's a brain thing: "Hey, I got a sticker last time. You gave me a sticker last time, so I'll go to the next retrospective, I'll go to the next post-mortem." Laptop stickers - you can get them customized like this in about 48 hours. It's just a little token to say, "Thank you for being honest."

One last thing I want to talk about in terms of retrospectives, which I did mention earlier: something I've found is that it's hard to be honest when you're alongside people that you've worked with for quite some time. You don't want to upset them, because you worry how it will make life feel once you've left the retrospective. A little tool that I've used to help with that is a free tool called goReflect. Everybody raises their concerns before the retrospective in a more or less anonymous way. It's an online tool that works whether you're sat next to each other or working remotely; put your feedback in beforehand, and then you take that to the retrospective. So, you've got your starting point for a conversation, and nobody feels like they have to be the first one to raise something scary. I have been Emma Button; we're going to take questions now around blameless culture. So, if anybody's got any questions, you're welcome.

Questions and Answers

Participant 1: Thanks for the talk, it was very interesting. Just going back to one of the points you mentioned earlier about saying "we" instead of "us" and "them": if you've got two teams working as sort of sister teams, so they're both working on parts of the system which relate to each other, and you're in one of the teams, how do you refer to the other team?

Button: Still “we”.

Participant 1: So, you can say, "We did this and then we did that."

Button: Yes, that feels weird, right?

Participant 1: Definitely.

Button: I started doing this when I was a development manager for a front-end dev team, with an interest, obviously, in the back-end dev teams as well. You can't avoid it; the two things don't work separately. And because our software was being run in a production environment, I also had an interest in the infrastructure engineers and the support teams. But when I started using the word "we", the rest of my team started speaking to those other teams. It was really weird, and that's when I first noticed it.

The first piece of inspiration for this talk, actually, was when I used the word "we", and part of the reason was that I was trying to get lots of volunteers to be on call. So, I was trying to make it sound like we were all in this together - because we were, and we all had a problem to solve, we all needed people on call - and then it stuck. I've worked really hard on it; you have to practice it. "We" feels weird when you're referring to a team that you know nothing about, but "we" helps, and I've really, really seen it. It's probably the most transformational change I see when I'm working with development teams.

Participant 2: Thank you for your talk, it was very interesting. I was wondering, as a development manager, you might have to interact with other teams which might be outside of your jurisdiction. How would you build a culture to avoid, I would say, a defensive situation if something outside of your jurisdiction starts to affect your teams?

Button: How would I do it? First of all, I would use the word "we", and secondly, I would go and work with my equivalent on those other teams, get to know them, get to understand their pain. The gentleman that I decided to found the company with used to be the head of infrastructure at a place where I once worked, and we hated each other. We were definitely adversaries; it was definitely a venomous situation: developers causing problems, developers writing too many tests which failed, developers breaking stuff. How did we resolve that? We went and understood each other's pains and embraced them. I understand his world is very different to mine now, but I had to go and watch it, and I had to go and understand what it was he did in his world, and he had to understand what I did in mine.

Participant 3: I've tried for a long time to say "we" instead of us, but when I break things, I kind of want to make everybody know that it was my mistake, for some reason, I do.

Button: You want to stand up and own it?

Participant 3: Yes. Should I do that or should I still say "we"?

Button: “We broke it”. I think that's up to you. I think I would still use the word "we". Standing up and taking accountability for a problem is a very mature thing to do, and most people aren't comfortable doing it. If you want to stand up and say, "Hey, I broke it," that's cool, but think about how that makes the people around you feel, and what could they have done to help? And if you go in as the hero - "I broke it, I fixed it" - then maybe those people haven't had the opportunity to go and fix it themselves. So, I think I would still use "we". "We" all the time.

 


 

Recorded at:

Apr 12, 2019
