Balancing Risk and Psychological Safety


Summary

Andrea Dobson focuses on understanding the principles of the learning organization: who can benefit, how to implement it, and the risks, pitfalls and effects of a learning organizational culture.

Bio

Andrea Dobson is a Registered Psychologist and a Cognitive Behavioural Therapist. She started working at Container Solutions in 2015 to expand their learning culture. In 2018, she started working for the Innovation Office to link patterns of consumer behaviour with their latest product development efforts.

About the conference

Software is changing the world. QCon empowers software development by facilitating the spread of knowledge and innovation in the developer community. A practitioner-driven conference, QCon is designed for technical team leads, architects, engineering directors, and project managers who influence innovation in their teams.

Transcript

I've actually never seen so many people at one of my talks at a conference. It's really nice to see so many faces. I hope I can satisfy your psychological need for psychological safety. Let's get cracking, because obviously today this track is about risk, and how we can counter that risk, minimize it, or, as you say, reduce it.

My name is Andrea and I'm a psychologist. My background is in clinical psychology and three years ago I started working for Container Solutions. My daughter basically asked me, "So, mom, what do you do?" And my answer was always really quite simple. "Well, I help people to get better." I think I still do that but in a different way. What I do today for Container Solutions is help them with processes, processes like development processes in the company. How do you develop engineers? How can we get better at doing certain things? Also their hiring process. How do we make sure we get the right candidates, the right people in our company?

When we talk about risk, obviously that's one thing for me to think about. I've also asked a few peers within the company what they thought was the biggest risk in our industry today. Some said it's big companies like Amazon: they have lots of money and they can do lots of things. Others said, like Sarah this morning, "Oh yeah, for us it's risky to release an article on the Brexit vote day, we don't do that." Risk really depends on the type of company you have, and therefore it's essential to understand what your specific risk is.

Today I'll be talking about three topics that I'm going to pass through in the 50 minutes that I've got. Briefly, I want to start with understanding your own risk, how you can make that risk assessment for yourself and for your own company, and what the potential pitfalls are when you are making that risk assessment. Then I'll move on to one of the solutions for reducing or mitigating that risk. That could be innovation, but innovation has its pitfalls too, so I'll be going into those as well.

Lastly, I'll be talking about the organizational structure that can underlie risk and innovation together, and how that structure can help you with both the assessment of risk and the innovation. That's what we call the learning organization, and specifically, as the title of my talk said, the aspect of psychological safety.

The Story of Icarus

First, let's start with a story, the story of Icarus. Who knows this story? Everybody knows this story, great. I'm going to repeat it anyway because it's a nice story to illustrate risk and risk assessment. I don't know if everybody knew, but Icarus was locked up in a tower with his dad. His dad was locked up because he was a very brilliant engineer, a structural engineer. He built towers and bridges, and that's exactly what the king needed him for. I wouldn't recommend doing this in your own company, locking your best engineer in a tower, but that's how they used to do it in the olden days.

So his dad was locked up in the tower with his son Icarus, who was a bit of a teenager, obviously bored to death in there. One day, looking out of the window, he'd see the birds outside and their freedom: "Oh, how I wish I could just fly away." That inspired his dad to innovate, to have a new idea: "I'm going to create wings for us to fly out of this tower." That kicked off a two-year quest. They sacrificed half of their food to lure the birds to their window so they could take the feathers off the birds, and they collected the wax from the candles. After two years, the dad was finally able to make the full wings they required to fly.

Just before they were about to fly out, Icarus's dad said to him, "Son, be careful. There are a few hazards out there. If you fly too close to the water, your feathers might get wet and you'll lose the power to fly, so be careful. Also, don't go too high in the sky, because the sun might melt your wax, so don't do that either. That's another one of the hazards." Off they set into the open air, and it worked. The wings carried them over oceans and rivers and mountains, but Icarus was obviously a young boy and he couldn't resist. He had to test and try out what it felt like to be so close to the water.

He came close to the water and he felt his feathers getting wet, and he thought, "Oh, I'd probably better not do this." So he started to rise up, and the feeling of rising high into the sky was amazing, he loved it. He couldn't stop, even though his dad had said to him, "Don't go any higher." He did, and then it was too late: he started to see the wax dripping, dripping down. Before he could do anything, Icarus fell back down to Earth, obviously dead.

Understanding Risk

This is the story of Icarus. As we can see in this story, risk is not something that is always easily described. Risk is the product of a few things, and we tend to forget about those things. As I said, risk is the product of hazards, the water in this case and the sun, but also of the exposure to those hazards. If Icarus had stayed away from the water and the sun, the hazards would not have meant anything to him and the risk of dying would have been low. Because he exposed himself too much to those hazards, his risk went up. The third thing we need to think about is your own vulnerability. There's a reason why we put safety tops on bleach: we know that kids are more vulnerable to certain risks once they're exposed. In this case, it was probably Icarus being a bit daft that made him vulnerable to the risk of crashing when using those wings.
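To make those three ingredients concrete, here is a minimal sketch, not from the talk, that treats risk as the product of hazard, exposure and vulnerability; the 0-to-1 scales and example values are hypothetical, purely for illustration.

def risk_score(hazard: float, exposure: float, vulnerability: float) -> float:
    # Each factor is scored between 0 (none) and 1 (maximal); the product is the risk.
    return hazard * exposure * vulnerability

# Icarus keeping his distance from the sun: high hazard, barely any exposure.
print(risk_score(hazard=0.9, exposure=0.1, vulnerability=0.8))  # 0.072, low risk

# Icarus soaring towards the sun: same hazard and vulnerability, full exposure.
print(risk_score(hazard=0.9, exposure=1.0, vulnerability=0.8))  # 0.72, high risk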

It's great to be able to understand that these three parts make up the risk assessment, and it's great if we can also use our rational brain to make that risk assessment. Do we do that? Do we always use our rational brain? Unfortunately, we don't. We use our instinctive brain to make our risk assessments. It's that quick, instinctive, almost emotionally driven, routine type of decision making. So when we make a risk assessment, we usually don't think about the rational product of hazards, exposure, and vulnerability. We usually think back to a situation that has happened to us before: "Oh, when did that happen? Oh, that went wrong. It will probably happen again, so we'll do this."

There was one psychologist and economist who built on this work and made the distinction between the two systems of thinking that we have in our brain. That was Daniel Kahneman, and Kahneman, as I said, is a psychologist who described very clearly how the decision-making process happens. Firstly, we have that emotional, instinctive, automatic brain, which we call system 1 thinking. You might have heard of this before. It's the more automatic type of thinking, versus system 2 thinking, which is more deliberate, analytical, processing and rational. That's the one we want to use when we're making those risk assessments.

Unfortunately, our brain is slightly lazy; it usually just uses the first system. We have to push ourselves into system 2 thinking. It does not happen automatically, because our system 1 is automatic, and it comes with biases and mistakes. But because it's automatic, we can't switch it off. Just to illustrate how it works, I want to show you this illusion. These are two lines with arrows at the ends. Who's seen this illusion before? You all know the answer, you all know that the lines are exactly the same, but still you can't help seeing the bottom one as longer. You can't help it. That's all your system 1 thinking. So it's good if you do see it, because that means your brain is working quite efficiently.

What does Kahneman suggest as a solution for this problem? Because this is clearly a pitfall for our risk assessment. The way we assess risk depends on us using our system 2 thinking, and not that automatic, instinctive system 1; there we usually do not make the right decision, because we base it on our emotions. So what can we do to help ourselves slightly? It's hard, he says, it's very hard to do, but it is possible. He says, learn to recognize the situations where you know or where you think certain mistakes are more likely, or where the stakes could potentially be very high, and then push yourself into sitting down, reflecting, and thinking about those three aspects of risk.

Understanding Innovation

What else can help when it comes to risk, to reducing risk or preparing yourself for a certain risk situation? Well, one thing is innovation. They are related, because risk pushes innovation, like it always has in the past. We can see that in things like the industrial revolution and our technology industry revolution, but also when it comes to viruses. When we see a deadly virus, our risk of contracting it is very high, so it pushed us to create vaccinations.

It also pushes progress towards a better standard of living for us. The government chief scientific advisor gives very clear advice about this: it's not about preventing or avoiding risk, it's about managing it, about understanding what could potentially happen in order to avoid the big risks. Obviously, innovation is very important in that, but it doesn't tell us when, what or how to innovate. If we have that risk assessment, how do we know where or what to innovate? Sometimes we know certain innovations can be very useful if we just look at the recent innovations in our own industry. I'm talking about artificial intelligence and machine learning. Great in healthcare, and fantastic in other applications, but what happens to the people who lose out on that innovation? Google's facial recognition program was going to be used for warfare. Is that what we want that innovation to be used for? Who is losing out? Those are all things we need to think about when we talk about innovation.

Because of that massive uncertainty around where or how to innovate, we need an organization, or at least an organizational structure, that can help us with those decision-making processes. It's about that continuous improvement, but also about being able to stop an experiment, like Sarah said, when it's going too far or when it's not leading to the results we were hoping for.

Growth vs. Fixed Mindset

What could be a potential pitfall when it comes to innovation, and what stops us from going into that innovation, into that continuous improvement cycle? I think I've already said this actually, because this is where I wanted to go. One of the pitfalls is related to the growth and the fixed mindset. The growth versus fixed mindset was introduced by Carol Dweck, a psychologist at Stanford. She researched how kids looked at their abilities and she saw that there was a difference in the way they saw their own skills and abilities.

The fixed mindset is the mindset that looks at abilities as something you either have or you don't. That means there's a limit to the amount of knowledge that you have or could have, and to whether you are able to do something or not. People with a fixed mindset believe that they are either smart or not, either good at math or not good at math. The growth mindset is almost the polar opposite. These people believe that their abilities and skills are not fixed and that there is no end to what you can achieve, depending on how much effort and practice you put in. The fixed mindset believes abilities and skills are finite and limited; the growth mindset accepts the importance of lifelong learning.

We can see now how a mindset, how you believe in and look at your own abilities, can impact your own innovation, because the process of innovation in itself requires continuous improvement, failures and mistakes. That's exactly what the fixed mindset struggles with. They see failure or mistakes as a negative thing; the fear of failure is a negative emotion for them, so they'll do everything to avoid it. Versus the growth mindset, where failure is actually exciting, something they want, because then they can decide which way to go, or which way not to go.

The fixed mindset sees failure as a threat. What happens when we see things as a threat? System 1, not system 2: fight or flight. It's the automatic brain, the instinctive, emotional brain. In the limbic system there's a small knot called the amygdala. Every time we see something as fearful, scary or shameful, that part of the brain gets activated. When we are in our system 1 thinking, we are not in our system 2 thinking, and it's in system 2 thinking that we're more likely to innovate and more likely to make proper risk assessments. Having that growth mindset is fantastic, so how do we get that right?

Luckily, Carol Dweck does have a solution for that. If you think you have a fixed mindset, you might ask, "Please Google, how do I change my mindset?" It is possible, because Carol Dweck describes a mindset as a set of beliefs. A set of beliefs is something you've learned; it's not something that you're born with, not something that you woke up with one morning thinking, "I've got it." You've learned it. And when we can learn something, we're also able to learn something else, meaning you can also change your mindset towards more of a growth mindset. It does require coaching, so if you have that organizational structure in place, you can definitely help the people in your company move more towards the growth mindset.

Learning Organization

I talked at the start about organizational processes as something that I help Container Solutions with. One of the things that I've been trying to help them with is the implementation of the learning organization. The learning organization is actually quite an old concept. Some of you might have read "The Fifth Discipline" by Peter Senge. It's slightly controversial, but the idea was there. The idea of the learning organization, which he described first, was about an organization that's able to adapt, an organization that is able to create and acquire knowledge. That was the first bit.

The second bit was that the employees were also able to gather that new knowledge and modify their behavior in order to support the new information, the new ideas that they have. This is great, because this is one of those things that you want in place to help you with that risk reduction, or at least to prepare for it, because new knowledge can create new ideas and innovation.

After Peter Senge started this, more psychologists picked up on that idea and tried to put it on a more research-based footing, to see how people can implement this in their own companies and organizations. Those psychologists were Amy Edmondson and David Garvin. They worked on that premise, that idea of the learning organization, and they tried to help organizations implement it. Because it had been such an almost scholarly subject, it had been very hard for a lot of companies to implement; they were literally missing the structure and the building blocks needed to implement it in their own company. A few companies have been able to introduce this type of ideal organizational structure, but not many, as you can see. From that idea of the learning organization they developed a tool and measurements for how you can work on the different aspects of the learning organization in your own company.

Building Blocks

What are those very important building blocks that you require in order to have that learning organization? These are the three building blocks that are essential. I'll go through them briefly, but I mainly want to dive into one of them, psychological safety, so I'll skip that for now. The first one is about having that supportive learning environment, and that is built on these.

The appreciation of difference is about letting people have different and opposing ideas. Innovation comes not from everyone being the same, but from different people and different cultures coming together. The same goes for openness to new ideas; it's about having the creativity to think about things from a different perspective. We know that we can all be very busy in our lives, scheduling, releases, whatever you have to do, or there's an emergency, but if you do not have time for reflection, you will not have time for innovation. Creating a pause within your process to think about things will help with that.

This is what Sarah described this morning in her keynote. This is the experimentation part of your process, of your organization. This is about practices that allow you to experiment. If you do not experiment, how do you know whether something will work or not? Actually, Sarah said this morning that it's not an experiment if it doesn't have a hypothesis and proper analysis, or if you can't fail the experiment, because a failed experiment is worth just as much as a successful one.

An experiment can start with wanting to test a new product, or a new process within your organization. Obviously you set the time limits: how long are you going to run this experiment for? What's your hypothesis? What do you think you're going to achieve, what's going to be the outcome of that experiment? Then there's information collection: you need to collect the data and analyze it. And then at the end of the experiment, whether it failed or succeeded, it's about spreading the word, about educating and training the people in your company.
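As a rough sketch of those steps (a hypothesis, a time limit, data collection, analysis and sharing the outcome), something like the following could capture an experiment. This is not from the talk; the class and field names are hypothetical, purely for illustration.

from dataclasses import dataclass, field
from datetime import date

@dataclass
class Experiment:
    hypothesis: str    # what do you think is going to happen?
    ends_on: date      # time limit: how long will you run it for?
    observations: list = field(default_factory=list)

    def record(self, data_point):
        # Information collection: gather data while the experiment runs.
        self.observations.append(data_point)

    def conclude(self, succeeded: bool) -> str:
        # Whether it failed or succeeded, spread the word so others can learn.
        outcome = "succeeded" if succeeded else "failed"
        return f"Experiment {outcome}: {self.hypothesis} ({len(self.observations)} observations)"

# Example: a team testing a new review process.
exp = Experiment(hypothesis="Pairing on reviews cuts lead time", ends_on=date(2019, 6, 1))
exp.record({"lead_time_days": 3})
print(exp.conclude(succeeded=False))  # a failed experiment is worth sharing too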

Information transfer, that's about what we do here today. That's about us telling each other about what works and what didn't work. Most of the talks today are about “Oh what I did wrong with microservices” or whatever. It's all that information transfer that can be beneficial for others. Then very importantly, leadership that reinforces learning. If you have this, fantastic. If you've got a leader that does that or you have a team lead who does that, great, but it will not survive without the other two building blocks. Also, the other way around: if you've got the other two building blocks, but you don't have this, it will be quite tricky.

What does this leader do? They ask questions, they start dialogue, they invite input from other people. It's about that leadership behavior that is inquisitive. It's about that leadership behavior that makes sure that everybody's involved in those processes.

Psychological Safety

I'm going to do a little bit more on this subject. Psychological safety was introduced by William Kahn, who was a psychologist as well. Psychological safety is not something that you believe or I believe; it's something that we as a group believe, so it's not an individual belief, it's a shared group belief. What type of belief is it? It's the belief that if you say something, if you speak up, if you have a problem or an issue or you see a potential risk, you feel okay to share it. It's about the ability to speak up about the things that you think need to be spoken up about.

I've spoken about this in relation to ethics, because ethics is obviously very important when we flag things, when we talk about things that might worry us or that we can see happening in the future. In those talks I gave examples about NASA and about pilots. In both situations, people did not speak up, either because they were too scared of being fired or because there was a hierarchy in place that they couldn't break, with fatal consequences. In the NASA example, seven astronauts died, and in the pilot example, three people died. That means that having that psychological safety can sometimes literally save lives.

Amy Edmondson, one of the psychologists who was really involved in that learning organization research and in developing those building blocks, started researching psychological safety, but it was actually something she found along the way. She wasn't out to establish psychological safety; she was looking into high-performance teams and how those teams became high performing. How did they do better than other teams? She did this research in hospitals, because hospital teams work very closely together. She looked at how effective those teams were by looking at patient outcomes: whether patients treated by certain teams were out of hospital more quickly, whether they got better sooner or not.

She actually didn't expect it, but in the teams that reported more mistakes in their team meetings, so the teams that talked more about things that went wrong, the patients left hospital better and quicker than with the teams that didn't. She wondered, "How is this possible? They make more mistakes, but the patients do better, how is that possible?" It wasn't that they made more mistakes, because everybody makes mistakes, we all make mistakes. It's just that they spoke about them more. They were able to address those mistakes quickly, before they became too big, before they became big mistakes. I don't know if anybody remembers that example where a doctor left a piece of cloth, I think, in a patient's belly, and a nurse wouldn't say anything. That was a small mistake, but it turned into a disaster for that patient.

What made the teams with more reported mistakes so much more effective? What made them admit to those mistakes? It was that shared belief of psychological safety. In our industry, very similar research has been done, called Project Aristotle. Much of Project Aristotle was based on Amy Edmondson's work. They also wanted to figure out what made certain Google teams more effective than others. Funnily enough, they found very similar results: it was psychological safety that they listed as the most important aspect of being effective as a team.

Paul Santagata is Head of Industry at Google and one of the people behind Project Aristotle; he was very involved in that project. He said this, and I think it's a really key quote, because it is exactly about the highly demanding environment and about the fast pace of it all. With that safety, with the belief that you can express it when you feel uncertain or you feel something might be going wrong, you are able to adapt more quickly to those ever-changing situations.

Sometimes people think, "Yes, great, this psychological safety, but can we still perform? Can we still demand things of the people in our team? Can we still hold people to account, or are people just going to sit down and do nothing and feel safe and share and do all that?" Well, no. You don't have to lose out on being highly demanding in what you want to achieve in order to have psychological safety.

In a way, the title of my talk is a little bit deceiving; it's not really about balancing those two. Without that psychological safety, it's almost impossible to make a good risk assessment. Amy Edmondson made this diagram, and she's got a great talk; if you watch it, you'll recognize most of the things I've said. The problem is that if you do have those high demands but you give up on the psychological safety, this will happen: people end up in an anxiety zone. Do we want to be in an anxiety zone? No. And that anxiety zone is, again, system 1 thinking. It triggers our fear and the fight-or-flight response that gets us back into our system 1, instinctive brain thinking. Can we innovate? No. Can we make accurate risk assessments? No.

When you happen to find yourself in that anxiety zone, that's when you're afraid to speak up. That's where you're afraid to say anything. That's when a nurse is too scared to call a doctor in a nightshift because he called her incompetent last time. This is when that co-pilot was scared to say anything to his pilot because he would probably get reprimanded for speaking up to someone higher in his hierarchy. We don't want to be in an anxiety zone.

There's another place we don't want to be, and that's where we have no demands and also no psychological safety. That's commonly called the apathy zone; I also don't want to be in the apathy zone. You might be working for government; this is where people just kind of go, "Yes, whatever, mate." Nobody here probably works there. If you do, Container Solutions is hiring.

The comfort zone is where we do have a lot of safety, where we feel nice and comfortable. When we talk about risk today, the comfort zone is probably a potential risk in itself, because you're not pushing yourself, you're leaving money on the table. As Amy Edmondson says, you are not doing enough. It's great if you run a local farm shop in the countryside, maybe. It's nice, the same customers every day, and you're fine to speak up. Everything is nice and, as we call it in Dutch, gezellig. In our industry, we don't want to be there, we don't want to be in that comfort zone; you're best off being in the learning zone.

Here we have both the high accountability, where we can push each other, and the safety to still make mistakes and still move forward. This is where most engineers, I think, would want to be, because you really are always educating yourself, always learning a new tool, always trying and experimenting. That's exactly the organizational structure and the place where you want to be. You want to be held accountable, but you also want to feel that it's okay to say, "Hey, I fucked up." That's where you want to be, in the learning zone, and you need that psychological safety for that.

Learning Problems, Acknowledge Fallibility & Model Curiosity

"Great, Andrea. You've told us about this utopia that Amy's crew has been painting. How do we help each other? What can we do to create that shared belief?" Amy Edmondson has given us a few handouts for creating it, and I believe that you don't need to be a leader or a manager or anyone with specific powers in an organization to create it. I think you can be a team member and set the example for others, and push each other within that psychological safety.

Amy says three things. First, frame problems not so much as something that we need to get through, but as something we can learn from, as something where nobody knows the answer. We all don't know what's going to happen or what the effects will be, so treat it almost like an experiment instead of "We need to do this, we need to do this." Frame it in a way that involves everybody, where people's voices are needed in order to come to the solution of that problem. It's not that, if you are the team lead, it's your idea that is going to fix that problem; everybody is involved in this.

Secondly, acknowledge your fallibility. It might sound simple, but simply saying, "Hey, I'm only human, I might miss something," because you will, you will miss something, and expressing clearly that you might, invites people to speak up, to come and see you and say, "Hey, have you thought about that?" "No, I haven't actually. That's great. Thank you." Or, "I have. Yes, great. Thanks for pointing that out." Acknowledging that you're a human being, because we all are and we can all make mistakes, sets the context for people to speak to you.

Why is modeling curiosity so important? Because if we assume we know everything, we tend to go into blaming when something goes wrong, and blaming leads to conflict, blaming leads to defensiveness, and it doesn't usually lead to the feeling that we've attacked this problem together. If you assume, like Socrates did, "I know nothing," and just ask the questions, you can have a hypothesis in your head, but model the curiosity, because it forces people to think and give you answers, whereas blaming doesn't force them to think about their own way of working.

These are three simple things that you can practice every day to create that psychological safety for each other, in order to move forward to that innovation, to more system 2 thinking, and therefore not only to risk reduction, but to being more aware of what could be a potential risk for your company, your team, or your product.

We can't prevent risks from happening. They will happen; risky situations will occur. What we can do is prepare ourselves by having a greater understanding of what is needed for good risk assessments and of what the potential pitfalls could be. Our own brain can be a pitfall, remember? We can all support each other, have the right structures and processes in place, and create a safety that allows us to innovate and prepare for the uncertain future ahead.

 


 

Recorded at:

May 24, 2019

