
Unraveling Techno-Solutionism: How I Fell out of Love with “Ethical” Machine Learning


Summary

Katharine Jarmul confronts techno-solutionism, exploring ethical machine learning, which eventually led her to specialize in data privacy.

Bio

Katharine Jarmul is a Principal Data Scientist at Thoughtworks Germany focusing on privacy, ethics and security for data science workflows. Previously, she has held numerous roles at large companies and startups in the US and Germany, implementing data processing and machine learning systems with a focus on reliability, testability, privacy and security.

About the conference

QCon Plus is a virtual conference for senior software engineers and architects that covers the trends, best practices, and solutions leveraged by the world's most innovative software organizations.

Transcript

Jarmul: Welcome to my talk, Unraveling Techno-Solutionism, otherwise known as, how I fell out of love with ethical machine learning. I'm Katharine Jarmul. I am a privacy activist and a principal data scientist at Thoughtworks. I'm excited to talk with you about how we can pull apart techno-solutionism. Where we're going to start, though, is how I fell in love with ethical machine learning. Here's me. It was the first keynote I was ever invited to present, which was a really amazing experience, at PyData Amsterdam. I had started thinking and working in the space of ethical machine learning due to the changes that we had seen in natural language modeling. I was primarily working in NLP, or natural language processing, and modeling. This was the point in time when we were moving to word vectors and to document vectors, and then also trying to place those together in new ways and think through problems. I had been spending some time investigating word vectors and finding really problematic stuff. Then I found, of course, the work of ethical machine learning, of FAT ML, so fairness, accountability, and transparency in machine learning. I was really inspired by all this work.
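To make that concrete, here is a minimal sketch, not from the talk, of the kind of probe that surfaces problematic associations in pretrained word vectors, using gensim's KeyedVectors API. The vector file name and the probe words are illustrative assumptions.

```python
# Hypothetical probe for stereotyped associations in pretrained word vectors.
# Assumes the widely used GoogleNews word2vec file has been downloaded locally;
# the path and probe words are illustrative, not taken from the talk.
from gensim.models import KeyedVectors

vectors = KeyedVectors.load_word2vec_format(
    "GoogleNews-vectors-negative300.bin", binary=True
)

# Classic analogy test: "man is to programmer as woman is to ?"
# Biased embeddings tend to return stereotyped occupations here.
for word, score in vectors.most_similar(
    positive=["woman", "programmer"], negative=["man"], topn=5
):
    print(f"{word}\t{score:.3f}")
```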

I thought to myself, way more people just need to hear about this work. If more people hear about this work, and they're aware of this work, then maybe we're going to have some fundamental questions about the data we use. We're going to have some fundamental questions about the way that we process that data, the way that we train with that data. We're going to end up solving the problem. Maybe not solving the problem, but we're going to at least make some progress. Here's me presenting, just really thinking, this is a really amazing, cool field. How are we going to get there? This is it now, basically, nearly six years after I wrote that talk. We're generating dashboards with YAML and Python. This isn't what I expected or wanted. I don't think this is what anybody in the field of responsible AI expects or wants. This is what we're going to talk about: how do we actually keep to the core principles and some of that original inspiration, without falling into the trap of techno-solutionism? This is not to say that the responsible AI work that lots of folks are doing, whether at Microsoft Research or many other places, is in any way not useful. It is to say that a dashboard is not going to solve ethical machine learning.

What Is Techno-Solutionism?

First, we need to figure out, what is techno-solutionism? If we can't identify it, we probably can't counter it. Let's cover what it is. Here's a good graphic that I think breaks it down in the simplest of ways. We have a bad state. Then we have a magic technology box. When we push the bad state through the magical technology box, it comes out as a very nice happy state. This can be people. I think we should often think about it in terms of people, even if what we're doing is optimizing a service, or a product, or something; it's going to affect people. This is one of the things that we need to think about when we talk about techno-solutionism: the bad state, the people problem, goes through the magic technology and comes out better. This is the view of the world as a techno-solutionist.

Another thing that you'll often see or hear, if you're thinking about, or you're talking with, or you yourself are in the depths of techno-solutionism, is this mythos that technology equals progress, and that of course progress is good. Progress carries this notion that it's positive or good; or, alternatively, that technology in and of itself is completely neutral. We can never criticize technology because technology is neutral, but it is the people that are bad or good, or something like this. Here, I have the first written formula for gunpowder, which was actually discovered by Chinese researchers who were trying to discover the elixir of life. I think that is just a really great example of techno-solutionism. It's like, we want to find the elixir of life, because we want life to be good forever. If you can just find the way to have life forever, it'd be great. Then everything's going to be awesome. They ended up inventing gunpowder, which, of course, changed the history of quite a lot of the world. Mainly, most people would agree, in quite a negative way: it killed a lot of people, and still kills a lot of people. Essentially, it was a technology that was used by colonialism, and so forth, to consolidate power and oppress people, and to perform genocides and hostile takeovers of entire areas of the world.

We can clearly see here that gunpowder is neither neutral, nor is it necessarily progress, if by progress we mean societal change that leaves people happy and benefiting. It's a great example, because when you actually take a look back at the scientific process, and how technologies developed and so forth, it's often more like this. There's a technology that we found, discovered, invented, or figured out a new use for. Then that technology creates winners and losers, and potentially lots of other things as well. The technology is going to hurt some people and it's going to help some people. The technology, therefore, is not neutral. Yes, of course, it depends how you use the technology, and who has the technology in the end. But it is not some magical vacuum that has neutral effects, or that inherently means progress that is positive in any way, shape, or form. Once we start looking at the world like this, we can start to question the techno-solutionism. This is a real narrative that counters the fake narrative.

Early Mythology

How did the fake narrative even start? There's a lot of mythology in it. There's also a lot of history in it. We could get into a long debate about the greater span of technology over human history, at least as far as we know it, and so forth. I think there's a lot of really amazing research on the intersection of technology and the consolidation of power, and on technology and colonialism, and feudalism, and so forth. We're going to zoom in on technology in our field, which particularly starts with computer science, and maybe data, and machine learning, and so forth. When we zoom into that, we often end up in the buzz of Silicon Valley. This early mythology, and here we have Jobs and Wozniak, centers around the culture, or the subcultures, that existed in Silicon Valley at the start of personal computing, as well as the expansion of data centers and the internet and so forth.

We're in this era, from the late '70s till the mid-'90s, this period here. In that time, this mythology, or the history at the time, allowed people to combine the movements of hippie activism: revolutionize things, don't do it the way it was done before. Think outside the box. Fight the power. You're the underdog. You're going to conquer and create a new paradigm, and all this stuff. This was a lot of the language at the time. Then, Silicon Valley in particular combined that hippie revolutionary spirit with entrepreneurialism and essentially a yuppie search for wealth. This idea that the best idea will always win. This idea that it's genius, and therefore, you should be able to get paid a lot of money for it, and this ethos. Those two things clashed to create this rugged individualism: technology as part of social change, and we're going to get rich. That era, I think, also played a big part in some of the core beliefs or philosophy around techno-solutionism.

California Mentality

Part of why this was able to happen, and part of what made Silicon Valley so special for this, is the California Mentality. I am a native Angeleno from Los Angeles. I deeply know this California Mentality. Let me tell you a little bit about it. Here's a picture of the Gold Rush, which brought a whole new population to California in the search for wealth. I think you can't separate the Gold Rush, and the pioneer mentality, from California. It was the idea that here's this rugged wilderness, as if it was unoccupied: this place where, if we could only tame it and use it, it would make us rich. It would bring us happiness. We could start a new life, all of these things. These were primarily white settlers who were coming from the Midwest, who were coming with this dreamer idea of Manifest Destiny: that it is America's destiny, from some almost religious aspect, to control the entire North American continent. Of course, it's the idea of, we're bringing progress and technology and civilization. We're then changing the nature of the place, and of course, also changing the people of the place. Let us not forget that there were many First Americans who were already there, and who were killed and displaced. Genocide was committed as part of "settling" California. This California Mentality plays into techno-solutionism, because it is this idea that if you have a better way of doing your thing, you can just displace people, and you can take things. Also, this idea that the gold is everywhere, the gold is in the earth, but if you can find it, it's yours.

Echoes of Colonial Pasts

That's important to remember, because this is also part of the story of the way that we see numerous ideas of techno-solutionism in our world today: the idea that there are resources out there just waiting for us, and if you have a better way to get them, you're going to win, you're going to get rich, and it's yours for the taking. These are, of course, echoes of the colonial past, of not only California, but numerous other places, where if you can be the most powerful one to command those resources, and if you can use technology to control that, then you deserve the benefits of that, and so forth. Kate Crawford is an amazing computer scientist and also a thinker. She has a massive piece of work that she worked on with an artist, called Atlas of AI. It's fantastic because it really starts to go against this narrative of techno-solutionism, because it looks at the entire AI system. It looks even from the start: taking precious metals from the ground in Central Africa, often with the help of child labor, or at least minimally paid labor, and soldiers, and so forth, and then turning that into chips, like TPUs, and GPUs, and so on. Then through the entire actual data collection, classification, and training process, and then the deployment of these systems and how they're used.

This is a little zoom into some of the middle, but I want to point it out here, because data exploitation is not that dissimilar from the gold mining exploitation. It's not that dissimilar from saying, let's take some technology plus an idea, let's also take cheap labor, and let's turn it into value. What we see here is also the hierarchical nature of the labor that goes into an AI system. When we think about this in the context of how the data ecosystem works today, we can often compare it quite easily to a feudalistic society, in which, at the top, we can think of the kings giving out the money to ideas, the VCs. We go all the way down: knowledge workers, those of us that are machine learning engineers, and so forth; maybe data collection workers, data engineers, and the systems that support that. Then all the way down to the data producers, or those that have to answer to these systems, which we could compare to gig workers, for example. We can essentially see the colonial nature of quite a lot of the technology that we have.

Joseph Weizenbaum's Work

That wasn't the only narrative. It seems pretty silly; there have got to be alternative narratives to this whole mythos of Silicon Valley providing something, and this idea of, if I have the technology, then I deserve to take it and make value of it. Those aren't the only voices. They aren't even the only voices when we look at the history of data and machine learning in our world. I like to talk about Joseph Weizenbaum's work, because he's a great example of somebody that was not buying into the techno-solutionism narrative of his time, or of the times that he saw after he left the active field of programming. Here he is, SSH-ing or teleporting into his computer at MIT. He was a professor there for a long time. Before that, he actually built the first OCR, the first automated character recognition system. He also built what many consider the first NLP model. He saw the impact of that on the world. He saw the impact of automated check reading, which is what he built for Bank of America, many decades ago. He saw that that actually allowed Bank of America to expand very quickly in comparison to its competition.

What did he think about technology? In one interview in the '80s, around the time that the mythos of Silicon Valley was deeply churning out new ways that computers were going to "revolutionize" the world, Joseph Weizenbaum had already seen a bunch of that. He says, "I think the computer has from the beginning been a fundamentally conservative force. It has made possible the saving of institutions pretty much as they were, which otherwise might have had to be changed." A stark difference we see here. We do not see computers as progress. We see computers as consolidations of power and resources. We see computers as being in the hands of the institutions that always have been and always will be, rather than in the hands of a "revolutionary." There's a whole bunch of technologists and ethicists and thinkers and people who have been calling out techno-solutionism, and calling out what they see in this, for decades. Hopefully you hear that: you're not alone. We're not alone. We want to find the Weizenbaums of our time. We want to support them. We want to maybe be part of that. How are we going to do it?

How To Spot Techno-Solutionism

First off, we need to spot techno-solutionism. I made some tests here. It's guaranteed to be an incomplete list. These are some things that I've definitely seen, and that I know I've also fallen for myself. I've fallen for thinking, yes, if we just put "for good" at the end, like "data for good," then definitely it's going to be good data stuff. We should just do only data for good; that means that we solve the problem, and other things like this. Techno-solutionism is quite seductive, because, of course, I think a lot of us want to contribute something positive to the world. We love math, or computers, or technology, whatever it is that drew you to this field. You want to feel like that combination is almost inherent. That's why the techno-solutionism story gets so intoxicating, because it basically says, just keep doing technology, because it is good. If you just keep doing that, then you're going to be positively changing things. It's easy to go along with, and a lot of us have done it. Totally understandable from a psychology point of view.

Here are some tests. One test: are you optimizing a metric that you or somebody else made up? Another: you're in meetings, and you're planning something, researching something, building something, and every single person agrees how awesome it's going to be, how much it's going to change the world for good. Can you reformulate the problem statement as, if we only had blank, it would solve everything? Do you find yourself using this mythology speak: revolutionize, change, progress, and so on? Do you notice that when people bring up issues, or question something, or push back, not on a simple technology choice, but maybe on the impact of things, those people are then excluded from the conversation? Maybe this was you at some point in time as well. Have you realized that nobody on the team has even tested or thought about a non-technical solution to the problem, to even just have a counternarrative? If you said yes to any of these, you might be in techno-solutionism. There are probably some other tests you can think of. Let's create a really nice list for us to think about and notice when we're there.

Lesson 1: Contextualize the Technology

If you notice you're in techno-solutionism, what I'm hoping is, since you're here on the staff-plus track, that you understand a lot about technology. You probably also understand a lot about businesses and companies. You've most likely been around a few different ones at some point in time, and you understand maybe that side of technology as well. You potentially also have a lot of power in the organization. When you realize it's happening, there are some things that can be done that can help shift the entire conversation. This may end up with all different sorts of outcomes. I think shifting the conversation is the most crucial and important step in actually countering techno-solutionism. The first lesson that I can share from hard-learned experience is to first take a step back and contextualize the technology in terms of society, the world, and the larger space of history. We have a technology that we found or discovered or invented or created a new use for, and so we're optimizing whatever we did. What we want to look at first is: where are we in the course of history on this problem, and also on this solution, and also on this technology? I would very much recommend expanding the connections that you have beyond the tech world, and finding ways to connect and talk to researchers in other fields, to talk to historians and librarians and scientists in all other areas; to start figuring out and learning about what happened before this technology, and what even happened with this technology before, and really immerse yourself in that search. You might not have the time or capacity to do this. If it's not you, it needs to be somebody that's doing this, or it needs to be a group of people that are doing this.

Then I would say, think about the null hypothesis. If this technology was never released and never available, what would be the same? What would be different? Then, I also want you to start mapping and thinking about the non-technical competitive landscape. Other than this technology, what are solutions to this problem? I want to give you a really clear example here so that you have a sense of how you can look at this. Usually when we have a competitive landscape, especially if you've formed a startup, you just throw a bunch of other startup names that are in the space, you put them on the chart, and you're like, we're better because, whatever. That's really deep into techno-solutionism. When you're thinking of a non-technical competitive landscape, you want to figure out, how are people solving the problem without technology today, or how did they in the past? Then you also think about new, creative ways to combat that problem. This is also going to involve the 5 whys, and really asking why.

There was a startup here in Berlin, for a little moment, that promised to deliver medicine to your house in less than 10 minutes. Medicine here in Germany is sold a lot of the time at little pharmacies, and there are usually quite a few in most neighborhoods, in a large city like Berlin. I was wondering, when I saw the advertisements, what problem is this solving? Most neighborhoods have these small pharmacies, and there's usually a pharmacist there, and of course, they have medicine and so forth. It might be the case that you have to walk more than 10 minutes, or take a bus more than 10 minutes. But that's unlikely; I would be surprised, because you'd have to go maybe quite far outside of the city, and that wasn't where the startup was operating. Then I started thinking, some of these pharmacies sometimes close at like 6:00 or 7:00 in the evening. Maybe the problem is that somebody wants medicine delivered late in the evening, and they don't want to go to a hospital or something like this. Then I thought to myself, did they not know that they needed medicine before? Could they have gotten off work 20 or 30 minutes earlier and gone to the pharmacy? Then you start to figure out there are all these other potential ways to solve this problem. It also makes you question, what even is the problem that we're actually solving? Both of those are really important.

Lesson 2: Research the Impact, Not Just the Technology

As you're figuring out the context, and now that you have a better understanding of the landscape, you can also start to figure out the impact. Only when you have the context can you start researching the impact. You're going to have the short-term impact. This means finding people that are going to be harmed by the technology, and finding people that are going to be helped by the technology. You're probably already talking to those people; probably some of them are your customers. If it's hard for you to figure out the harmed part, then you probably need to go back to the context, talk with more people outside of the field, and get some more context on how this technology could harm people. Make sure that you get those voices and document those short-term impacts. It also goes beyond thinking of the short term. It's not the year after you release the technology; it's the five years after you release the technology. Let's say the technology fulfills all of the dreams that you have for it, and it starts actually changing the way people behave, or the way that certain community things happen, or the way that traffic patterns happen, or whatever it is.

Then you need to start thinking about, what's the human impact? What's the impact on schools, the education system? What's the impact on the transportation industry? What's the impact on the government? What's the impact on logistics networks? What's the impact on the critical infrastructure of a place? What's the impact on the workers, both the workers at the company you work at and also workers at other places? What's the impact on the supply chain? What's the impact on factories and the production of things? That's your mid-term impact. Then as you think of the long-term impact, you're thinking, what's the interaction? How does this interact with other systems and processes in the world? How does this affect people in other regions of the world? Particularly when you're thinking of these, and you're looking at a North American context or something like this, you need to think about, how does this impact humans and people and communities and work in the Global South? How does this impact other entire areas and regions that you may not even have imagined would use the technology, if they do use the technology, or if a competitor in your space starts using the technology in another place? What does that impact? You're not going to know all the answers. You're going to need to get that outside help. That's why it's good to start with the context, because you start to have people that you can rely on and have these conversations with, and potentially even start communities at your company or organization that can have a more in-depth and educated conversation about this, with experts from other fields, with knowledge, and ideas, and work from other spaces. Also, what's the interaction effect on wealth distribution, and on capital systems in the world, and so forth?

Lesson 3: Make Space for and Learn from Those Who Know

Now you have a good idea of the impact. Hopefully, along the way, you ran into some really interesting organizations. When I use the term expert, I don't mean that they have to have a PhD after their name. Nobody is an expert in another person's experience, other than that person. You can't be an expert on somebody else's experience; they're the expert. This means that when you're thinking particularly about potential harm, so when you're thinking of harms on a community, societal, institutional, or system level, you need to start thinking about oppressed folks in the world and how the harm works with systems of oppression: how folks that are undervalued, underrepresented, and often under-accounted for in technology systems are going to be impacted. When I stepped away from ethical machine learning, it was not only because I was realizing that I don't feel like this is a place where I can contribute, because I think there's quite a bit of noise and I don't feel like I'm fundamentally essential to helping improve it; I also saw a lot of techno-solutionism, and that makes me upset. I started thinking about, what are smaller slices of ethical machine learning that I really want to focus on? What I eventually focused on is privacy in machine learning and data science. A lot of that was inspired by making space for and learning from people who have been affected by machine learning systems.

I came across the work of Stop LAPD Spying, near and dear to my heart as an Angeleno, having seen up close and personal the oppression of the Los Angeles Police system. Their work started with documentation of the surveillance infrastructure, such as the drones that the LAPD was deploying primarily in lower-income communities of color, primarily Black and Latinx communities in LA, and of how that was creating a reinforcement system where more were deployed. Of course, then, if crime was found, they deployed more. This was, of course, a way to do more automated surveillance and automated policing, and so on. They've been working against that and many other things, and they have weekly meetups, and you can donate to them, and so on. When you find groups like this, that have already been working on the impacts of these systems, make space for them, learn from them, listen. Hopefully, you still have that idea of the beginner's mindset, even though you're staff-plus. You hopefully have learned that there's nothing better than keeping your mind open to learn. When I say make space, I mean make space for them in the conversation. Bring them into the room. Hold the space for them. Use your own privilege and your own space at the table to lift up those voices. If need be, create a permanent space for them, or step away and give up your space for them. These are the ways that we can use the power and the progress that we have in this world to make sure that the right voices, the most affected voices, and the often underheard voices are a part of the conversation.

Lesson 4: Recognize System Change and Speak to it Plainly

As you're doing that, you're probably going to start to change, if you haven't already, your idea of system change. It's going to look less like the Amazon Gos of the world, the claim that Amazon Go is going to revolutionize retail convenience. That starts to sound confusing, because a revolution is a group of humans that come together and change a system that is usually oppressing them, for a better outcome. Amazon Go has very few people: there are the people that are shopping, and there are systems that are tracking them. Then there are almost no people working, because that's the point of Amazon Go. You stop thinking of revolution in those terms, and you start maybe thinking of revolution in these terms. You start recognizing the Amazon Labor Union and the work that they've been doing, and the work that they continue to do to try to change a system that is actively oppressing them. Speak to revolution in words like that, very plainly. Speak to it in a way that doesn't take an asocial and ahistorical view, because you can't have revolution without people.

Lesson 5: Fight About Justice, Not Just About Architectures

Finding the Weizenbaums of your field and of your world, and making sure that their voices are uplifted, is going to make sure that there are arguments about justice at work, not just about architectures. This means you don't have a homogeneous team, because you made space, you created a space, for brilliant minds like the two here. Timnit Gebru and Margaret Mitchell were fired from Google for literally doing their job as ethical AI researchers. They were hired to do ethical AI research. When they did it, and criticized the company's own machine learning practices, they were unceremoniously fired. Make sure that you make space for fighting about justice, and make sure you try to create safe spaces, psychologically safe and also literally safe from firing, in which to talk about justice and change in your organization. If you do these things, you might find a field, an area, a product, or a technology that you really, truly believe in. I did. I found data privacy. I found privacy technology. It's both intellectually enthralling and extremely inspiring in terms of the way that it can change how we work with data, how we collect it, and how we use it.

Conclusion

I'm going to leave you with some questions, because I've given you a lot to think about. Now you know maybe some ways forward, and maybe you go through those steps, and maybe you realize, I really do feel connected to what I'm working on right now. Awesome if you do, and awesome if you can continue doing this in a way that's not techno-solutionist. Keep on keeping on. But if you turn out like me, and you realize maybe you're not working on the right part of the problem, then I want you to ask, what could you be doing if you weren't building what you're building now? I want you to think about, what could you change if you focused on the change, and not the technology? Is the change itself the change that you want to see? I like to think this is a room full of brilliant technologists with many years of technology experience, and therefore a lot of collective power and a lot of collective responsibility, at probably many different technical organizations. What if we collectively took responsibility for the future of the world instead of the future of technology? What if we used the engineering brilliance that we all have to actually think about, what is the future of the world that we want? The technology is a secondary question that we deal with later.

Questions and Answers

Nardon: We are in the staff-plus engineer track, which means a track where we discuss the choices and skills you need if you want to stay on the technical path. In your field of work, data science, I see many problems, like data privacy, bias, and the things you talked about. If we don't have more experienced engineers in this field specifically, maybe it's going to be hard even to detect that these problems are happening. I want to hear your thoughts on the importance of having more staff-plus engineers, or more experienced engineers, stay in the technical track, to be able to solve these complex problems we have. What skills do you think these people should have, in order to even convince their bosses that this is a problem? Because you probably need not only technical skills, but also soft skills to be able to have these conversations.

Jarmul: I think you know also, from your work in the field, how difficult it is at the beginning of your career in data and machine learning and so forth to even see that problems are happening. One thing that I'm noticing, and you're probably noticing too, is that machine learning is now becoming just easier to do. There are a lot of models available where maybe it's not understood how a model could be used, or how it could be deployed, or what even the dangers or the risks are. Just a quick example from recent events: Stable Diffusion is out, and everybody is excited about it. There are actually prompts you can give Stable Diffusion where you can see ISIS executions. There are prompts you can give Stable Diffusion where you can see non-consensual pornography, and other things like this. Even experienced teams, even experienced researchers, end up making mistakes. That's normal.

I think what you've noticed, and what I've noticed in the field, and probably some other people have too, is that if you've been in this field for a while and you have a critical eye towards problems that can happen, it becomes easier to predict that these things will happen, and therefore your expert opinion and your input to those conversations around analyzing risks is even more essential. The reason why I keep saying the word risk is that I think that's actually the best approach we can use as technologists: we're asked to be the experts on not only the technology but the risks and rewards of using that technology. Therefore, being the owner of the risks of the technology that you choose to implement in products can help position you to have the power in a conversation to highlight that to upper management and so forth. That means taking time to sit down, outline them all, educate your team, and speak with management about it. Some teams are better at that than others. Even if you're not on a team that currently uplifts technologists' voices to the management level, at least trying it, practicing it, is a good start.

Nardon: I think that part of this movement of having more experienced people stay in the technical field is a win. It's not just for people that don't have management skills and want to stay in the technical field. It's more about having more experienced people have a voice in companies. I see that many companies are realizing that they need more experienced people to stay in the technical field, to solve their problems better, and also to have these voices with status in the company that allow them to provide a technical vision that can avoid many problems in data. I work in data as well. Usually, it can be a huge problem for companies if they don't do things in the right way. I imagine that in many other fields, too, having someone that's very experienced in technology, who is able to go to management and say, "You shouldn't do this, because this is going to have these problems," is probably going to be even financially interesting for the companies. I think part of what we have to do as staff-plus engineers is to create more awareness of how important these people are for the company, and giving them the right status is important as well. That's a good conversation to have.

What is the most important lesson that you learned when you became a staff-plus engineer?

Jarmul: I think probably the most important thing that I've learned is when to make space: knowing that there are probably other people thinking the same thing as me, and knowing when to make space for the more junior engineers on the team to shine as well. I think that has helped my life a lot. It's helped, hopefully, some of the juniors that I've had a chance to mentor and coach. It also gives space for new ideas. I think often as staff-plus we're asked for the solution, and that's fine; you probably already know the solution when you're asked for it. But there can be this very critical point of leaving some silence and potentially learning at the same time. I give myself little reminders on my calendar so I remember that, because it can be very easy to get used to hearing your own ideas. It's important to remain somewhat humble and open, and a beginner in your approach, even after many years. It's hard, but it's something that I think is useful.

 


 

Recorded at:

May 19, 2023
