New Approaches to Engineer Onboarding with Kristen Buchanan

In this podcast, Shane Hastie, Lead Editor for Culture & Methods, spoke to Kristen Buchanan about new approaches to engineer onboarding and reducing time to productivity for new hires.

Key Takeaways

  • In most organizations the onboarding process for new engineers is most likely just a checklist to be followed, and a checklist is one of the least effective ways for adults to learn something new
  • Four main categories of knowledge for onboarding are process, product, professional expectations and tooling
  • Culture spans all four areas, and is embodied in professional expectations. Unfortunately, in many organizations the cultural aspects are not well communicated
  • Engineering productivity is about how the people in your engineering team are able to do their job and produce the outcomes that your business needs for customers
  • One indicator of developer productivity is the types of questions they ask

Transcript

Shane Hastie: Good day folks, this is Shane Hastie for the InfoQ Engineering Culture podcast. I'm sitting down today with Kristen Buchanan. Kristen, welcome, thank you very much for taking the time to talk to us today.

Kristen Buchanan: Shane, absolutely, thank you for having me, it's a pleasure to be here.

Shane Hastie: Now you are the founder and CEO of Edify. Do you want to tell us first a little bit about yourself, and a little bit about Edify, please?

Introductions [00:41]

Kristen Buchanan: Thank you for asking. So, as you mentioned, I'm the founder of Edify, and today we are the engineering onboarding tool for high-performing engineering teams. What that really means is that we allow engineering managers to very quickly create a technical onboarding plan, and then automatically offer it to their new hires, we call it deploying it out to their new hires, through Slack, so that a new hire is never alone and can always unblock themselves by working with the bot, or by getting connected to other team members and being able to ask those team members more intelligent questions. Our big vision really is to be the operating system, not just the onboarding tool, for high-performing teams.

That's a little bit about the company. As for me, I came to this company on the heels of my last business. I didn't run a venture-funded startup, I ran a regular money-in, money-out business for six years prior to this, where I was also working on software engineering onboarding. I was consulting to companies all over the world and helping their engineering teams in a number of different areas, including onboarding. And before that I actually came from the world of museum learning and art history, I was an adult learning specialist and supported a lot of different museums and non-profits. So I found my way into tech in some strange ways, but I have loved it and really enjoy being in this middle zone between technology and the rest of the business.

Shane Hastie: Thank you for that. So what's the problem with engineering onboarding? Why do we need a special approach?

Why engineering onboarding is different [02:18]

Kristen Buchanan: That's an excellent question, especially given that there's really not much on the market today. The status quo is that most engineering teams have some document, usually a Confluence doc or a GitHub repo, or maybe they're a little more advanced and have a Trello board or an Asana board, and they're using that to catalog a list of documentation or a list of tasks. That sounds fine, except when you think about a couple of things. One is that I've never met an engineering team that had a very well-organized Confluence, and whose engineering documentation was actually up to date at any given time. That's just a natural consequence of developing software, things change too frequently, and really the code is the source of truth, but there's a lot of interpretation on top of that code.

So when you offer a new hire this checklist, it's usually without context, and this gets to the second big problem with onboarding: a checklist is actually one of the least effective ways for adults to learn something new. If you take a look at adult learning science and the most up-to-date theories on how adults learn, you'll find that adults really need to be party to, and making decisions about, their own learning experience, and software engineers are no exception. Software engineers often like to get into the code really quickly, start to solve problems and get into their puzzle work, and yet they still need the historical context, they need the cultural context. So without a real program around it, you don't really have the tools that you need to get someone up to speed in a reasonable time frame, we call that time to productivity at Edify.

Shane Hastie: I've heard widely varying numbers for how long this time to productivity, lovely term, takes, in some organizations as much as a year, year and a half. How do we shrink that? How do we get people, not to hit the ground running, because I think that's unreasonable, but more productive? And then let's tackle what productive actually means, but we'll park that for a moment.

Shortening the time to productivity [04:25]

Kristen Buchanan: This is such a good question, and listeners can't see both of us using air quotes around productivity, which I think we'll get to in just a minute. It's an excellent question because the number really can range so widely. And you're right, on the far end, where I would be really concerned, it can take 12 to 18 months to get someone up to speed. I'm chiefly concerned about that because the average tenure of a software engineer is about 18 months now anyway, so that's quite a long time. But on the shorter end, I see junior engineers, often junior to mid-level, taking seven to nine months to get up to speed, which is still quite long, and even senior engineers can take a quarter or more to get up to speed and to feel really productive.

I'm sure we'll dig through this in a minute, but when I think about how to actually bring those numbers down, I go back to adult learning science, to what is the right way to scaffold the information so that a new hire can see where they are today, where they should be, what the delta between those two points is, and where the ladder is to climb that mountain, if you will.

Shane Hastie: So what are the types of things that a new hire in an engineering team needs to come up to speed with?

Four main categories of knowledge for onboarding [05:35]

Kristen Buchanan: In my eyes, there are four main categories of knowledge, and this is actually something I developed in my last business as I was working with engineering teams all over the world. Back before COVID, when you could actually be in an office with people, I would bring the giant Post-it notes that are so fun to use and put four of them up in a conference room with the engineering managers. I would give each of the managers a pile of Sharpies and Post-it notes, and usually there would be some pizza, or some candy, or something else there too, and I would ask each of those engineering managers to write until they couldn't write anymore on four different topics: process, how do we do our work? Product, what do we build? And sometimes there's more than one, of course. Then professional expectations, how do we behave when we are building these things? And finally, of course, tech and tooling, what are the building blocks that we use to build these things that we ship?

Pretty much anything that an engineer is going to need to know is going to be in one of those four categories, and I am not dogmatic about what goes into which category, as long as it's on there somewhere, this is just a place for us to start ideating on what should be in a new hire's experience. What ended up happening is I'd give about 15 minutes for each of the four categories, then everyone would go and put their Post-it notes up on the poster boards, and there could sometimes be between 60 and 100 Post-it notes per board. Then I would take it all back to my office and basically dedupe those Post-it notes, and there would still be 40 solid unique items there.

So after years and years of doing this at so many different engineering organizations, I ended up with a list of several hundred of these unique things that I called learning touchpoints, because each one was unique in and of itself. On the simple end it could be: what are the tools I need to set up my development environment? Through to: what are our linting practices? And it gets more and more complex as you go through what the actual software development life cycle is, and how we actually get product to market.

So those are the big categories of knowledge that, to me, really matter in helping an engineer get comfortable, and understand how to do their job. And certainly, from a more philosophical perspective, nobody wants to start a new job and feel ill-equipped to do it, that's going to engender imposter syndrome or anxiety, it's going to make you feel not connected to the team and the product that you joined, and there's really no reason to have a new hire feel that way, especially if you've got tools like onboarding for them.

Shane Hastie: So connection to the team, and this is the engineering culture podcast, how does culture get conveyed, and how do we get inculcated into the culture as part of this process?

Culture spans all four and is embodied in the professional expectations area [08:29]

Kristen Buchanan: Culture, for me, really sits in that professional expectations area of those four categories, though naturally engineering philosophy, and the way that an engineering leader is going to manage their team, will show up in a variety of places across all four of those areas, and I would argue probably should show up in multiple areas. But what I find is that, unfortunately, a lot of engineering leaders actually haven't articulated what their philosophy is. How would they like to deal with failure? What do they prioritize? How do they think about the value of QA, as a very specific example? Once an engineering team has admitted to itself, these are the things that we believe in and the ways that we choose to behave, then you can move forward.

Because if you take a look at an engineering team out of AWS, they're going to behave extremely differently from an engineering team out of Puppet, and from teams at the thousands of other companies all over the world. You can't expect a new hire to just port themselves over from AWS to Puppet and naturally understand how you ship code, how you actually make products and help your customers, it would be unreasonable to expect that. I think it's chiefly the engineering leader's job to articulate what that philosophy is, but if perhaps you're listening to this and haven't articulated it, or you don't feel that it's clear in your own engineering team, I would start asking questions about what our behaviors are, what we actually do today, our lived experiences rather than our ideal value-based experiences. There's usually a whole delta between the stated values and the lived values, even at the corporate level of the culture, but most definitely at the engineering level as well.

And so I think it's really helpful, when teams are spending time on a retreat, or spending time on OKR planning, to spend a little bit more time articulating what the engineering philosophy is, and I have never seen it be a waste of time for an engineering team. It typically actually helps speed up velocity, because we're all on the same page, we're all communicating in the same way.

Shane Hastie: Velocity, that leads us to that lovely term, productivity. What is engineering productivity? Well, let's start simply: what is it?

What is engineering productivity? [10:51]

Kristen Buchanan: First off, as a caveat, I don't think there's one answer, and I think that is because there are different types of businesses that are motivated by different things. For example, just for the sake of conversation, you could make two big buckets: venture-funded businesses and non-venture-funded businesses. You could get much more specific even within that, private equity, for example, versus a public company, versus a small venture-funded startup, versus a bootstrapped company. So it's really critical to actually understand what the motivations of the company are, because that's going to have downstream effects on how you assess the productivity of your engineering team.

But I think for me, at the base level, engineering productivity is about how the people in your engineering team are able to do their job and produce the outcomes that your business needs for customers. And again, that may look very different. Say you have an internally facing team, like a DevOps team or SREs, their customers are internal, and so you may redefine what productivity looks like for them, or what their SLAs look like to other teams, whereas you may have more customer-facing product engineering teams, and they have accountability around different outcomes. So that's where I would start first, and Shane, I'd love to ask you, what do you think engineering productivity is?

Shane Hastie: Oh, oh, hoist by my own petard, so to speak, what do I think engineering productivity is? It is the delivery of value to our customer community, and you're right, there is a wide, wide spectrum of different customers. What I see in many organizations is often a fixation on one group of customers, not always those who are paying us money. A common thing is “he who speaks the loudest”, the HiPPO, the highest paid person in the organization: what they want they get, just because of who they are, even if it's not actually the best thing. So we see a lot of fixation on a particular thing, sometimes a particular metric, and metrics are truly a two-edged sword in this regard, they can help, but they can also keep us so focused on one thing that we neglect others. So yeah, a difficult question, thank you.

Kristen Buchanan: It is a difficult question, and I completely agree with you. It could be the HiPPO, it could be an outlier customer who doesn't actually fit the company's ideal customer profile but is paying a lot, or maybe was an early adopter but is no longer the crossing-the-chasm customer that the company needs to be working toward. And so if you are over-indexing on the wrong set of metrics, and I would say you're probably indexing on the wrong thing if you're only measuring one thing, and we can get to that in a minute, but if you only have one or two data points, it's unlikely that you're looking at the whole context of the situation. So even if you're just pulling reports out of Jira, that might not be enough, and you've got to look at these things in context with the rest of the business.

And I think this is an evolution that I see happening now, but even five years ago I really struggled with my own clients in my old business to get them to think about engineering outcomes as not just internal outcomes, but about the connection between sales, and marketing, and finance, and the rest of the business. I see that changing now, but I often see people thinking, well, this is just what I have to do as the person responsible for engineering at this company, and I don't necessarily have to think about the complexity of these other organizations and how we are interrelated. I call that organizational agility, that ability to think about yourself and your outcomes as connected to the rest of the organization.

The dangers of indexing on the wrong metrics [14:48]

Kristen Buchanan: So, not to get too far afield, but you mentioned metrics, and this is something that I am extremely passionate about. There are philosophies, and some products on the market, that engineering teams are being offered that feel like not the right thing to me. For example, I think perhaps one of the easiest things you could do to measure productivity would be to say, well, how much code is getting written? This is problematic for so many reasons, and I don't think I have to explain that to anybody listening to this podcast, but oftentimes for other business partners outside of engineering this can be the question, and I think what's really important is for engineering leaders to be able to say why this is not valuable to us. From an adult learning perspective I can very quickly say that you're basically rewarding the wrong thing. All of us are going to resonate with the idea that the best code is often the most elegant, which is often the simplest.

There are certainly times when we may choose to do things that create some technical debt, and therefore there are some bloated parts of the code base, things that can trip us up later, but we made some trade-offs now. Measuring the lines of code, though, is not necessarily going to tell you the value of that code, or the outcomes that it has produced, and you can only see that in context with the rest of the organization, and with the actual outcome that the engineering team was supposed to reach. So if you're just looking at that one data point, it really won't help you.

Shane Hastie: We know that, and you're absolutely right, I would imagine that most of the audience for this podcast understands the dangers of trying to measure code. In fact, you could almost turn it around and say one of the best metrics is how much code are you removing? But that, in and of itself, gets to be dangerous as well. What would a suite of metrics be in a typical software engineering space? Now I do want to add the caveat that there's certainly no one-size-fits-all here, but what would be some of the common ones that people could start to think about, if they haven't?

Measuring the impact of learning, not the content [16:53]

Kristen Buchanan: I might perhaps start with something a little bit more broad and then get a little more focused as I go. There's a tool in adult learning science called Kirkpatrick's levels of evaluation, and Shane, you're nodding so I know you must be familiar with it. Frankly, a lot of learning and development professionals really struggle to move any learning experience beyond the first level, which is essentially: did you like the experience? Do you personally feel that you learned something, or liked it? But the holy grail really is to get to level four, where the individual who learned something made a behavior change in themselves, which engendered a behavior change in the organization, which then produced valuable outcomes, not output but outcomes. That framework is really key to our product design at Edify, but also just to the way that I talk about what these metrics really can be, what they should be, and what they shouldn't be.

So if you're thinking about wanting to draw a line between having onboarded a new hire and time to productivity, for example, and wanting to make that line shorter, I think Kirkpatrick's levels of evaluation is a very useful tool for building a mental model of this. And if we're going to think about that level four, the outcomes level that we think would be most valuable for engineering, then what are the metrics that would stem from that? Getting a little bit more specific, I would be curious about whether our new hires are actually asking intelligent questions.

This may sound ironic coming from a learning and development-focused person, but there actually are stupid questions. Mostly they're stupid because people don't know where they can find information which may already exist, they may be asking a question that, with the right context ahead of it, would not need to be asked. They may be asking a question of the wrong person, and that person is going to be too polite to say no, so they'll spend time answering that question for the new hire, and so there are all of these problems. If a new hire is actually equipped with the tools to ask more intelligent questions, we can start to see their evolution as a learner, from I'm learning right now, I'm experimenting, I might've failed a little bit and I'm learning from that failure and applying those lessons learned, we can start to see that reflected in the questions that are showing up in a one-on-one with their manager.

And so that might be, and I hate to not give you a quantitative example, but I think it's a really useful qualitative example of, a metric that managers can measure on a one-on-one basis. We're working on ways in Edify to do that from an AI perspective, to support a new hire's own understanding of their question, and how they might improve a question and get more out of it, so that's probably where I would start. The other component is, again, how much value and outcome this new hire is actually contributing to in the team. So if you can use a tool like Jira, or other tools that you're using to track work, can you actually see the components that a new hire has contributed to, and start to help the new hires themselves draw the line between, this is what I'm working on, and this is what is valuable for my customer, whether internal or external?

Shane Hastie: One of the things that's, dare I say, fashionable at the moment, and that I've seen applied both well and badly, is OKRs as a mechanism to draw those lines. What do you think, and how have you seen them applied in that space?

OKRs are one possible tool, and are often applied badly [20:25]

Kristen Buchanan: I have a love/hate relationship with every framework, frankly. I think that if you put too much stock in any one framework, in the same way we talked about earlier, it can mislead you, and so some flexibility can be useful. I'll use a story from Edify, actually, to illustrate this. In the first quarter of this year we had OKRs for the team, and there were about six of them, a few of them engineering, a few of them product related, they spanned the areas of the business. And if you're familiar with OKRs, you know that they're meant to be about 75% achievable, and it depends how you're applying them in certain cases, but you want to be clear in your key results and ambitious in your objective.

What we found for our stage, at the time we were seven people in our company, so very small, we had just raised our seed round, so we were moving very quickly, we were actively trying to hire, we were going through Techstars, which was an amazing experience, but also exhausting, was that the OKR model was not inclusive of all of the work that we were doing that really rolled up into the success of that quarter. We didn't meet all of the goals, but we did meet other goals that we didn't realize back in December were going to be really important. And I think that's the challenge with OKRs: depending on the timing that you assign to an OKR, whether it's a quarter, or a month, or something like that, it may be, again, misleading, or it may not capture everything that you need.

So what we've actually tried, and I won't say switched to, because this is an experiment in our own business, is thinking of the quarter as two big swings, two six-week big swings inside of the quarter, and a six-week swing for the engineering team is going to have a couple of sprints within it. That has been really interesting for us this quarter because it has engaged our engineering team in taking ownership, individually and as a team, of things that they're going to ship, that they're going to be proud of and demo in about six weeks. We're just about finished with our first swing, so I'll be curious to see the outcome of this and what we learn from this experiment, but I think that OKRs can be very useful if they're cascaded down correctly, and if there is real buy-in, and that buy-in has to come from actual ownership. As an individual contributor engineer, it's not just, oh, yes, I agree, this is a good OKR, it has to be, I have decided this and chosen it, and I know how I'm going to contribute to it.

Shane Hastie: One of the things we touched on in our chat before we started recording was the friction that engineers experience in their work. What are some of those frictions? And again, how can we remove or reduce them?

Reduce friction in the developer experience – make work more humane [23:17]

Kristen Buchanan: I love engineers so much, I'm married to an engineer, and I have also met so many engineering teams who just think that it has to be bad, that it's always going to be irritating, that interviews will always be challenging, that onboarding is just bad, and that's just how it is, and they just let the friction pile up and work around it. As somebody who is not an engineer by trade or training, but who has worked really closely with engineers for many, many years, observing these situations, I've found that the friction points actually line up with the employee life cycle. If we think about recruiting, onboarding, continuous learning, performance management, even into offboarding, when somebody is leaving a team or a company, there are all of these moments where engineering teams are either creating or ingesting a lot of data. And that data often never really becomes knowledge, and it never really becomes actionable for people.

So there's a lot of spreadsheeting, there's a lot of, well, I don't know, HR might understand this, they did the exit interview, but we're not really sure what happened, or the HR team sends a 30, 60, 90 day survey, but that never gets looped back to the engineering team, so they can't improve their onboarding checklist. There are all of these moments where engineering teams aren't actually running at their highest-performing capacity. One of the things that I think about quite frequently, and this is probably because one of my earliest and longest mentors, Luke Kanies, who is the founder of Puppet, introduced me to the concept of DevOps many years ago, and immediately I thought, oh, well this is an amazing people management framework, why are we not using this in people management? This could be the most humane way to run engineering teams.

And I use the word humane very specifically, and I think, especially on this culture-focused podcast, it matters, because work is not always very humane. In fact, we call people resources in many places, and it makes me bristle, I know it makes you bristle, Shane. But if you look at the principles of DevOps, and it depends how you define them, there are five, six, seven of them, things like continuous improvement, continuous delivery, collaboration, a focus on openness and transparency, automation, these things could be used to mitigate those friction points in the employee life cycle.

Shane Hastie: A really interesting metaphor, or perhaps a way of thinking about people engagement in the DevOps model. What is continuous integration for people?

Continuous Integration for people [26:03]

Kristen Buchanan: I think continuous integration for people is the ability for a team to have a tool, and I think about that as an operating system, that allows them to essentially leapfrog between situations, so they're continuously integrating into new knowledge, or new relationships in their team. That's another reason that onboarding is so critical. Onboarding is actually, in my mind, the nexus of the most knowledge exchanges in a team: it has the highest frequency and highest volume of exchanges of information, and when a new hire is joining is the biggest point of that in a team's experience together. So if you think about that moment in time when a new hire is joining, they are choosing, because they joined this company, to integrate into an existing team. And if we think about continuous integration from a technology standpoint, we want a smooth process, we want to be able to push things through our environments so that nothing really breaks, and so that we know when it does break.

So we would love to be able to know, or even to have a warning light, essentially, when something is breaking in our people process. But frankly, the people that have to deal with that right now are engineering managers, and most of them are overloaded. Even managing just five people is a lot, five humans, but I know engineering managers who are managing 10 or 12 people, and that's a lot of human interaction, even just in one-on-one time every week, trying to understand the needs, the outcomes, the things that are happening with that person, tweaking performance, giving a little bit of coaching. And so often managers are too tapped to even try to identify issues that are coming two or three weeks from now.

Don’t try to change people’s natural ways of working, rather utilize them [27:55]

Kristen Buchanan: And so if we were able to do that, to have that pipeline way of thinking about our own teams, I think we would be communicating better. For that, I have a design principle of not changing people's daily workflow behavior, not changing their natural ways of working, because when we change people's ways of working, or when we try to, I should say, it's usually not successful. My favorite analogy for this is: if you've ever been walking where there's a big grassy space, and there are sidewalks, but the fastest way from one end of the area to the other is actually not along the sidewalk, people are going to carve a path through the grass, and that's an interesting way to see what people actually wanted to do, and how traffic was actually flowing.

That happens in our teams too, and if we had that kind of tooling to tell us, this is actually what's happening, or this is what needs to happen, we might be more intelligent about how we work with one another. I imagine that, from an outcomes perspective, this continuous integration, or other components of DevOps applied to people management, would actually help us run more humane organizations where we could say, "Ah, okay, you've been here for 30 days, it doesn't appear that you're enjoying this work, and it's not a good fit for you technically, let's off-board you quickly and get you into a role that is better for you somewhere else." Or, "Wow, you've actually sped through this experience and we didn't realize you were going to be at this high level, let's give you some more complex tasks to focus on." And not just in the new hire experience, but for the rest of your team as well.

Shane Hastie: Teams, teams are an interesting environment, aren't they? We have a common, what would I call it? Almost a dichotomy. We know that in high-performing environments and organizations one of the values is stable teams, keeping people together long enough to go through whatever model you want to use, form, storm, norm, perform, or whatever, getting to a point where they are effectively collaborating. On the other hand, the average tenure of a software engineer is 18 months, organizations are growing rapidly, or changing, and there's a lot of change in those teams. How do we maintain this productivity and performance while embracing and adapting to that change?

How do we maintain productivity and performance while embracing and adapting to change? [30:17]

Kristen Buchanan: It's a wonderful question, and my first answer is to have the system developed before you are going through that change. When I was a consultant this used to drive me insane: some company would raise a bunch of money, and then they would have 50 open positions, and they wouldn't call me until they'd hired half of them and had bad experiences with the onboarding, and then they would say, "Oh, now we need to fix this." And it's like, well, it was going to take me three to six months to get this built for you as a consultant, thankfully it takes much less time with our software, but what if we had had the system built six months prior, so that we could be retooling it, seeing what was working, testing it on our current team members, getting feedback, the same way that we build software, or would hope that you build software?

That's what I encourage engineering leaders to do, and that's what I'm trying to build with Edify, so that we can make this easier for customers, so that they have a system that is basically a soft landing for new team members as they're joining and moving, while things are changing, the requirements are changing, and we need to communicate those things out. I have a customer now at Edify who actually has a very, very well-articulated engineering philosophy, but his problem is that he's hiring 60 engineers, and they don't come from the same background, and he has a very specific way of working that he wants them to learn. He doesn't want to assume, "Oh, you came from AWS, or you came from here, or you came from there, and so you're going to work the way I want you to work." So he's got to keep layering people in, but he himself isn't scalable, he can't have a one-on-one for an hour and do a workshop on his engineering philosophy every time a new hire comes in, he can't do that even every week.

So think about what is automatable, what systems could be made, and I really encourage engineering teams to be creative in this thinking. I said earlier that I meet a lot of engineering teams that just feel like it has to be bad, and I don't think it has to be. If you could wave a magic wand, what would you wish for? "I wish I didn't have to do that work," or, "I wish I didn't have to communicate in that way to my business partner, I wish they could just figure this out." Somebody told me the other day that the most challenging thing for him right now is that other business partners will come and ask random people on his dev team for answers. It could be the marketing person asking some front-end developer about something.

And that breaks their focus, because it shows up in Slack. So you have switching costs, you have all of these painful points, but if there were systems for this, where that question could go into a queue, or could be answered through automated documentation, or a bot that queries the existing documentation and tries to shield the dev team from it, then you can start to retain that high-velocity, high-performance culture.

Shane Hastie: Kristen, thanks so much, some really interesting points in here. If people want to continue the conversation, where do they find you?

Kristen Buchanan: I love to talk to people on Twitter, you can find me @Kristenmaeve. You can also reach out to me on LinkedIn, and I always love having conversations with engineering managers and leaders who are trying to do this for their teams.

Shane Hastie: Thank you so much.
