
Hidden Decisions You Don’t Know You’re Making

Summary

Dan Fike and Shawna Martell explain how "hidden decisions" silently shape software architecture and engineering culture. By examining the invisible defaults behind CI/CD bottlenecks, platform complexity, and misaligned metrics, they share frameworks for leading with intentionality. Learn to identify the "decision behind the decision" to better incentivize high-performing teams and careers.

Bio

Shawna Martell is Principal Software Engineer @Imprint, previously @Carta, @Yahoo, @Verizon, and @Wolfram. Dan Fike is Principal Engineer and Deputy to the CTO @Carta, previously @Google and @Yahoo, 5+ years implementing strategies and solutions in the tech industry.

About the conference

Software is changing the world. QCon San Francisco empowers software development by facilitating the spread of knowledge and innovation in the developer community. A practitioner-driven conference, QCon is designed for technical team leads, architects, engineering directors, and project managers who influence innovation in their teams.

Transcript

Dan Fike: Let's talk about decisions today. Every project we work on begins with a decision. Every job we've had began with a decision. Every commit, every pull request, every hire began with a decision. We schedule meetings, get people together to make a decision, but before that, we decide that we need a meeting. Our workdays, our lives, really, are just one big sequence of decision-making. Some of you might feel this way already if you're sufficiently senior or in a leadership position. You've probably noticed an increasingly large majority of your time goes toward making a decision or facilitating a decision or maybe assembling the context required to build conviction around a decision.

Shawna Martell: It's not those obvious decisions we want to talk about today, and we're not going to get into the really existential questions like tabs versus spaces either. I think we do know the answer, it's fine. That's not to say that those decisions aren't important, though, or that they're easy to make. Often, behind these overt decisions we're making every day, there's another decision. There's a hidden decision. A decision that we made without even realizing it. It often reaches beyond the surface of the issue at hand that we're trying to resolve into shaping our culture and our relationships and our architecture. The systems we build and the ones we don't. They can shape our careers and our future decisions.

Dan Fike: Everyone here today made a decision to be here. We have different sessions hosted by different people on different tracks. You've all made decisions about how to spend your time. These weren't hidden decisions. You knew what you were doing and you understood the consequences of that. Obviously by attending one session at 2:45 this afternoon, you aren't attending one of the other sessions. I'd wager that some of you made these choices based on what's most interesting to you personally. Others may have made choices based on what seems most relevant to the struggles your employer is facing today. For some of you, those two things are the same, but not all of us. We didn't just make a decision around which talks to go to, we made a decision about why. We developed a principle that, when applied, resolves that decision. Basically, we made a decision about how to decide, and we probably didn't even realize it.

Shawna Martell: Those are the decisions we do want to talk about today. Those are hidden decisions. They're buried just a bit deeper behind the overt decisions that we're already making all the time.

I'm Shawna Martell. I'm a principal engineer at Imprint.

Dan Fike: I'm Dan Fike. I'm a principal engineer and deputy to the CTO at Carta.

Shawna Martell: Between Dan and me, we have more than 40 years of experience in technology. We have seen so many of these hidden decisions. I know we like to think that decisions get made in documents and in meetings. So often they do. We also find that some of the most important decisions are ones that we don't even realize that we're making. What we want to do today is walk you through some of the common examples that we've found in our experience and talk about how we can be more intentional going forward.

Most Successful Platform Tech Enables Simplicity, Not Complexity

Dan Fike: To get us started, I'm going to come out swinging with a bit of a hot take. I spent much of my career working on backend platform tooling. One of the things I've come to believe is that the most successful platform tech, or rather, most successful platform tech, enables simplicity and not complexity. There's obviously some nuance to this and there are going to be exceptions. I think it's important you read this as "most successful platform tech" and not "the most successful platform tech". English really needs parentheses. I believe this is the right way to build new platform tools. We've all encountered or maybe built some tooling that solved 10 tricky problems and introduced 9 new ones. Where the complexity of our life without that platform ends up being replaced by the new complexity of life with that platform.

Shawna Martell: If your platform supports 94 use cases but nobody understands the first one, maybe you're not really helping.

Dan Fike: No, ask me how I know that. I believe the root cause here is when your primary goal for your platform tools is to enable users to achieve complex results. I think the antidote is to enable users to avoid complexity, not achieve it. You want something simple to understand and simple to use. I don't need you to agree with me on this. That's not what I'm up here for. I'm not trying to convince you I'm right. I want you to understand that this is a decision that I've made about how my teams build software. It was deliberate. We sat down. We talked about it. We thought about it. We wrote it down in our platform execution strategy.

This is a principle that our engineers are considering when they design systems and weigh tradeoffs. It wasn't always this way. I was there 3,000 years ago when our systems were designed and plans were laid without this principle to steer them at all. When a small solution or a small design to a small problem gets tweaked and tweaked and tweaked until it becomes a complex solution to every kind of problem. Indexing services that were generic enough to do nothing, or new foundations that launched with a barrier to entry so large nobody adopted them. Or arguably worse, some people adopted them. I've seen this happen. I can tell you what happened. I can't tell you why or when it happened, but I can tell you what happened.

The organization had subconsciously, unintentionally decided that a new platform utility is going to be considered successful based on how many users it gets. How many people are going to use it? To the engineer who's building this, the best way to make sure that people will use it instead of whatever they were doing before is to make sure it can fulfill every single need and use case they have, giving them no reason to hold out.

The Decision to Build

Shawna Martell: In all the cases where we've seen this go wrong, it wasn't that the organization sat down and said, you know what we should do? We should incentivize complexity by measuring success in the number of users for our platform. Of course not. It wasn't that intentional. This wasn't encoded in any set of quarterly business goals, but it was a decision nonetheless. Behind our well-intentioned design documents and migration plans and success metrics, there was this additional decision that we didn't immediately recognize.

Dan Fike: There's one particular kind of hidden decision that we've seen be overlooked more than any others. I've done it. She's done it. Most of you have probably done it too. Long before we decide what tools to build or what abstractions to build, or whether we should build in line or pull it out into a new library, before any of that, we decide to build. I get it. We're software engineers. That's what we do. We build and we rebuild. Sometimes, and maybe more often than you think, don't. Maybe don't build. Often, engineers, we haven't decided on a real principle for what to build and what not to build and how to make that decision. We skip that part entirely. We're living under an unintentional principle that code is the default solution to our problems.

Shawna Martell: I want to share a story from a project I was working on not all that long ago. This was a pretty bleeding edge project for the organization as we were just starting to get our feet wet with AI agents. This isn't specifically an AI talk, but we are going to talk about AI just a little tiny bit. It was a pretty ambitious project for the team. We were building out a set of AI agents that were designed to help some of our internal teams with some pretty critical customer work. We were moving really fast. Maybe too fast at times. Iterations flew out the door just about as fast as the ideas that sparked them. The energy on the team was really electric. Beneath all of this momentum, there was a growing discomfort because we were building in the dark.

Unfortunately, we hadn't sufficiently considered our observability needs up front. We were relying on our users for feedback. They were incredibly generous. They told us their stories and they shared their frustrations. These anecdotes, they gave us a glimpse into our agents' performance. We needed more. We needed numbers. We needed clarity. We had questions. We had answers, kind of, that were scattered in user logs and traces of decision flows and conversation histories. You think that I'm exaggerating with these slashes. I think I actually was a little bit. It was probably more. It basically looked like some JSON had lost a fight with a wood chipper. It was real bad. We didn't need new infrastructure, but we really did need insight.

At least that was what I pretty fervently believed. Unfortunately, the team had gotten to a place, though, where we were thrashing on this problem more than anything. I probably shouldn't have been super surprised when I came into work one day and found a giant pull request waiting for my review. Does anybody else want to know what the one line removed is? Because I'm dying to know. It was thousands of lines of code introducing a new generalized agent metric tracking solution. It was clear the engineer who worked on this had given it a lot of thought.

My stomach sank because we had skipped a step. We had skipped the step. We weren't building to learn. We were building to build. The decision to build it never felt like a decision. It wasn't like the engineer who worked on this sat down and considered the tradeoffs and picked one. It wasn't that intentional. They were going with the flow. The existing data was really awkward and hard to work with, and that was a problem. We solve problems with code. It's true. It's generous to say that it was awkward and hard to work with. It was real bad. That was a problem, but it wasn't the problem. In some ways, it was more of a distraction.

The team got together. We needed to have a conversation. This was serious. Were we really going to commit all of this new code to production when we still hadn't answered a single question about our agent's performance yet? Is this what we needed to gain the insights we were looking for? No, it really wasn't. Instead, we built the world's ugliest Jupyter Notebook. It was basically a crime scene. It was real bad. We parsed some real weird blobs. We did this a lot. It wasn't pretty. It definitely wasn't impressive. It did give us the insights that we needed. What was even better about it was that we learned that that generalized agent metric tracker, it wasn't even quite positioned to give us the data we actually needed.
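As a sketch of what that "ugliest Jupyter Notebook" kind of analysis looks like in practice, here's the shape of the idea: a best-effort parse of messy log blobs plus one concrete question. The field names and log format below are hypothetical, invented for illustration, not the actual system's.

```python
import json
from collections import Counter

def parse_agent_logs(lines):
    """Best-effort parse of messy per-turn log blobs; skip whatever won't parse.

    Tolerating the "wood chipper" lines in a notebook beats building a
    generalized ingestion pipeline before we know which questions matter.
    """
    records = []
    for line in lines:
        try:
            records.append(json.loads(line))
        except json.JSONDecodeError:
            continue  # garbage blob: skip it rather than engineer around it
    return records

def success_rates(records):
    """Answer one concrete question: how often does each agent succeed?

    The "agent" and "outcome" keys are assumed for this sketch.
    """
    outcomes = Counter((r.get("agent"), r.get("outcome")) for r in records)
    per_agent = {}
    for (agent, outcome), n in outcomes.items():
        stats = per_agent.setdefault(agent, {"ok": 0, "total": 0})
        stats["total"] += n
        if outcome == "success":
            stats["ok"] += n
    return {a: s["ok"] / s["total"] for a, s in per_agent.items() if s["total"]}
```

Twenty throwaway lines like these can answer "is the agent working?" well before a generalized metrics tracker could, and, as in the story above, they can also reveal that the big build wouldn't have captured the right data anyway.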

Sometimes, as engineers, the right decision for us is to not build. It's true that the software we build is one of the most valuable assets for our companies, but it's also one of its greatest liabilities. When we fall into that default of shipping new code, we're also falling into the default of taking on that liability. The hidden decision isn't just to build something new, it's also to accept the long-term cost and risk that comes along with those new builds. When we're going to accept those costs and risks, it's important that we consciously make that decision. I want to go just one layer deeper here. Why is it so often our intuition to build something new? It's often because our institutions, they incentivize us to do so. Engineering productivity, performance, all of those are measured in systems built and code shipped. I've never known anybody that got promoted to staff engineer on the back of a good Jupyter Notebook. These hidden decisions, they get established in our culture and then are allowed to continue to persist.

The Tight Coupling of Hidden Decisions and Incentives

Dan Fike: Incentives and hidden decisions are really tightly coupled. Hidden decisions are actually often the cause of these misaligned incentives, either directly or through the tools those decisions create. They absolutely shape how we work and how we collaborate. They shape the behaviors we're silently optimizing for. I'm going to show you what I mean with a simple example that we can all relate to. Let's talk about our build pipelines and our deployment pipelines.

For some of you, this is your favorite thing to talk about. For others, this is your favorite thing to talk about. Your perspective is probably determined by what you can do while your pipeline runs. If it's slow enough for you to grab a cup of coffee, you're probably a happy caffeinated engineer. If it's slow enough for you to make Thanksgiving dinner, you might feel a bit differently. I don't know what it means if it takes 30 days. Either way, you likely spend a lot of time talking about it, reasoning about it. Do you run the full test suite on every pull request or just a subset? How many linters do you need? How are you going to handle type checking? Do you want your security scans for prevention or detection? These aren't hidden choices. These go in design docs. They get discussed in meetings. They show up in tickets. They feel deliberate, and that's because they are.

There is one decision buried in there that we rarely name. What behaviors does our CI/CD pipeline incentivize? Imagine this. You're a developer with big ideas and tight deadlines. Seems familiar. Every time you open a pull request, it takes an hour for that CI pipeline to run. Now ask yourself, how many times a day can you wait for that? It's not very many. What do we do? Do we just get slower? No. We start bundling. Instead of small, manageable pull requests, we put big changes into big pull requests. We try to outsmart the CI bottleneck. A slow pipeline doesn't just delay us, it reshapes the way we work around it. Suddenly, instead of these small PRs, we have massive ones that are hard to review, that suddenly see a lot more rubber stamping to get them through, and become almost impossible to reason about fully. That's the hidden decision here.

None of us are going to sit down and say, what I really want to do is encourage our engineers to make big pull requests. No, we're not doing that. Nobody's putting that in an engineering strategy. When we choose to allow our pipelines to drift into being slow and cumbersome, that's the effect, whether we realize it or not. I'm not saying security scans don't matter or our tests aren't critical. The right answer to what belongs in your CI pipeline is going to be, it depends. Quality isn't just about test coverage numbers. It's about behavior. Hidden decisions like this, they don't stay technical. They become cultural. They become how new engineers get onboarded. They become how these pull requests just get stamped and approved. They become how risk starts to accumulate in our system over time. That culture, it's shaped as much by the friction we choose to remove as it is by the friction we choose to leave in place.

Shawna Martell: Once it becomes a cultural problem, it's something that tooling alone can't fix. That's why this matters. The decision we need to be making very deliberately is what behaviors do we want to incentivize in our engineers? How can our tooling support that?

Dan Fike: You can see how these hidden cultural decisions can be buried beneath the surface of this technical engineering work. They're also emergent in many of the non-technical ways in which we work every day. They show up in meetings. They show up in who gets heard and how ideas evolve. In the silent defaults that shape whose voice is trusted and whose decisions stick. Let me ask you something. When was the last time your team made a decision based on who had the strongest conviction instead of who had the clearest insight? Sometimes we don't follow the data very well. We follow the loudest voice, or whoever's most senior, or the person who's just really good at writing something persuasive in Slack. Often, we just follow the person we're used to asking first. It's not because we're trying to exclude others. It's just because we're defaulting. Default is really powerful.

Shawna Martell: It's often not just which voice in the room stood out. It's who is even in the room at all. This is how our cultures can accidentally calcify. Our rooms get smaller and our voices get narrower. We stop noticing who we trust by default or which edge cases we're ignoring. Those choices, even though they may happen without us noticing, they do shape the long-term direction of our systems.

Ownership Shapes Behavior

Dan Fike: One of the most interesting hidden decisions that's buried in our culture is around ownership. We talk about it like it's a virtue, and most of the time it is. It definitely is, for sure. If we're not intentional, ownership can become a constraint. There's a great phrase some of us may have heard before, shipping the org chart. This is what happens when our architecture starts to mirror the company's structure. That same thing can happen in the decision-making layer, too. People start owning decisions not because it's what's right for the product or right for the business, but just because it fits cleanly into our org chart.

Shawna Martell: I want to tell you a story of how I saw this play out in real life. Many years ago now, I was on a team that was building a new platform service that integrated with a third-party HR system. Think like a Workday or a Rippling. Basically, we were responsible for building the platform where customers could hook up their data and then disseminate the normalized HR data to different parts of our product. It powered lots of different user experiences. It's something that had been running in production for many years. I'd long moved on to other things. Then, one day, I got pinged to review a document from another team.

Dan Fike: While I have the platform, don't do this. Don't start Slack conversations like that.

Shawna Martell: As I was saying, I got pinged to review a document from another team. They were planning on building a new HR integration layer. I admit, I was initially pretty skeptical, but I tried to go in with an open mind. Maybe there were good reasons for this. They had a specific set of use cases that just weren't handled by the existing platform. Their document laid out their plan and their architecture and their use cases. It was pretty thorough. Their path was pretty clear. While I was reviewing this document, there was something that was bothering me, because there just wasn't any evidence that the existing platform had been seriously considered in this decision-making. They acknowledged that it didn't solve all of their problems. I didn't see the curiosity as to whether or not maybe it could be extended into these new use cases.

I started asking questions. Why are we building this from scratch? Why aren't we using the platform we already have? What makes that path a non-starter? It didn't take me too long before I got to the root of it. You see, the existing platform, it was really focused on user experience and scalability and data normalization. This new proposed solution, it needed to focus on user experience and scalability and data normalization. It wasn't about architecture at all. It wasn't about capabilities or scale or features either, really. It was about ownership.

The decision that had happened basically invisibly is that we don't build on things we don't own. We don't ask other teams to adapt. We just avoid that friction entirely. That was the hidden decision. We all know that this duplication comes with really serious consequences. Fractured user experiences. You end up with all of the delightful long-term costs of two pieces of parallel infrastructure built to solve basically the same problem, but in a slightly different way. The existing platform wasn't owned by this team. It was written in a different language, managed by a different group, lived in a different repo, and had a totally different roadmap.

Even though it solved about 80% of the same problem and I was pretty sure could be extended to reach into these use cases as well, it just wasn't even in the running. It's not like the team got together in a meeting and said, you know what we should do? We should set this company up to maintain two HR integration platforms forever. Because that would be bananas. That was what they set out to do, even though it was unintentional. Because this decision didn't happen in a meeting, it happened in the absence of one, where we skipped that step of going to talk to the platform team to find out what was and wasn't possible.

I genuinely believe in ownership. I think it's a really good thing. Building a second platform because the existing one lives in another team's repo, that's not ownership. That's, at best, inertia masquerading as strategy. It's definitely legacy just waiting to happen. Again, this wasn't a malicious decision. It was invisible. The default had become each team builds and manages their own stack. Then we skipped the really important underlying question of when does it make sense for our teams to differentiate versus when should we invest in shared infrastructure? Both solutions can be right, it just depends on the context. What matters is having that shared principle. That moment where you name the default and you choose to either accept it or to challenge it. Because if we don't do that, what happens is the most important decisions we make, they're not going to be made with intention, they're just going to happen.

Measurement Cements Behavior

Dan Fike: If these defaults keep shipping and shipping, they'll start to harden. Their grip on your culture gets tighter and tighter. Where these defaults tend to harden first is in what we count, in what we measure. If ownership shapes behavior, measurement cements it. One of the most important decisions we can make for ourselves is deciding how we measure our own success. If you want to understand what any team truly values, don't look at their mission statement, look at what they measure. Because what we measure becomes what we optimize. When we optimize, that shows what we believe really matters, even if nobody quite decided it that way.

Shawna Martell: We say we want impact, then we measure feature delivery. We say we want customer delight, and then we measure ticket throughput. We say we want resilience, then we measure uptime, but ignore operational drag.

Dan Fike: That's not strategy, that's measurement drift. It's a culture built by accident, one dashboard at a time. It's subtle, because these aren't evil metrics. Like shipping velocity, cycle time, MTTR, these are all useful signals. Once we start treating them as a proxy for value without revisiting how they relate to it, they can start to become dangerous. Because once a number becomes a goal, it stops being neutral, and it starts shaping behavior. I saw this play out some years ago when one of our infrastructure teams was standing up Kafka. This is new infrastructure. They were putting together the brokers and the topics. They were setting it all up, building a schema registry, putting together some Codegen tools, the works.

The big idea was we wanted to enable teams to move away from tight synchronous dependencies and into something more decoupled. This is a good goal. Since it was new, the team needed some way to track progress to prove it was working. They picked the obvious metric, how many events are flowing through Kafka? That's the signal. More events means more adoption. More adoption means we're solving more dependency issues. That's not what happened. We started encouraging teams to instrument all kinds of data, all kinds of events. Helpful ones, but also speculative ones. Maybe useful one-day events, or we should probably have an event here just in case events. Those event counts started going up. They started going up a lot. There's just one problem, almost no one was consuming them. Even worse, most of the usage growth could be tracked back to a small number of very high frequency events. Technically, the team's progress looked healthy.

Functionally, the core problem wasn't really improving. Teams were still depending on overly coupled services that make brittle synchronous API calls unnecessarily. You see the real proxy that we needed to measure. It's not the number of events that we sent. I know what you're thinking, but it's not the number of events consumed either. What we needed to measure was the number of distinct event consumers that exist in our codebase. Each of those is replacing what would have previously been a blocking call. You can think of it as the number of lines in our dependency graph that should go from a solid line to a dotted line. That's the measurement. The team wasn't being political. They weren't chasing vanity. What they were measuring and encouraging was useful, in a sense, but it did result in a lot of activity and not a lot of progress.
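One way to approximate that "solid line to dotted line" count is to scan the codebase for consumer registrations rather than counting events. This sketch assumes a hypothetical decorator convention; `@consumes(...)` and the topic names are made up for illustration, not a real API. The point is that the unit being counted is a dependency edge, not event volume.

```python
import re

# Hypothetical team convention: a consumer registers with a decorator like
#   @consumes("hr.employee_updated")
# The decorator name and topic format are illustrative assumptions.
CONSUMER_RE = re.compile(r'@consumes\(\s*["\']([\w.]+)["\']\s*\)')

def distinct_consumer_edges(source_files):
    """Count distinct (file, topic) consumer registrations across a codebase.

    Each edge represents a dependency that moved from a blocking synchronous
    call (a solid line in the dependency graph) to an event subscription
    (a dotted line). `source_files` maps file paths to their text.
    """
    edges = set()
    for path, text in source_files.items():
        for topic in CONSUMER_RE.findall(text):
            edges.add((path, topic))
    return len(edges)
```

A metric like this would have stayed flat while raw event counts soared, which is exactly the signal the team needed: lots of producing, very little decoupling.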

Before the team had deliberately decided what to measure, they had unintentionally decided to focus on the solution they were building and not the problem they were solving. When they asked, is it working? they hadn't agreed on or even identified what "it" was. The team might have pivoted away from this measurement a little sooner, but another pernicious thing started to happen, we started to celebrate it. The more a metric like this gets rewarded, the less anyone's going to question it. The more we invest in tools to help track it or showcase it, the less anyone's going to ask, wait, is this even the right thing to be measuring? This false success makes it so much harder for us to find the real success. It makes that hidden decision even harder to identify.

Shawna Martell: Because metrics aren't just a way to observe performance, metrics teach us what success looks like. If we don't choose them intentionally, we're going to end up teaching our teams the wrong things about what matters. If we spend a lot of time measuring shipping velocity, our teams are going to optimize for output. If we really focus on measuring what users say they want, we may well miss what users actually need.

Dan Fike: We're often not measuring the things that actually help us build better judgment, because those things are harder to measure. It's easy to allow these defaults to creep in, especially because the easy things to measure often look like the interesting things.

Hidden Decisions in Career Strategy

Shawna Martell: This isn't just true for our teams, it's true for us and our careers, too. The things that are the easiest to measure are title changes and our promotions and our job changes. Those can quietly start to teach us what success means for us as individuals. Like I said at the top, I've been in the industry a long time now, but I haven't actually worked that many places. I don't live in San Francisco or New York or one of the big tech hubs. That shaped a lot of the decisions that I made early on in my career. Or maybe I should think of it more as the decisions I didn't realize that I was making.

For my first seven years out of school, I worked at Wolfram Research, the last few as a people manager. Near what turned out to be the end of my time there, I decided that I wanted to go back to being an IC. This was not a hidden decision. This was a very deliberate choice that I made. It was something I had wanted to do for a while. They were very nice. They let me go back to being an IC. I was really hoping it would be the reset that I was looking for in my career. For lots of reasons, it just wasn't. That's when things got a lot fuzzier because I told myself I wanted something different, and so I left. It felt really decisive. It felt like action. It felt like momentum. What I didn't realize until much later was that I had actually only made half of a decision. I decided that I was leaving, but I really hadn't spent much time thinking about where I was going. I didn't ask myself what kind of work I actually wanted. I didn't ask myself the really uncomfortable questions like, what work brings me energy? Or, what key traits am I looking for in my next opportunity? That's the hidden decision.

After taking one job interview and accepting that role, I told myself, this is clarity. What I'd actually done was allow circumstance to choose for me. I had decided unintentionally that my strategy was anywhere but here. In the end, I was very lucky. It worked out. Again, not because it was well thought out. This was just pure luck. I ended up in a place with really brilliant people. I got to solve really amazing problems. It was my very first opportunity to work with Dan. It was very lucky. Luck isn't a strategy. What I've learned since then is that you don't get to opt out of having a career strategy. Even "I'll just see what happens", that is a strategy. It's just one where momentum decides for you. This is what makes these decisions so sneaky, because they don't feel like decisions at all, they feel like action. They feel like momentum.

What I've learned since then is that the decision to leave is really only half the move, because the other half is asking, where are you going and why? What matters to me now? Because if we don't name those things for ourselves, our circumstances will name them for us. I do want to acknowledge that I understand I was incredibly fortunate in this situation. Not all of us get to choose our moment. Sometimes your job just disappears. Sometimes the market turns against you. Sometimes survival is the only decision that you get to make. I understand I was very lucky to have this good fortune. Even if we don't get to decide the start of our change, we really do deserve to be intentional about what comes next.

Dan Fike: The thing is, hidden decisions like these don't just happen when we change jobs. They happen every day. Not just in your codebase and not just in your team culture, but in your career. My first job out of school was in the games industry. I was working at a studio called Volition, which sadly does not exist anymore. I was thrilled to have the job. I was a little thrilled to have any job, because despite that being my first role after school, I'd actually also already been laid off once. I'd had a different role lined up, and as it would happen, the only paycheck I'd get from them was a signing bonus and a severance check. It's not bad for zero days worked, but it's a pretty uncomfortable situation to be in as a new college graduate in June. I'm behind the ball already. Working at Volition was not the only choice I had, it was the one I was most excited about, for sure. In 2007, it was notoriously difficult to break into the games industry.

As a lifelong gamer myself, I was psyched at the opportunity. After playing games for years, I was about to make them. Volition was a small studio in a minor metropolitan area in the middle of the Midwest. Attrition was low. It was uncommon for folks to leave, and it was uncommon for new folks to start. As such, they didn't have a very sophisticated onboarding process. I didn't even know what team or what project I was going to be on until I showed up for my first day. Maybe I'd be on the team building the new Saints Row game, which they called SR2, or maybe I'd get to be on the team building the new Red Faction game, which they called RFG. I showed up on my first day, and they tell me I'm going to be on the CTG team. That's a new term. I hadn't seen that one. Maybe this is some new secret IP that no one knows about.

Turns out CTG stood for Core Technology Group. Yes, that's not a game, at least not one I think would be very fun to play. They laughed. It went over well. I was going to join the team that builds the tools that the engineers and the artists and the designers use to build the game. Think about memory profilers and debug utilities and build pipelines, asset browsers and content serializers, the stuff behind the stuff. At the time I felt a bit disappointed. This isn't making games, is it? It is, but I was naive at the time, and I didn't see it that way.

Most of the staff at Volition had Xbox and PlayStation consoles at their desk to do their work, and here I didn't even need one. This is fine. I can do this for a while. It's a small company. People wear many hats. I'll get more hats. I've got a lot to learn still. I did the work. Each day I was making a decision to do the next right thing. This was nearly 20 years ago, and there's a pattern that's pretty clear now. Across almost every job I've had, I've built software whose primary users were my coworkers: internal platforms, infrastructure, tooling, systems for other engineers. It wasn't because I sat down and established a deliberate career strategy focused on improving internal velocity through backend tooling. It had just become a default. It was what I'd learned to reach for, what felt natural, what I'd been rewarded for. It's what I'd become good at. That's the part I want to pause on. So much of our career trajectory gets shaped here, in the things we reach for over and over. These choices, over time, start to compound.

The work you lean into is going to shape the skills you develop, and the skills you demonstrate are going to shape the reputation you build. That reputation shapes the value others will project upon you, and that value in turn determines what roles or scope or problems you have access to. No one told me I was on a tools and platforms track. I just started walking, and someone pointed me in a direction. Over time, the trail behind me started to look a lot like identity. I enjoy my work quite a bit. Did I get good at it because I liked it, or did I start to like it because I was good at it? Your professional identity isn't something you declare, it's something you accumulate. Once that happens, once people start seeing you as a specific type of engineer, changing isn't just technically difficult, it's risky. It's emotionally risky. It's politically risky. Because you're not just switching tech stacks, you're pushing against what other people believe about you.

Often, you're pushing against what you believe about you. This shows up everywhere: glue work, testing, developer productivity, zero to one product builds, migration efforts, observability, operations. These are all quiet niches where strong engineers go looking for problems, and they have no problem finding them. They get good at solving them. Then they get quietly boxed in by the value they've proven. Every day, we go to work, and we do the next right thing.

The task that lives at the intersection of what the company needs, and what we're good at, and what nobody else wants to do. That's not necessarily wrong. We need to decide why we're doing it. We need to decide what principles are behind that decision. We need to decide what our career strategy is. Is it because this is work you enjoy, or maybe it's because you're actually indifferent? Is it because you believe that deep specialization is coveted and valued, or is it because you find the heroics of it satisfying? Maybe it's because you want the line to go up and to the right. There's a difference between choosing your career and accepting the one that shows up when you don't.

Shawna Martell: I know the context is completely different, but this really is the same decision we asked you about at the beginning, where we asked you how you decided which tracks and talks to attend. As we think about what our career strategy has been, or maybe as we decide what it should be, are your choices shaped by your principles? Or is it more like you ended up here automatically, without as much intentionality as you hoped?

Recap

Let's talk about what we've discussed so far, a little recap.

Dan Fike: That's not what we said. Is that what we said?

Shawna Martell: Not exactly. No. We're not saying you should do more of this, or you should do less of that. What we're saying is that when you do this, or when you don't, it should be a conscious choice that you are making actively, and not something that just happened automatically. When you make that choice, understand what principles you are applying in this situation to make it the right choice for today.

Overarching Trends/Themes

Dan Fike: At this point, we hope you've each heard a story or two that sounds familiar. Maybe hope is the wrong word, because these aren't exactly great showcases of awesome work happening, but we expect that some of these stories feel a little relatable. We had a story about building a new AI metrics platform, instead of using the data we already have, and a story about how slow CI/CD pipelines can create big pull requests. We shared a story about a duplicate HRIS solution, and another about counting Kafka messages, instead of measuring reduced coupling. Shawna shared a story about her leaving a job, and I shared one about me being assigned one. When you start to look at these together, there's a few trends and themes that start to emerge.

Shawna Martell: The first is about spotting decisions that don't even feel like decisions. Everything we're working on came after a decision. Did you notice it? Everything we aren't working on came after a decision. Did you notice that? Every tool we use, every frustration we overlook, every meeting we attend, everyone who was included or excluded from that meeting, all of these came after a decision. Did we notice it? We talked about one of the sneakiest decisions of all, which is just deciding to build in the first place, and how those decisions of what to build can unintentionally take the shape of our organizations, even if we don't intend for them to, sometimes under the guise of ownership.

We talked about how these defaults can sneak into our code and our careers. This isn't where we do the clickbait thing and tell you the one trick. We don't know exactly how to tell you to make sure you see these next time. The best advice we can give you, and what we try to do ourselves, is this: before we build that next thing or schedule that meeting, just take a brief moment and ask ourselves, is there some default we're falling into here unintentionally?

Dan Fike: The second theme is about finding the decision behind the decision. Whether it's one that was easy to spot or one you normally don't notice, behind it, there's often another decision. Usually that layered decision helps explain why you made the choice you did. You're probably familiar with the idea of asking why five times to get to the root cause of some incident or problem. Turns out that same process can be applied to discover the hidden decisions lurking beneath your visible ones.

If you dig deep enough, you can usually find some core principle or philosophy that shaped that outcome. That's your strategy. Maybe it's working backward from the decision-making culture you want to pursue. Maybe it's about weighing the tradeoffs between autonomy and redundancy. Maybe it's about defining a proactive career strategy. These all provide opportunities to decide how you make a decision before you start deciding. When we're done here and you all have a moment to breathe again, I want you to do a bit of a retrospective on some of your past decisions. Can you derive any principles from them? Can you define the strategy you were applying when you made that choice? Is that strategy something you deliberately chose, or did it just happen? Is it the right one? If not, what can you do about it? If so, how can you communicate that decision to the rest of your team, the decision to adopt that strategy? Then, when you're next about to make a decision, you can consider that strategy before you're fully committed.

Shawna Martell: Our third theme is around incentives. Often when you make a decision, there's this little tailgater decision that follows it into action in the form of some new incentive for your team. If you ever find yourself looking around thinking, I don't understand why my team or my peers are doing what they do, very often it boils down to misaligned incentives. Maybe their incentives aren't aligned with yours, or maybe they're not aligned with what the company actually needs them to do. We were talking earlier about how our metrics can teach us what success looks like. Once we notice how our metrics are shaping behavior, it can be really hard to unsee it. In our CI/CD story, we saw that we were teaching our teams to run the CI as little as possible.

Once you understand why they're doing what they do, it becomes a lot easier to identify how you might change their behavior in the future. Sometimes the decisions can be tricky to spot, but usually the behaviors are not. Maybe the team is neglecting to include tests in their PRs, or they're constantly declaring themselves unavailable for engineering interviews. Maybe they keep adding new responsibilities to some existing layer of code. The trick isn't to look at what people aren't doing. You need to look at what they are doing instead and ask yourself why. What incentives exist to explain this behavior? Are those the incentives we wanted? What decisions did we make to create the incentive structure that we have?

Dan Fike: We both worked for Will Larson for a couple of years. He's a smart guy, you might know him from his many books. He was on this stage a couple years ago giving a keynote, I think. One of the things Will considers a lot in his decision-making is what he calls the physics of the situation. It's getting out of our usual headspace for architecture or policy and just considering the natural mechanics, the cause and effect. Balls don't roll uphill. If you are, for example, iterating on a developer experience survey for your engineers, and you want the feedback you collect to be more structured and measurable, you might, in order to achieve that, place a larger burden on the respondent for how they organize their thoughts. If you do that, you're not going to get better feedback, you are going to get less feedback. That's the physics of it. If you make it harder to do, people will do less of it. When you're making a decision, you need to consider the physics. Does this decision make one behavior easier than another, or reward a particular sort of behavior? Is that what you want to incentivize?

Conclusion

Shawna Martell: You might be wondering, Dan and Shawna keep calling these hidden decisions, but aren't these basically just unintended consequences? I get that. What we've been talking about is what happens when we're not intentional, and there's a lot of overlap. But we call these decisions and not consequences, because when you call something an unintended consequence, you're saying, this just happened. It becomes something that we react to rather than something that we interrogate or explore or take responsibility for. In my experience, hidden decisions aren't hidden because they're complex. They're hidden because we just never admitted to ourselves that we were deciding at all. Can we start there? Can we start noticing when we're shaping outcomes? When you're making a decision, ask yourself, what principles and policy and process brought us to making this decision at this time with these people. Why? Were they the right principles? Did we end up here intentionally, or did we somehow wander in by accident?

Dan Fike: When we do decide, this time with intention, what principles to apply in our decision-making, and what culture we want those principles to promote, then we get to repeat this process. Why is that the culture you want your principles to promote? Because even these hidden decisions can be rooted in further hidden decisions. I'd like to wrap up our time with one parting thought for you to consider. This is a photo I took last April when Shawna and I were speaking at QCon London. You walk around town and you see these kinds of things on the ground near intersections. I assume it's an access panel for some electrical or telecom equipment or something. I'm not sure. I want you to look at this. Someone did this. As an independent thought exercise, I want you to take a minute and ask yourself, what incentives might have existed, or what hidden decisions might have created those incentives, for this to be the outcome? I want you to ask yourself, is this, metaphorically, happening in your workplace? Why?


Recorded at:

Mar 31, 2026
