Improving eBay's Development Velocity


Summary

Randy Shoup and Mark Weinberg discuss breaking down silos, measuring software delivery, continually reducing build, startup, PR validation, and deployment time, embedding experts in product teams.

Bio

Randy Shoup is VP Engineering and Chief Architect @eBay. Mark Weinberg is Vice President, Core Product Engineering @eBay.

About the conference

QCon Plus is a virtual conference for senior software engineers and architects that covers the trends, best practices, and solutions leveraged by the world's most innovative software organizations.


Transcript

Weinberg: I am here to talk to you about how we are using DevOps at eBay to transform our engineering culture. First, I thought I would just start out and talk about what problem we're trying to solve. Frankly, eBay is just too slow as a company when it comes to engineering. We lag industry leaders. That's a problem for us. Why does this matter? Engineering velocity, of course, leads to better customer experiences, stronger business results, and more engaged and happier employees. For example, if you're a developer, you just want to write code. Developers love writing code. You don't want to sit in wait states. If you're a product owner, you want to deliver more features and value for customers. If you're an executive at the company, you want to stay ahead and of course beat the competition. The bottom line for us is, frankly, we just need to move faster.

Mission

What's our mission? It's actually to turn this around and turn engineering velocity into a competitive advantage for the company. For us just getting a little bit faster won't be good enough. There are competitive threats everywhere, from big companies like Amazon, and Walmart, also, newer companies, startups like Shopify and StockX. We feel pressure and competitive pressure from multiple angles. Turning this around and improving our velocity is going to be a huge advantage for us.

Why Are We Here?

How did we get here? Randy and I spent the first three months really surveying the entire state of engineering velocity at eBay. We looked at the whole board. We talked to tons of engineers and engineering leaders across the company. At a high level, what we found is that we really had systemic challenges that have accumulated over many years, ranging across code and architecture. We have tons of tech debt, lots of monolithic code. We have missing tools and infrastructure, like a high quality staging environment. We had poor processes, for example, things like PRs with way too many changes in them. Long-lived feature branches, slow code review processes, yearly site-wide upgrades only. Then we had major team dependencies, which created the equivalent of distributed monoliths. As a result, there's really no silver bullet for these issues. It's going to require improvements across all areas, spanning all disciplines, from engineering, customer support, and finance, even up to our executive level. Luckily, many of the problems we've encountered are well known with well-established solution patterns. Many companies share these challenges; perhaps many of you work for companies that have them. The benefit is that we all get to learn from each other and share what we've done. Hopefully, this talk will help all of you.

Background

I'm Mark Weinberg. I've been at eBay for one year now. I don't write code for a living, but I still consider myself an engineer. I am co-leading this velocity initiative with Randy Shoup. I am the VP of Core Product Engineering, which means I lead some product teams. I lead a team working on eBay Stores. I have a small team working on mobile. I have a PMO that drives our planning process. Also, I act as the equivalent of the senior technical leader for the core product organization.

Shoup: I'm Randy Shoup. I'm VP and Chief Architect at eBay. My teams are responsible for the developer platform, the frameworks, all aspects of the developer experience. We're also responsible for the architecture standards across eBay. I have a stable of what we'll call enabling architects that go in and embed with individual teams. Then, I'm also responsible for the technology-wide program management organization. The great thing about this partnership is Mark is on the product engineering side, and I'm on the platform and infrastructure side. By doing stuff together, we can reframe and reprioritize work on Mark's part of the organization, on my part of the organization. We can connect teams together. We can unblock them when there are bottlenecks or issues. We can encourage people and permit them with a carrot. Then we can also suggest and mandate with a little bit of a stick. We shamelessly leverage all these capabilities as leaders together.

Assessment - Product Life Cycle

I'm going to talk a little bit about the assessment that Mark mentioned. We took several months doing what the lean manufacturing people would call a value stream map. We looked end-to-end at the product life cycle, so you can imagine that I think of it as divided into four areas. Planning, where somebody has an idea, and at some point, it becomes a project or things people work on. Development, where that project becomes committed code. Delivery where that committed code gets deployed to the site. Then an iteration where we do experimentation, and we do analytics, and we iterate on this stuff in real time. We have issues at all of those levels.

From the planning side, we have lots of issues of coordination between teams, a lot of inter-team dependencies, and basically every team at eBay has too much work in progress. From the development perspective, we talk to the developers and build and testing time is an issue, they want that to be much lower. Every developer feels like a lot of context switching and wait states. We have a pretty coupled architecture that has grown up over the 26 years of eBay's existence as a company. We don't have a strong set of service contracts. There's a lot of hidden work too that just doesn't bubble up to the higher levels.

In terms of software delivery, if I look overall at eBay not as much use of end-to-end development pipelines or deployment pipelines as we would like. Lots of issues in terms of staging. More manual testing than we'd like. When we started this work, no automated rollout of code to the site, no canary deployments, and then a minimal use of feature flags. Then, from an iteration perspective, again, if I look across eBay, not as much end-to-end monitoring, as we would like, of the customer experience. Some issues around tracking customer behavior, and what we like to call dysfunctional experimentation, where some teams do almost too much experimentation and are afraid to move forward. At the same time, some teams don't do any experimentation at all.

Lots of opportunities for improvement, as you can imagine, but we wanted to laser focus on the actual bottleneck that is happening at eBay. We decided to focus on the software delivery, and on the software development areas of this. Not that the other ones don't need improvement, but that they're not our priority at the moment. Why did we focus in the software delivery area? It's because improving software delivery makes all the other things possible by enabling us to change more quickly and reducing the cost of that change. As chief architect, I'd love for us to get to a less coupled architecture, and we are going to do it. I can't uncouple the architecture if we only deliver software once a month or once every couple weeks.

Measuring Success - Accelerate Metrics

Now I'm going to talk about how we measured our success. Since we're doing software delivery, we took a page out of the Accelerate book, and used the four Accelerate metrics to chart our progress. Those are deployment frequency, how often are we deploying software? Lead time for change, how long does it take a developer from committing her code till it shows up on the site? Time to restore service, so when we have an incident, how quickly can we restore service to customers? Change failure rate, what's the percentage of time when we do a deployment that we have to roll it back or hotfix it?
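The four metrics Randy lists can be computed directly from a deployment log. The sketch below illustrates two of them, deployment frequency and change failure rate, over a toy record set; the field names and data are hypothetical, not eBay's actual schema or tooling.

```python
# Illustrative DORA metric computation over a hypothetical deployment log.
from datetime import datetime

deploys = [  # hypothetical per-app deployment records
    {"at": datetime(2022, 3, 1, 10), "failed": False},
    {"at": datetime(2022, 3, 2, 11), "failed": True},   # rolled back or hotfixed
    {"at": datetime(2022, 3, 3, 9),  "failed": False},
    {"at": datetime(2022, 3, 4, 15), "failed": False},
]

def deployment_frequency(deploys, days):
    """Average number of deploys per day over the window."""
    return len(deploys) / days

def change_failure_rate(deploys):
    """Fraction of deploys that had to be rolled back or hotfixed."""
    return sum(d["failed"] for d in deploys) / len(deploys)

print(deployment_frequency(deploys, days=7))  # 4 deploys over a 7-day window
print(change_failure_rate(deploys))           # 1 failure out of 4 deploys = 0.25
```

Keeping the four numbers separate, as Randy describes later in the Q&A, makes each one individually actionable rather than folding them into a single composite score.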

If we look overall, at eBay, we're solidly in the medium performer category. One of the things that we like about this work is for the teams that we work with on this velocity work, we were able to move all of those teams into the high performer category. We'll talk more about how we did that, and what specifically we did to move the metrics. It's really exciting that we've been able to move the deployment frequency and lead time metrics, which we've been focusing on, and actually not even intentionally, also improving the time to restore service and the change failure rate. It really is true that you don't have to choose between speed and stability.

Velocity Initiative - Think Big, Start Small, Learn Fast

Weinberg: The first thing we did is we focused the effort on a select set of what we call pilot domains and their applications, and also a set of platform tracks. We knew we couldn't just go after every single application or every team at eBay. There was still learning for us to figure out the best way to achieve these wins at eBay. We started with roughly 10% of eBay's active applications; those are the ones in the pilot. Then we looked for a balance of short-term wins and long-term capabilities. Obviously, both matter. A simple thing like saving five minutes on a task for every engineer, every day, across the entire organization turns out to be a big win. We wanted to get lots of those. Then there were also larger changes like bigger code refactoring projects or some re-architecture. These have much bigger impact but obviously carry more risk. We did take on a couple of those.

Our big focus was driving improvements in developer productivity. We looked at our build times, our server startup, our PR validation times, things that developers do every single day in the standard build, test, debug cycle. We really just dug in there and went after those. Then we focused a huge amount of our effort on software delivery. Improvements here make everything faster. We have a ruthless focus on process automation, getting rid of our manual tasks wherever we can: automating our load and performance testing, our site speed testing. We have lots of team dependencies, so automating our concept of partner sign-offs, automating our security testing, our localization and accessibility. That ruthless focus on automation was a huge advantage. Then, faster deployments using canary deployments and traffic mirroring. Those were the primary things that drove these improvements in delivery.

Then we also spent time focusing on our instrumentation and monitoring. We found we're really flying blind in a lot of places. Alerting, better observability gives us more confidence to deploy faster or roll back faster. eBay, for example, is managing payments now, and so when it comes to managing payments and money, you have to be able to detect and resolve issues quickly. We couldn't fly blind. Then, lastly, longer term capabilities on the architecture side. We focused on areas with high active development, high customer usage, things like our view item page, which is the single most visited page on the site. Then also our mobile application. Then areas where we had lots of team dependencies, or brittle, hard to test code.

Platform Tracks and Pilot Domains

I mentioned platform tracks and the pilot domains. This just gives you an example. The platform tracks are more horizontal areas, things like build, our CI system, staging, but even things like educating and training our engineers. Then the domains are just more areas of the application, so things like selling certs and ads. It was really just this joint effort across the horizontals and verticals that I think have really made these things work well.

How We Work

Let's talk a little bit about how we work. Obviously, collaboration was a big part of enabling this. We have separate organizations, my organization and Randy's organization, and in the past those platform and application teams didn't really work that closely, or iteratively. We've brought the teams together, and they work as really one team. Then, we embed senior architects from his team in with the application teams to help: coach, advise, write some code. That's really worked well. On the communication side, Randy and I meet every single day. I think that's been instrumental in making sure we understand what's going on in each other's worlds. We run a weekly scrum of scrums, where individuals and teams learn from each other in a single place once a week. That's been really effective. Then, Randy and I do weekly deep dives with the teams, where we coach them. We push them. We find additional bottlenecks. We really just work closely to help them find issues. Then, lastly, we do a monthly operating review, which keeps our executive leadership team up to date and of course engaged and interested in the program, which is important because it's not a short-term program. It's something that needs investment and nurturing for the long term. Keeping them up to date really helps with that.

Shoup: I'll talk a little bit about the measurement side. One of the first things that we did was we put together a dashboard that showed those four key metrics for every app at eBay, all the several thousand applications. We can look at any app at any time and see where they are, going back a day, a week, a month, a year, in terms of their deployment frequency, their lead time, change failure rate, and MTTR. That same dashboard allowed us, for each deploy, to delve in, click through, and get more granular visibility into all the steps in the delivery pipeline. That helps us to, again, find bottlenecks and look for opportunities for improvement. Then each of those platform tracks, or horizontal tracks, also had input metrics that we were driving towards. On the build time, for example, we started where some applications would take an hour to build and now they take five minutes. Those kinds of input metrics are upstream from the outcomes of the four key Accelerate metrics.

In terms of iteration, like Mark suggested, our model is basically, "Hi, team, tell us some impediments that you have to going faster." We would always ask, what if we asked you to deploy daily or 10 times daily, tell us all the reasons why that doesn't work? Every team would give us this big long list of 10 or 20 different things. We'd say, we'll work on this thing, we'll work on that thing. We already have a team working on this. We always think of, what are the opportunities for improvement? Look at, what are the biggest places where we get the bang for the buck. Then we iterate on those. Within each of those, we do what Deming would call a Plan, Do, Check, Act cycle. We look at an opportunity for improvement. We do an experiment. We check to see if that experiment was successful. If it is successful, we double down on it and bring it more widely. If it wasn't successful, we try something else. Again, we stop the effort if we're not getting an improvement.

Then the other thing which has been super critical is training and workshops. We have a whole internal suite of training videos that a bunch of the architects put together, with individual teams talking about how they made velocity improvements in their areas. It's a lot of opportunity for sharing, but also opportunity for learning and dissemination.

Initiative Results

In terms of the initiative results, we're pretty proud of this, actually. We've been at this for this calendar year. Not quite a year. We have doubled the productivity of those pilot teams that we're working with. Again, it's roughly 10% of the actively developed applications at eBay. When we say double productivity, what do we mean? We mean that holding the team size constant with the same people in the same environment, they deliver twice the features and bug fixes that they were doing before. In terms of the Accelerate metrics, we've improved the deployment frequency, on average, 3x. We've shrunk the lead time 2.5x. Even without focusing directly on it, we've actually improved our change failure rate 3x as well.

Weinberg: I can't say enough about putting these metrics in place. It really gets rid of these debates about, is this working? It just gives you a tangible way to demonstrate, yes, the program is actually working. I highly recommend doing something like this. We focused on systematically removing these bottlenecks. We just jumped from one bottleneck to the next, team to team. Then we also focused on nuts and bolts, things like build, startup, and PR validation time. We improved our staging environment, so now we have great clean data that's privacy clean. We have last known good components, so people can run tests in staging reliably and trust the results. We automated many of our processes. Then we did things like moving our mobile release cadence from what used to be monthly to weekly, and that's been a huge improvement. It's improved our quality, because you don't have people rushing changes in because they don't want to miss the train. There's another train next week now. That's really helped a lot. It enables us to get faster hotfixes out to customers. It also reduces our batch sizes into something smaller, which is definitely something that we want.

Culture and Behavior - Excitement & Fun

On the culture side, it's really been fun. Teams don't want to go back to the old way of working. We hear a lot this notion of, it's working. That's been really fun. I think lots of teams are trying to get into the pilot. Luckily, they'll have an opportunity next year. It's really a culture of collaboration on this project. It's a no-blame culture. We work together to find a problem and fix it. I think that's been really healthy for the company.

Community & Sharing

Shoup: In terms of community and sharing, one of the great and completely unexpected second order effects is that it's not just my platform and infrastructure team that's doing the automation, it's the individual teams automating their own processes. For example, one of the teams built a little PR reminder tool: "This PR has been waiting for two days, make sure that you do your code review." Automation of end-to-end performance testing, which was mostly run manually, like pressing a button manually. Now that's automated. Lots of effort on improving the accessibility testing, localization testing. Lots of teams across eBay are working on automating their own workflows to make their lives better. As part of that team of teams meeting that we do weekly, we also do these regular team demos. Every time a team does some interesting thing, improves build time, or automates one of these processes, we schedule a little demo, and we invite everybody. It's a great sharing mechanism, but also a great encouragement mechanism for teams to do this work, and a great way to share all those things with one another.
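The PR reminder tool Randy mentions can be reduced to one small filter: find open PRs older than the review threshold. The sketch below shows that core logic; the PR record shape and two-day threshold are assumptions for illustration (a real tool would pull the records from the GitHub API and post the reminder to chat).

```python
# Toy version of a stale-PR reminder: flag open PRs waiting too long for review.
from datetime import datetime, timedelta

def stale_prs(prs, now, max_age=timedelta(days=2)):
    """Return the PRs that have been open longer than max_age."""
    return [pr for pr in prs if now - pr["opened_at"] > max_age]

prs = [  # hypothetical open-PR records
    {"number": 101, "opened_at": datetime(2022, 3, 1, 9)},  # 3 days old: stale
    {"number": 102, "opened_at": datetime(2022, 3, 3, 9)},  # 1 day old: fine
]
now = datetime(2022, 3, 4, 9)
for pr in stale_prs(prs, now):
    print(f"PR #{pr['number']} has been waiting for review, please take a look")
```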

Executive Support & Engagement

In terms of executive support and engagement, this is absolutely critical for a transformation like this. One of the things that we're very lucky about is that our relatively new CEO, Jamie Iannone, constantly highlights this particular effort at the all-hands, at executive meetings. He keeps talking about how it's the most important initiative at the company. No pressure for Mark and Randy. Also, he wants us to go faster, which is great. I love the fact that our executive team is seeing the value of this work and wanting to invest even more in it going forward.

Current Challenges

Let's talk a little bit about some of the challenges, because it hasn't all been roses and unicorns. One side effect, almost an intentional side effect, of the fact that we took this very cross-sectional approach, one or two teams in each of these individual areas across all of eBay, is that we've been able to improve the outputs for those individual teams. We've improved those individual teams' productivity, and we see that in the numbers. But because we haven't done end-to-end work on any particular flow, we haven't improved overall eBay outcomes. That's something that we're going to be working on next year as we expand the program.

In terms of challenges for the initiative team, as with all such things, under-resourced platform teams are a bottleneck. That's something that I'm trying to work on in my area. In Mark's area of the world, it's making sure that teams, when they commit resources to doing this work, actually do it and don't get distracted by legitimate requests to do feature work: improving the daily work over just doing the daily work. This was an interesting finding. Often, we found that in the product teams, these efforts would be led by quality engineering and not development. There's nothing wrong with the quality engineers. They're great. They live in these pipelines. But when we started to shift development-side behavior, in terms of trunk-based development or doing PRs, there was a little bit of a mismatch there. Then also overtaxed individual leaders, which is just another way of saying Mark and Randy are pretty busy.

Weinberg: We do have some challenges where people focus too much on the metrics. eBay is a very metric focused company, which is mostly very good. Our job is to make our engineering process faster. We do have a lot of people that are just hyper-focused on the metrics rather than on actually making ourselves faster. There's definitely a fear of failure. We hear comments like, going too fast caused this bug, or did it cause this bug? Really, what we're finding is that our quality is actually going up, because we're applying some of these principles. Then just lastly, lack of belief in the program. There are still people that you have to convince. We need to do a bit of evangelism, spend time with folks and really explain the what, and the how, and the why of these things. Then, once we do, people get it and we move forward.

Product Life Cycle: Future Goals

Shoup: I'm going to talk a little bit about what we hope the future to be, in the not too distant future. Again, looking at that product life cycle, what we'd like to see in the planning phase is, rather than big old upfront yearly planning, rolling planning along the way. That we do lots of small, cheap experiments. If and only if one of those experiments is successful, only then do we double down with a big cross-functional company-wide initiative. In terms of development, we want people to be developing in small batch sizes. We want them to have a very fast, tight inner loop of build and test and debug iteration for individual developers. We want them, though, to be doing daily merges and deploys, or even better, and have things go all the way to the site from that. Then we'd like to spend a lot more time focusing on decoupling our architecture so that teams are able to move more independently from each other.

In terms of software delivery, what we want is a fully automated test and deployment pipeline, for every one of the thousands of apps that we have at eBay. The goal is one hour commit to deploy as the elite performers in the Accelerate metrics do. Then we'd like to do a lot more iteration in production by turning on and off feature flags. Then in terms of that iterate phase, that post-launch phase, we want to have a lot more visibility, observability. Both observability of the system, but also of customer behavior, so end-to-end monitoring there. Tracking of customer behavior in a privacy safe way, but just so that we understand what the customers are doing so we can improve it. Again, many small, cheap experiments that we iterate on, and that we get rapid feedback on those results.
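The "iteration in production by turning on and off feature flags" goal can be illustrated with a minimal percentage-based rollout check. This is a generic sketch, not eBay's flagging system: the `FLAGS` dict and flag names are hypothetical, and a real setup would read flag state from a dynamic configuration service so flags can flip without a deploy.

```python
# Minimal percentage-rollout feature flag: hash each user into a stable
# bucket from 0-99 and enable the flag if the bucket falls under the
# rollout percentage. Hashing keeps a given user's answer consistent.
import hashlib

FLAGS = {"new_checkout": 25}  # hypothetical flag, on for 25% of users

def is_enabled(flag: str, user_id: str) -> bool:
    rollout = FLAGS.get(flag, 0)  # unknown flags default to off
    digest = hashlib.sha256(f"{flag}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < rollout
```

Because the bucket is derived from the flag name plus the user ID, ramping `new_checkout` from 25 to 50 keeps the original 25% enabled and only adds users, which is what makes gradual rollout and instant rollback safe.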

Questions and Answers

Reisz: The first thing you said is that you instrument metrics from Accelerate, the DORA metrics, so that we could actually see what was going on. I have heard sometimes that this is the first time people have heard about DORA. Can you talk about how you put that in practice?

Shoup: When we say DORA metrics, Accelerate metrics, it's the same thing. Accelerate is the name of the book, DORA is the name of the organization, DevOps Research and Assessment. The metrics are deployment frequency, lead time for change, change failure rate, and then mean time to recover. How do we measure it? It turns out, fortunately, we already had all the detail for those measurements; we were measuring those things already, but not together in this form. When I arrived, my team showed me a pretty complicated derived metric that put all those together with exponential decay, and all this stuff. I'm like, that's too complicated. What if we just keep the four separate? We have had a dashboard that looks at a bunch of different things where we've been able to see aspects of development health and production health. We've had that for a long time. Then we just added these things very explicitly as these four.

Reisz: Specifically, is it just Jira output? Is it just Git metric? Where are you getting the data? What's the source in real-time?

Shoup: Deployment frequency. We have our deployment tooling, and so that tooling knows when it starts. That tooling knows how many deployments we're doing, and it knows what team is doing it, so the deployment frequency is relatively straightforward. We use GitHub. The start is the GitHub part and the end is the deployment part for the lead time. It's lead time commit to deploy. You can measure lots of things. The Accelerate metric is specifically when the developer checks in her code, she's ready to go net of whatever code review process, testing, pipeline, all the way to the deployment.
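The measurement Randy describes is just the elapsed time between two events the tooling already records: the developer's commit (from GitHub) and the end of the deployment (from the deploy tooling). A sketch, with illustrative timestamps:

```python
# Commit-to-deploy lead time: clock starts when the developer checks in
# her code and stops when the change lands on the site, covering code
# review, testing, and the whole pipeline in between.
from datetime import datetime

def lead_time_hours(committed_at: datetime, deployed_at: datetime) -> float:
    return (deployed_at - committed_at).total_seconds() / 3600

committed = datetime(2022, 3, 1, 9, 0)    # commit recorded by GitHub
deployed  = datetime(2022, 3, 1, 10, 30)  # deploy finish recorded by tooling
print(lead_time_hours(committed, deployed))  # 1.5
```

Measuring from commit rather than from, say, ticket creation is what makes the metric comparable across teams, since it captures only the delivery pipeline and not upstream planning.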

Reisz: You said I can't say enough about the importance of putting these metrics in place, it gets rid of the debate. You've been in engineering for a while, why these four metrics?

Weinberg: I think having something with research behind it that correlates improvements in these metrics to improvements in your software delivery performance is so important. Especially for an effort like this; it's a big effort. Rightfully so, we started with 10% of our active applications, and so when we communicate progress to our team, and leaders, and executives, it's hard to see the improvements. We feel them as engineers, but the non-engineering people are always asking, are we getting faster? How do you know we're getting faster? It's a big investment. We want to be able to show that there's a great return on that investment. Having these metrics just gets you out of the debate of, is it working? We have objective data, and research to show, yes, it is working. It's so important, because I've done this in other organizations without these metrics. It's really hard to prove, especially early on, that it's working. I think you see a lot of times that people lose interest or excitement about an effort if they don't see the results.

Reisz: Yes, it's science. It's hard to refute science. Tom said he's having some of these exact same conversations, as I'm sure you have, at his organization, now 30 engineers. People are freaking out about the term optimized for speed. How did you really get buy-in from leadership?

Shoup: The honest answer is it's a combination of those facts. I am trusted by my leadership, Mark is trusted by his leadership, that leadership trusts one another. That helps, because we come in with a little bit of credibility that we say things that are true. Also, again, there's no substitute for the actual research. Nicole Forsgren who wrote the Accelerate book, and did all the data science for years of The State of DevOps Report, is like, again, it's science. Just simply saying, you can trust us. We're going to do this. Here's why. Those same slides that we showed you all, we show exactly those slides, literally, to our executive team. We say, here are the metrics that are industry standard, and here is why moving them matters.

Sure they do. They have this sense, and it's wrong by the science, but it's intuitive: you'd think that there's this tradeoff between speed and stability. If I go faster, I'm clearly reducing the quality. Turns out that's not true. It is counterintuitive. Part of it is like, trust us and you'll see it. We saw it.

Reisz: As you start describing, Randy's got platform. He's the architect. You've got the product features. You got this vertical-horizontal relationship. The thing that comes to mind when you're showing that is Conway's Law. How is it that these things don't start running into each other? How is it that you're able to work together and not create systems based on two different communication systems?

Weinberg: It starts with Randy and I. We met a year ago, and we just hit it off. We think alike. We say the same things. We finish each other's sentences. There's no effort to do that. We just think alike. Immediately, teams started seeing that, and feeling like, this is a highly collaborative effort. We're going to work together as a team to actually make this happen. This alignment between platform and more application oriented, there is a natural alignment for these things. My teams depend on all the work that Randy's teams do. We need these tools and infrastructure to actually make this work. I think that people understand that. We also made an explicit effort to bring people together. We have a weekly scrum meeting, that brings the two teams together and we solve problems in that meeting together. People leave that meeting and go work on these problems together. It really feels like this is more like one team than separate org structures that end up in a Conway's Law situation.

Shoup: Open and honest, this is different than how it used to be. There was a tension between the product side of the organization and the platform and infrastructure side of the organization. What we're doing is breaking it down. It hasn't always been like this. We're explicitly making it like this. Again, part of it is this collaborative relationship where we share goals, and we work together. The goal of the company is for the product team to make the customer experience better. What the platform and infrastructure teams offer is horizontal improvements. As we mentioned, we're going to be responsible for, how are we going to make build times better? How are we going to make the CI and CD pipelines better? How are we going to make startup time better? How are we going to do capabilities like canary deployment and traffic mirroring? That naturally uses the horizontalness of my organization. We piloted with a bunch of teams in Mark's organization, many of them, and said, what are your bottlenecks? Tell us. Almost immediately there was commonality among all those, and like, we'll put horizontal forces toward solving that. It's like weaving it together not in conflict.

Weinberg: I think that there were a lot of silent sufferers, in at least my organization, in the past that just lived with these problems. When I first started at eBay, I synced one component of the code and did a build myself, because I just wanted to get used to the tools. It took over an hour to do a build. I started asking questions. I'm like, do you live with this? Is this ok? It was just crazy to me. People just accepted it. They weren't talking across teams. Randy and I just got in there, and we just asked these questions, like, this is not ok. We've got to fix this. Who's responsible for this? We connected people, followed up, and held people accountable for actually fixing some of these problems they'd been living with for years.

Reisz: Someone asked about how you identified these problems. Randy said, you ask them. I think he also talked about value stream mapping. Talk a little bit more about that, how did you get this list? You've listed 20 things that were issues, how did you really identify those?

Weinberg: It's not rocket science. We literally asked the leaders of these teams, what was slowing them down, because it varied from team to team. That's part of the benefit of the program is it's not every team having exactly the same problems. As we work with individual teams, we find one problem that does apply to some other teams, not all. Then we work with another team, and they have a problem that applies to some of the other teams. You end up with this toolbox, this philosophy toolbox, we call it, that now everyone in the company gets to benefit from one team highlighting, "I have this problem." Where you might have had another team that actually had the problem, but they just didn't do anything about it, didn't think there was a solution to it. You literally ask the experts that are dealing with this every day, what is their problem? Obviously, Randy and I have our own experience of what's ok, and what's really not ok. Here, there's a lot of people, I think, that had been living with these things and feeling like it wasn't a problem until we came in and said, "This is a problem. We're not going to do this."

Shoup: Just to riff on that. Two things. Number one, it was so powerful to ask individual teams, "If I told you today you had to move to daily deployments with, whatever, one day lead time or two day lead time, what are all the reasons why you can't do that?" We didn't ask that in an accusing way. We said, tell us. Every team is like, "What, are you joking? Here are all the 20 reasons." They'd never had people respond, "We're already working on this one and this one, and that's a new one," and such and such. Then you develop this trust because we're actually knocking a bunch of those things down. Nobody had asked them these questions before. It's a motivator for the team to see that when you say a thing is broken, people will actually jump on it and fix it. There were a bunch of things where my teams were like, yes, we knew that. There were a bunch of things where we were like, "Never thought of that."

Reisz: Mark said, don't underestimate the developer experience of improving the build process. Absolutely.

 


 

Recorded at:

Mar 17, 2022
