The AI Joy Gap: Why Some Developers Thrive While Others Struggle

In this podcast, Shane Hastie, Lead Editor for Culture & Methods, spoke to Michael Parker, VP of Engineering at TurinTech AI, about bringing joy back to software development in the AI era, the emerging role of "factory architects" who orchestrate AI agents rather than write code directly, and the cultural divide between AI hype and the reality developers face on legacy codebases.

Key Takeaways

  • AI is creating a polarization in developer experience - those on greenfield projects see massive productivity gains while those on legacy codebases struggle with AI-generated code that doesn't fit their context.
  • Developers are becoming "factory architects" who design and orchestrate AI agents and rules rather than writing code directly, requiring a new mindset and skill set.
  • Some high-performing teams are returning to mob programming because team synchronization has become more important than individual code generation speed.
  • There is a growing cultural disconnect between engineering leadership who believe the AI hype and developers on the ground who face real limitations with their codebases and tooling.
  • As engineering productivity increases, the bottleneck shifts to product discovery — organisations will need faster decision-making processes and more product managers and researchers.

Transcript

Shane Hastie: This is Shane Hastie for the InfoQ Engineering Culture Podcast. Today, I'm sitting down half the world away with Mike Parker. Mike, welcome. Thanks for taking the time to talk to us.

Michael Parker: Hi, Shane.

Shane Hastie: My normal starting point in these conversations is, who's Mike?

Introductions [01:12]

Michael Parker: Yes, so I'm the VP of engineering here at TurinTech AI. So, we're a London-based cloud development platform company. We're trying to build a system to bring the joy back into development, and I've been doing that for the last year. Before that, I was at Docker for seven years. So, building out Docker Hub, Docker Desktop, Docker Scout, lots of broad scale developer tools. I'm very interested in developer tools. Before that, I was a backend engineer interested in test-driven development and clean code and all that sort of fun stuff before moving into management. And even before that, I was in game design.

So, I started off in games as a young boy, very keen on building those kinds of things. But now today, I'm very interested in the future of AI tooling and accelerating developers and bringing that joy back into development.

Where Has the Joy Gone in Development? [02:04]

Shane Hastie: So, let's dig into that. Joy back into development. Where has the joy gone? What's happened?

Michael Parker: Yes, it's a really interesting question, and it really depends who you talk to. Some developers are having even more fun than they've ever had, especially if they're working on greenfield code bases, shipping small products, small services, where AI is supercharging their productivity. But there's also a lot of developers that are working on legacy code bases in large enterprises. They might have a lot of in-house libraries. They might have ways of working that AI is not necessarily trained on. And getting AI to output code in those environments is very difficult.

And we're almost seeing a polarization of opinion in this space because software development is so varied and a lot of people tend to talk about software development as if it's just one space and it's not. And so, you've got people on the greenfield side saying, "Wow, AI is amazing. You can get 10,000 times productivity increase". And then you've got people on very complicated legacy code bases, locked to old technologies. The AI is just churning out garbage. And these people are not talking enough. I want people to talk more and understand that there's different code bases, there's different companies.

So, I'm mostly focused on the people that are suffering. These code bases where AI doesn't work, how do we make it work better? And what a lot of these developers are finding is that AI has taken over the fun part. And the fun part is a lot of the time deciding on the structure of the code, the architecture, getting things right. And instead we're left with code review. No one likes doing code review. And so, we're sitting there, AI generates some code, it's not right. We review it for ages. We tell it, no, you've done this wrong, you've done that wrong. We write rules files; it doesn't listen to the rules.

We try a different model. We try a different package, and then we have to go through all this pain. And often people say it would've been quicker to do it myself and it would've been higher quality. And you're stuck like, "Do I use the new tools?" And especially when your engineering leaders are pushing these things down, they say, everyone has to use AI. And then the developers on the ground are struggling because they can't teach these things to do a good job. And so, I think that's where a lot of the joy has gone.

Making AI Work for Enterprise Developers [04:29]

Shane Hastie: So, for the engineer in the large enterprise, that joy is being sucked out, and you're right, code review is the last thing we actually enjoy doing as developers. We want the thrill of the creativity and so forth. How do we make this work?

Michael Parker: I think there's two aspects to this. I think that we need better tools to help AI produce the right code first time round. A lot of the cutting edge AI developers are essentially building their own factory. They're building their own AI agents. A lot of the AI tools today, like Claude Code and Cursor, Copilot, they have huge configurability. You can use subagents, you can use rules, and your rules can set up more subagents, and you can have MCP servers, and you can do prompt engineering. It's kind of a new role. You're not writing the code anymore. You're writing the factory to write the code.
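The kind of "factory" configuration Parker describes can be sketched as a project rules file of the sort tools like Claude Code and Cursor read. This is a hypothetical illustration; the file name, conventions, and agent names are invented for the example, not TurinTech's actual setup:

```markdown
# rules.md — hypothetical instruction file for an AI coding agent

## Planning
- Before writing any code, output a plan: files to change, new dependencies, test strategy.
- Wait for explicit approval of the plan before generating code.

## Code conventions
- Use the in-house HTTP client wrapper, not raw library calls.
- Target .NET 8; do not introduce APIs from newer framework versions.

## Subagents
- `reviewer`: runs after generation and checks the diff against these rules.
- `test-writer`: adds unit tests for every new public method.
```

In practice, teams version a file like this in the repository, so that every developer's agent picks up the same rules rather than each person maintaining a private setup.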

And there's an emerging class of developers who have become expert in building their local factory, their own agents, their own rules, et cetera. But very often these things aren't widely shared amongst their team. And so, you have this huge imbalance between the AI experts and everybody else. And then how do you spread that joy to the team? How do you get them upskilling everybody else rather than having their own sort of personal productivity factory? But I'm not convinced that planning more ahead of time before it runs off and writes code is enough.

There's definitely more that we could do there, out of the box, to provide tools so that you really check what AI is going to do before it does it. And that's helped me immensely. I'm sort of on this side of building the factory, right? So, I've got my own home setup and I'm tweaking my rules and my agents, and I'm very clear: don't do anything. Let's decide. Tell me what you're going to do first. And so, I think that's one aspect of it. It's really nailing the requirements, the technologies, the structure before you write any code. So, better planning tools is number one.

But also when it's finished, I think we need better code review tooling. Our code review tooling isn't really designed for the quantity of code that AI is churning out. It's very hard to say, why did you do this? Why did you do that? Can you move that over here? The whole review interface, I think, needs an update for this AI era. And I think that's true across the board for the full SDLC, by the way. A lot of these things need completely redesigning, including the IDE. But then after code review, even after your code is merged, today's code is going to be out of date tomorrow.

There's always going to be maintenance work and you probably have lots of legacy code. So, we need some systems to keep this code maintained over time. Is it going to be the developers that are doing that? And I speak to a lot of developers, they have a huge amount of maintenance work to do, and this is really boring work. No one wants to upgrade your Python version or your .NET framework or, "Oh, this thing has been deprecated. Oh, I need an extra parameter on this field". And LLMs aren't necessarily trained on every single version of every library.

And if something came out like yesterday, like .NET 10 was released recently and many LLMs are still like, ".NET 10 doesn't exist". I'm like, "Yes, it came out. I need to switch to .NET 10". So, I think at the end, we need better maintenance agents and tooling to help take away that burden from developers so they can focus on the creativity and the problem solving and building business value and getting back to what I consider the fun part of development and not so much upgrading framework versions or doing code review.

The Pull Request Problem in the AI Era [08:19]

Shane Hastie: One of the things that we hear a lot is that as we generate code using the LLMs, the pull requests are getting bigger, the quantity of code. Now we've spent decades trying to get to smaller, tighter pieces, microservices, domain-driven design, make it small, make it tight. Have we broken that?

Michael Parker: Yes, I think we have. And it's an interesting question. Would you rather have one pull request with a hundred changes or a hundred pull requests with one change? When I was writing code, not so much anymore, but I would always open lots of small pull requests and even small refactorings. If I was doing a bug fix and I knew there was some refactoring to do, I would roll back, I would do the refactoring, I would open a pull request, then I would do the bug fix, open a second pull request. And so, a lot of my pull requests were like 10 lines, 50 lines, very easy to review.

And if the test failed, it was obvious exactly what went wrong. Now, does that scale in the world of AI? Maybe. I don't know. It's clear from history, no one's going to review 10,000 line pull requests. Are they going to review 10,000 pull requests? Well, I don't know either. This is very difficult. I think at some point we're going to have to stop reviewing code. I think that's ultimately what's going to happen, but for many companies, they're not ready for that. They're not ready yet. AI is making too many mistakes. It needs too much reviewing. People's factories aren't mature enough.

And we're in this awkward space where some companies and teams, they don't need review, or at least they don't think they need review. And other companies are very much still, you are responsible for every line of code that you write, make sure it's good before I review it. And yes, we're in that transition period and it's going to be ugly for a while, I think. But imagine we could get to the space where we don't have to review code anymore. That would be magical, wouldn't it?

Trust in AI-Generated Code [10:26]

Shane Hastie: Yes. Now we touch on a topic that I know you and I touched on before we started recording: trust. How do I trust this code that has been produced, human or AI or some combination thereof?

Michael Parker: Yes, I guess maybe we get philosophical. What does trust really mean? I come from a world of continuous delivery where we merge to production all the time, like multiple times a day, and I'm a big believer in that. Merging and deploying very small changes very quickly. To do that, you need fantastic monitoring systems. So, if something breaks, you know about it immediately. And so, I think you need that anyway. And whether it's a human that breaks it or an AI that breaks it, I don't think it matters. You need to be able to respond to those things, roll it back. I mean, this isn't true for every company, right?

You can't afford to do that if you're building a space rocket. For a lot of people, if they're just building an e-commerce website or some server online, AWS goes down often enough, right? Nothing's perfect. It's fine to merge and deploy some small bugs and then we'll fix it as we go. But I think the bigger question is, if people are submitting code that is not up to standard, then what do teams do about that? And I fall back to engineers being responsible for the code that they submit ultimately. If they want to submit a pull request that's full of bugs, that's not good, right? And they can't just blame AI.

So, I think we have to take some responsibility for the code we submit. And I'm largely happy for them to fix that in any way they like. They can write a better factory, they can get some review tooling, either AI or not. They can split up their pull requests, they can write more tests, they can do some pair programming, they can get some draft code review ahead of time. There's lots of different ways we can improve code quality and a lot of these traditional methods still apply, I think. I guess it depends really what we mean by trust and if there's a wider issue around trust and AI.

From Artisan to Factory Architect [12:43]

Shane Hastie: Coming back to writing the factory, building the factory, I didn't go into software engineering to be a factory worker. I'm an artisan, I'm a craftsperson. How do we bridge that?

Michael Parker: Yes. I think the question is, can you fall in love with crafting the factory? I wouldn't describe a lot of these people as factory workers. I would say they're more factory architects, factory managers. And I think there is joy to be had orchestrating these different agents and systems, but it's one step removed from the ultimate customer that you're trying to serve, right? If the customer is trying to buy your product on an e-commerce site, for example, and you're busy making your agent output better formatted code, you're one sort of abstraction away from the customer. And historically we've been taught that's a bad thing.

We want product-facing teams. The whole agile process was about connecting customers and stakeholders much more tightly with engineering teams. And so, pulling these people away and focusing on the factory could be seen as a step in the wrong direction because they'll lose sight of what we're trying to achieve for the business. But I don't think everyone needs to be a factory architect. I don't think this world is very efficient if we are all building our own factories.

I think we could see an emergence of an AI platform team essentially that are basically building developer tools for engineering organizations and they're rolling out agents and rules and structures, so that everybody else can focus on delivering value.

The AI Platform Team [14:26]

Shane Hastie: So, this is the platform team on steroids?

Michael Parker: I mean, it's a little bit different, because most platform teams work in the cloud, while a lot of these factories are being run locally. So, I do think we need a stronger tool set for rolling out local factory configurations. I brainstormed a bit of this when I was at Docker, because Docker containers and images are very interesting as a way of putting out developer tools, and you've got a sandbox environment. So, agents aren't going to accidentally delete your hard drive and all this fun stuff that you see happening. But I think there's a gap in the market and in our tooling space there. How do we roll out these tools?

If I write a new rule across all of my projects, how do I give that to you in a seamless way? How do you log on and just have a new set of agents and rules and everything's been upgraded and maybe I can give you a different model or I can use a different model for planning versus coding. I've got these review agents that you can run. All this works quite well in the cloud. And so, I think there's another debate to be had about how much of this should happen in the cloud and how much should happen locally because you don't necessarily want a hundred agents running on your laptop all the time.

And there's another argument to be made: you want these agents to be working overnight when you're asleep, after you turn your laptop off. But local IDEs have been very sticky. Lots of people have tried to build IDEs in the cloud and some people have moved, but lots of people still love their local environment. Even though you have to install a hundred different tools and things don't work all the time and you've got your environment and you've run out of space and you don't have enough RAM, people still love their local environment. So, it's going to be very interesting to see where this future goes.

Do we have some sort of hybrid methodology where some of these things are running in the cloud, some of the things are running locally? Where does this factory live and what are the interface points?

How AI Is Changing Team Composition [16:21]

Shane Hastie: What's the makeup of the team that is integrating AI today? What's different about teams?

Michael Parker: I think we are seeing a blurring of roles a lot when it comes to product management, design, front end, backend. AI gives this ability for anybody to become not an expert, but knowledgeable enough that they can have the conversation. I did a bunch of game design, but I've not done training on UI design. But if I feel like a UI is off, like the design, I can jump into AI and just have a quick conversation and say, "What are the industry best practices for an input form?" Or like, "I want people to sign up to my wait list. How many fields should it have? Should I ask them for their job title, and how's that going to affect people signing up?"

And I can immediately get world-class advice. I mean, if it doesn't hallucinate and it doesn't lie to me, right? Hopefully, it doesn't too much. But that means that everyone can now start participating in conversations, which I think is really interesting. And tools like Lovable, for example, allow product managers to very quickly prototype engineering solutions. So, a product manager can think of a feature and very quickly sketch something out that people can actually click on. And I think this actually brings product management and engineering closer together in some ways because they'll start noticing the edge cases.

So, engineering and product management in my history, it's always been like, "Hey, can you just add this button?" It's like five minutes, right? And engineering's like, "That'll take us a month". And it's like, "What? Why is that going to take a month? Well, have you thought about this and this and this and this?" And it's like, "I don't want to think about this. I'm focused on the customer. The customer has this problem. Please solve it". The danger, of course, is that you build a prototype and you think it's the finished product and it's like, well, I did it in five minutes. Why are you guys taking a week?

Blurring Roles and the Dunning-Kruger Effect [18:19]

So, there's this Dunning-Kruger effect that we are also seeing where it's like, "Oh, I can code because I've typed something into Lovable". So, I think there's two sides to the coin on that.

Shane Hastie: So, roles blurring, AI tooling becoming part of that. What else is changing in the team environment?

Michael Parker: I guess the role of full stack developers versus a pure backend/front-end developer split is an interesting one. Throughout my career, I've seen both approaches. Before I went into management, I was very much a backend engineer, interested in infrastructure and DevOps and scaling things, really good microservice boundaries and API specs and all that fun stuff. But now we have this ability to churn out code under the right rules, and I think it's becoming more important to make sure our backend is extremely strong. And then that lets you, I guess, vibe code some of the UI elements quickly. And all the UI engineers are going to hate me for saying this, right? So, sorry about that.

But I do think also there's an enhanced need for setting up the framework to allow these tools to work. So, at TurinTech, we've got a heavy focus on what patterns we're going to use, what libraries we're going to use, and we choose these things based on what LLMs are trained on, what they're going to be good at. And so, we'll set out with a framework in mind and a code style, and we'll see if AI can follow that style. And then we feed that feedback back into the loop, right? Is AI good at following these rules? And either we change the rules or we try to change the AI. But I think these things do need to work in tandem.

You don't want to be fighting against the training. We haven't seen the end of this. Do we have full stack engineers? Do we have backend? Do we have these factory architects in teams? We're also seeing emergence of researchers and data science engineers inside teams. The other thing to consider is if engineering productivity skyrockets, where is the bottleneck? And then I think the bottleneck becomes your understanding of the customer problem. It used to be the case that you could figure out what customer problem to solve, and then you can spend nine months fixing it.

If that nine months becomes one month, your discovery process needs a complete overhaul. You can't just talk to developers about the same problem for nine months. You've got to make decisions in hours and days, not weeks and months. So, I think there's a knock-on to how you talk to customers, how you collect data, how you make decisions. And then also it shortens the loop for feedback. Engineering has always been the most expensive way of learning. Product management books always talk about stop building things to learn, right? Just draw a diagram on a piece of paper and give it to the customer and say, "Is that what you want? Would that help?" Right?

And then you can get more in that 10 minutes than building the thing for two months. But if that flips on its head and you can build a product in a day or at least a prototype in a day and give them the prototype, that changes discovery. If the bottleneck is product discovery, you're going to need more product managers, more researchers, more designers, and engineering shrinks as a percentage of your workforce, I guess, at that point.

Culture Shifts in Engineering Teams [21:53]

Shane Hastie: This is the engineering culture podcast. What are the culture shifts, the teamwork shifts that are happening with these changes?

Michael Parker: Yes. So, there's some good things and bad things, I think. One of the bad things is that people are becoming a bit more isolated in engineering teams. Everyone has their own setup. Everyone's using AI in different ways. And some teams are handling this really well. I've actually seen the reemergence of mob programming on some high performing teams where because the code is so fast to create, team synchronization is actually becoming more important than the speed at which you generate code. So, I was talking to a development manager a couple weeks ago. They do all of their code on one computer with five people.

They basically live their life in a meeting room discussing the problem, the structure, and then they type in the exact prompt and the plan, and then AI writes the code, and then they all review it, they discuss it, which I love. That brings warmth to my heart. People working together again and talking, having fun. That's how development should be, I think. We all went into this profession because we like computers more than people, but we do need human connection as well. So, I think getting people in a room and working these things through together is a great way of working. But I think lots of teams are not doing that.

A lot of teams are still stuck on their own computers and we're kind of seeing this combination of fear, denial, bargaining, grief. Some people are worried that they're not keeping up with the latest tools and they're maybe embarrassed to ask questions about why AI is going wrong for them or how do I get it working? So, they're more shy about working together and setting up their environment and they see some phenomenal gains from other people and they're worried that they won't compete. And this drives people sort of inwards. So, you're just stuck on your own environment, you're trying to make sense of it, you're trying to read things.

And so, I think we need to break out of that and we need to connect people more. That's the two ends of the spectrum for team culture. I guess the other thing is how leadership communicates and approaches adoption of AI tooling. And we're seeing various takes on this. You've probably seen the extreme end of the spectrum where people are like, "You're going to be fired unless you use AI". Everyone has to go all in, spend a month, all you're doing is learning AI. And that produces a lot of fear and worry from people, right? And this is what these messages are intended to do.

It's really kicking people and saying, "You can't just ignore this, you have to learn". But at the same time, the people on the ground are seeing a lot of the problems that you might not read about on LinkedIn or in podcasts. And so, I think there's a growing disconnect between the level of hype believed by some of engineering leadership and the reality on the ground. And that's breeding distrust from both ends, right? Leadership think, "God, my developers are so slow, they're not adopting AI. I read on LinkedIn that I can do everything in 30 minutes. Why is this taking a week?"

And at the other end of the spectrum, engineering is like, "Have they used AI? Look at our code base. You can't just build the whole thing in JavaScript and Python. I've got .NET 2 code. It doesn't work". So, I think that's a cultural divide we need to bridge as well.

What Should Our Industry Look Like in Five Years? [25:19]

Shane Hastie: What's the important question I haven't asked you today?

Michael Parker: I guess the important question for us all is, what do we want our industry to look like in five to 10 years? And maybe you and I can't solve that here and now, but I do think it's important as an industry to see where this is going and agree if we want this or not. And maybe we don't have a choice, but I do think it will end up in a better place. I do think it's possible to build AI systems that help us return to joyful development, offload all of the boring and the mundane work, primarily running in the cloud autonomously, being able to bring teams together.

How do we iteratively get there I think is difficult and everyone's trying to make progress in these areas, but I do think we need to agree what we want our job to be in five years' time. Is our career going to go away? Is it going to change? Are we okay with it? And I guess if not, what do we do instead?

Shane Hastie: Some pretty deep and thought-provoking questions there. Mike, thanks so much for taking the time to talk to us today. If people want to continue the conversation, where can they find you?

Michael Parker: Yes, I've loved this conversation. If anyone wants to talk to me about any of these topics, I'm always interested to hear what people think. You can reach out to me on LinkedIn, search for Michaelparkerdev@TurinTech, and I'm happy to hear from you.

Shane Hastie: Cool. Thank you so much.

Michael Parker: Thank you.
