In this end-of-year panel, the InfoQ podcast hosts reflect on AI’s impact on software delivery, the growing importance of sociotechnical systems, evolving cloud realities, and what 2026 may bring.
Key Takeaways
- AI is now reshaping how software gets built. We've moved from "AI is impressive" to "AI is changing workflows": agentic systems, MCP and Agent2Agent-style interoperability, and AI becoming more like a team member (task prioritization, calendar/workflow integration, co-creation of software).
- AI accelerates everything, including (potentially) your org’s existing dysfunction. Strong teams with good engineering practices get better/faster, while weaker teams get more chaotic. AI doesn’t magically fix mediocre DX, culture, or delivery pipelines: it amplifies friction points.
- Managing complexity is the architect’s core job, and AI raises the stakes. There’s explicit resistance to the idea that "AI can handle complexity so humans don’t need clean boundaries". Instead: separation of concerns, DDD, smaller components, and clear intent matter even more when AI is generating (and accelerating) change.
- Alongside excitement, there’s concern about AI: ethical focus slipping, sustainability being treated as cost-only, burnout/"996" pressures, and juniors’ development in an AI-saturated workplace. Team trust is highlighted as fragile if orgs treat people as interchangeable because "the AI can do it".
- Platforms, cloud, and reliability are back in focus, and "global by default" is fading, as seen in outages and multi-region reality checks, as well as Europe’s growing sovereignty/regional concerns. On platform engineering specifically, there’s a "trough of disillusionment" vibe, plus a renewed push toward better abstraction/composition layers and treating platforms as a product.
Transcript
Introductions [00:34]
Daniel Bryant: Hello, and welcome to the InfoQ Podcast. My name is Daniel Bryant, one of the co-hosts here. It's that time of the year we like to look back at our most popular technologies, topics, trends, and techniques that we've covered in the various trend reports, news items, articles, and podcasts. And to do that, I've got a fantastic panel of folks, leaders within the InfoQ space. We couldn't fit them all on one podcast, dare I say it. There's many other leaders we're trying to summarize their views as well, but I've got with me fellow podcast co-hosts and folks that organize the InfoQ events. So, without further ado, I'll go around the room to get everyone to do an introduction, and then I'll introduce myself. So, Renato, do you want to go first?
Renato Losio: Thank you, Daniel. I'm so happy to be back. My intro is actually almost the same as last year. I'm a cloud architect at Cloudiamo, based in Berlin, Germany. I'm an editor for the Cloud and DevOps queue, and probably my highlight as an InfoQ editor has been running the InfoQ Dev Summit in Munich for the second year in a row. So, probably the same interests as last year, but just as excited as last year, even more so.
Daniel Bryant: Thanks, Renato. Srini, how about you?
Srini Penchikala: Hello, everyone. I am Srini Penchikala. In my day job, I work as an enterprise architect. For the InfoQ community, I serve as the lead editor for the AI, ML, and Data Engineering Group. I also co-host a podcast in the same space, and I'm currently serving as a program committee member for the QCon London 2026 conference, which I'm really looking forward to. 2025 has been another stellar year for AI technologies, and we continue to see the next phase of AI adoption in terms of technologies, language model innovations, and more and more integration into real-world business use cases. We will talk more about this later in the podcast. I want to quickly mention the AI, ML Trends Podcast, which the editorial team recorded back in September. So, that will be the main reference I'll be going back to a few times in this podcast. Definitely, there is a lot to look forward to. Thank you.
Daniel Bryant: Cheers, Srini. And Thomas.
Thomas Betts: Thomas Betts. Still working at Blackbaud. Been doing a lot of QCon stuff; I was one of the program committee members for QCon. By the time this publishes, this is my big news: I'm accepting a new position. I'll be the architect for our new Agents for Good lineup, creating AI agents for the social good community. So, looking forward to that starting January of 2026.
Daniel Bryant: Amazing. Kudos. Shane, over to you.
Shane Hastie: I'm Shane Hastie, lead editor for Culture & Methods, host of the Engineering Culture Podcast. Still with Skills Development Group as my day job. Still deeply entrenched and involved in the human side of sociotechnical systems, the human systems. And Thomas, I'm going to get you on the podcast to talk about AI for good.
Thomas Betts: I'm looking forward to it. Give me a month or two to settle in and figure out what I'm going to be doing and then I'll definitely take you up on that.
Daniel Bryant: Amazing. Amazing. And I'm still at Syntasso, working on platform engineering. Definitely here on the sociotechnical side, Shane, we're doing a lot of stuff with that. We are building technical systems, but working a lot more these days on the social side of platform building: bringing folks in, getting them to understand the organization, and mapping that into the platform. It's a lot of fun working in that space at the moment. With AI in the mix, everyone's using their platforms as sort of the jumping-off point, so I'm sure we'll mention those two letters a few times in the podcast. We'll try and keep it AI-friendly, AI-neutral as well. Now for something that's always fun: at the end of every year's podcast, we make a series of predictions. And this year, Renato's joined us and has got predictions in the can as well.
How did we do with our 2024/2025 predictions from last year’s podcast? [04:12]
I want to quickly look back: how did our predictions from last year do? Did they come true? Are we halfway there? I'm not too sure. Thomas, I want to start with you. What did you mention as a prediction, and how do you think it did?
Thomas Betts: So, I was talking about sustainability. There was a lot of talk about it at conferences, and there was the book about green software by Anne Currie and Sarah Hsu. I interviewed them on the podcast, and it was the thing we were talking about, while also acknowledging that we're in the early stages. I think I wanted to bring more hope that architects are considering the sustainability aspects. The place I am seeing some of that is actually where people are saying you've got all these AI systems, and I'm hearing from customers: how are you making sure you're not having a bad environmental impact? Because the layman knows AI uses a lot of water or electricity or whatever. And so, sure, we want to use your AI stuff, but how are you doing it ethically and responsibly? So, it's kind of interesting: I thought architects were just going to start looking at sustainability as one more factor of how we design our systems, one more ility.
And with the shift to AI, I thought it'd get lost and it's actually sneaking back in. So, I'm half right for different reasons.
Daniel Bryant: I love it. I think we're all going to claim that, I reckon. Thank you, Thomas. That's great. Shane, I'll look in your direction as well. What did you predict, and how did you do?
Shane Hastie: What I predicted was getting past some of the knee-jerk return-to-office mandates. I'd say I've been about half right there as well. Some organizations are still doing that, and in others it's just faded into the background and people work in ways that make sense to them. Sometimes that's together in a physical office, sometimes it's remote. And there's less of those panic-driven mandates around return to office, driven by the return on investment for building space. We're starting to see some of the AI stuff, starting to see the teams and the team structures around AI. One of my predictions for last year, more a question, I think, was that the AI partner has huge potential, with the risk of losing the human critical thinking skills. Some of the studies and reports that are out indicate that that is actually happening, which is a worry.
Yet we're also definitely starting to see some of the benefits. Is it delivering on the promise? Not yet, but it's heading there.
Daniel Bryant: Love it, Shane. Love it. We can dive into more of that later. I'm sure we definitely all want to discuss that. So, yes, kudos, Shane, give us a half right again. I think let's say two for two so far. Renato, what are you thinking?
Renato Losio: Well, if you ask me about my predictions last year, I would say I double-checked: I said they were my predictions for 2025 and beyond. So, I'm still fine.
Daniel Bryant: Half right.
Renato Losio: Let's say they were probably both wrong. I first said that I was expecting Intel to no longer be the default processor on any cloud provider. And yes, I can kind of believe the hype from the re:Invent keynotes and say that most new workloads are now running on Graviton. Probably true, but we are probably still far away from saying that Intel is not the default. The second one I got entirely wrong. I really thought cloud providers in general were moving away from regional endpoints, that developers wanted something more global: don't think about the local region, whatever. Well, the entire political landscape has changed quite significantly. We are actually going in almost the opposite direction here in Europe. The focus is on having entirely separate cloud regions or cloud providers. It's definitely not in the direction of a global one, let's put it this way. So, yes, I didn't get them particularly right.
Daniel Bryant: Srini, how about you?
Srini Penchikala: I mentioned that we have really arrived in AI adoption when we don't have to talk about AI as a thing anymore. And when AI becomes part of the fabric of our lives, obviously that was a bold prediction and we are not there yet, but we are a little bit closer to that long-term goal with the new innovations happening in the AI landscape, such as AI agents and the standard protocols like MCP and Agent2Agent protocol and other developments. These innovations are starting to be used for various functions in a variety of industry verticals. So, similar to other panelists, I am also partly right in my prediction.
Daniel Bryant: I'll also claim to be partially right as well. I said platform engineering was going to head into the trough of disillusionment, if folks know the Gartner model there. And I was also talking about how we'd see better abstractions at the platform engineering, platform building level. I definitely am seeing a lot of folks now struggling with their platforms' day-two story, with upgrades. I was at KubeCon recently, and at QCon San Francisco, same kind of vibe. So, yes, I definitely see some failure stories emerging. We're seeing them at the QCons, people saying, "Hey, I tried this, it didn't work". It's always nice to learn from those stories, and we can evolve as a community towards better things. I think the abstractions have also come along a little bit. There are frameworks like kro, Crossplane, and Kratix creating different levels for composing platforms.
And in particular, kro's getting a lot of attention and a lot of love at AWS re:Invent and in the CNCF community. It's sort of half the battle: it's got a bit of the abstractions, but it hasn't fully got some of the composition stuff. There are lots of great discussions going on around that space. So, I'm going to say I'm partly right. I don't think we as an industry have come together and said, "This is the way we're going to define platforms and platform components", but there are projects emerging that are kind of saying, "This gives one way to do it". So, I think a lot of this is focusing, at the platform level, on managing complexity. You're building these platforms, the components are quite complex; how do you abstract away some of that complexity at the right level? And I think that is a perfect segue.
How are architects managing complexity? [10:13]
Thomas, I want to look in your direction: you were talking about managing complexity and the broadening of the topics that architects need to consider. And I think platforms could even be in that space, but I'd love to know what you were thinking in terms of these things.
Thomas Betts: Yes, I think there was a poll somewhere online months ago, I can't find it, asking for your one-sentence definition of what an architect does. And the one that stuck out to me was they manage complexity, or they try to grapple with it. And that's both explaining it to the people, the socio part of the sociotechnical system, and encapsulating it so we don't have the big ball of mud. And so the complexity doesn't overwhelm us, we have things like domain-driven design or these different separation of concerns patterns to help keep the cognitive load minimal, down to just the thing I can understand right now. That idea of complexity, I think, comes out in a lot of different viewpoints. And just in December, there was an eMag that Eran Stiller helped put together about architecture through different lenses. I think it's the second time we've done this.
And so, it's five different articles that are very different, but all around the theme of needing to look at different ways to solve the problem. Matthew's article about building resilient platforms was one of the pieces in there, talking about platform engineering. I interviewed Andrew Harmel-Law earlier this year about his new book that came out, Facilitating Software Architecture, still sitting here on my desk, and there was an article by people who said, "We implemented what he described": how do you get decentralized decision making happening? How do you get more people making decisions, as opposed to having architecture as a bottleneck? Check out the eMag, because there are a lot of different themes in there about how you as an architect have to look at different problems. And here's where I want everyone else's feedback: that gets to the theme that these are all sociotechnical systems.
It's something we've been talking about for years on InfoQ, and I'm seeing it a little more broadly in the community: "Yes, it's just how we do it. We have to think about the people who are designing the systems, building the systems, and maintaining the systems, and how do we do that?" Things like DDD give us ubiquitous language. And Shane, I'm going to go back to your point about AI as a team member: how does it affect our team dynamics? Is that one new aspect we're going to have to start considering to help manage complexity? I want the AI to participate, but not take over. And there's a thought that, "Oh, AI doesn't need these separations of concerns. It doesn't need to be dumbed down for the human; the AI can handle it". I resist that idea. I think it still helps to have things in human-understandable pieces, because if the complexity is there and we say, "Well, the AI is going to handle it", then we won't know how anything works anymore.
Daniel Bryant: Exactly.
Thomas Betts: That's kind of a broad view of here's complexity in a nutshell, but I like that whole picture.
Shane Hastie: Yes, I'll pick up on that one definitely. AI as a team member, AI as a tool to help, but just abdicating thinking to the AI tools is incredibly risky. One of the studies showed 300% more code being produced, and 400% more bugs at the same time. And the pull requests are getting larger and larger, whereas we've spent decades aiming for much smaller separation of concerns, smaller components. Microservices is a thing. Your AI slop machine will produce a monolith that nobody can understand if you're not really, really careful. So, the way we train, the prompt engineering, the prompt design, and the training of the bots and the agents, that's the skill. It's another level of abstraction, but it's also a different way of thinking.
Renato Losio: You raise a really good point about the lines of code, but also the complexity, because I can think of many successful, even small, projects where I've used agents or some form of AI. And were they successful? They probably helped me. I was faster. I delivered. Fine. Can I think of any of them where the result was less complex? No. So, I kind of accept an extra level of complexity in the code that I deliver, or extra code, as part of being faster. Is that ideal? No. Where do you find a balance? I have no idea. For a small project, I can still manage it. On a larger one, I have no idea if you can really scale that. That's my main problem there.
Thomas Betts: I think it's going to encourage us even more to not use lines of code as a metric of success. That's always been bad, but the first keynote at QCon was Nicole Forsgren talking about how AI is going to amplify everything. She's talking about friction and that any of your friction points in your developer experience, you're going to just rub against those 10 times, 100 times faster if you're using AI than the humans run at it. We've been working at the speed of people and now we're working at the speed of computers. And that's always been an interesting case like my software, if I know this endpoint's only going to be called 10 times in an hour, I don't have to optimize it. If it's going to be called 1,000 times a second, I'm going to design it differently. I'm going to have to write maybe more complex code that only a few people know how to use, but it's fit for purpose.
So, you have to do those trade-off analyses, and doing the trade-off analysis has always been part of the architect's toolbox.
How is AI affecting things like DORA metrics? [15:49]
Daniel Bryant: Yes, it's a good shout, Thomas. It's interesting: Nicole Forsgren in that keynote at QCon SF, I really enjoyed that. I mean, Nicole, Dr. Forsgren, amazing. DORA, Accelerate, all these amazing things that she and her team have worked on. And Shane, I know you were going to call out the DORA Report shifting to the State of AI-assisted Software Development. I think it's a nice segue there into that.
Shane Hastie: What Nicole said in that keynote comes through in pretty much every episode of the Culture Podcast this year, where somebody has brought in the, "Okay, we are doing this with AI and it's done that and it's made this worse". Definitely, for good engineering teams in an environment where there's good support, AI accelerates, but it also brings in those friction points; and organizations and teams that are average and mediocre are getting more average and more mediocre. We're definitely seeing a bifurcation.
Thomas Betts: One of my takeaways is that culture eats strategy for breakfast like, "Oh, we want to use this AI, but we're not in a place to do those things". And the high-performing teams were already in a mindset of, "I'm going to be constantly learning". That growth mindset versus the fixed mindset. "And so this is just one more tool I can use and how can I use it to go faster versus I know everything I need to do and now I'm going to apply AI, but I shouldn't have to change how I do it".
Shane Hastie: Yes. The other thing is the organizational mandate for AI without really thinking about what the implications are for us. It's just: everyone else is using this AI thing, so you need to use the AI thing, and do it tomorrow. And everyone tells me it's fast, so do it today instead.
Daniel Bryant: Yes, yes. Reminds me of the Simon Wardley quote, Shane, where he says: in a military campaign, well, 67% of generals bombard a hill, so we're going to bombard the hill. But if you're not thinking about the situational awareness, the strategy, the tactics, it's nonsense, right? But we've all seen it. So.
Shane Hastie: Yes.
What is the impact of agentic AI? [17:54]
Daniel Bryant: So, we definitely mentioned the AI buzzword. We have to. And Thomas, I know you wanted to talk about some of the new concerns and the same old concerns. And then I think that's a nice lead into what Srini's going to talk about with the AI as well. So, Thomas, you wanted to talk a little bit about the agentic AI concerns?
Thomas Betts: Yes. So, I remember at QCon San Francisco a year ago, in 2024, the closing keynote was a talk about agentic AI, and it was very much future-looking: here's what's coming. I had no idea how much of that was going to happen so fast. If you look at our architecture and design trends report that came out, I think in April, MCP wasn't on it. It's amazing to think back that barely six, eight months ago, we weren't talking about it because it didn't exist yet, or it just barely existed. It wasn't relevant. And now everyone's talking about MCP, everyone's talking about agents. But I think that gets back to those friction points and what Shane was talking about, the way people work. If you haven't optimized for this new way of doing stuff and you haven't been preparing for it, you're going to trip over yourself.
And some of the things that make agents work well, I think, are the same patterns we've been following for years. So, if you had good separation of concerns, if you had microservices, you can say, "Analyze this one little bit". Smaller agents, very focused, are much more efficient than the large "solve all my problems right now". If you try to write a 100-page document, it's going to trip over itself. But if you ask for 500 words, it's really good. Adrian Cockcroft gave that presentation at QCon about using agent swarms for fun and profit, and one of his key takeaways was lots of little agents working together. It still gets to that cognitive load; I think the cognitive load of the LLM has to be a concern. So, how does that show up in our software and our architecture design? I think we're going to see some of the old patterns that people didn't necessarily adopt very much, or that were niche, become more mainstream.
Things like an actor pattern because now it makes sense to have your agent as an actor and that's already been established. We know how to do that, but it's been around for decades. So, I think we're going to see that show up.
Renato Losio: Actually, the reference to MCP made me laugh, because when we were discussing the keynote for Munich, and that was about March or April, the idea was actually about AI in 2025 and beyond. And one of the topics was whether we should call out what MCP was, because at that time, in March, it was not very clear. By September, we decided to remove the description of what it was, because at that point it was pretty obvious. That's how fast things change in that space.
Srini Penchikala: AI as a team member was mentioned earlier in the discussion. This is one of the emerging themes I am seeing as well. I watched a presentation by Glean's product management lead the other day. This company envisions AI tools becoming an integral part of software developers' day-to-day tasks by integrating nicely into the workflow at the right steps in the process: alerting the team member on which tasks or backlog items to focus on on a specific day, and also managing the calendar based on the day's priorities and urgencies, with AI agent-assisted task prioritization deciding what to work on to deliver value to the users. Also, Martin Fowler and his team have been writing and talking about a term they coined, expert generalist, a concept I first learned about when I read the book Range by David Epstein.
Martin's team describes some of the characteristics of an expert generalist as curiosity, collaborativeness, favoring fundamental knowledge, and having a blend of generalist and specialist skills. I think this concept is even more relevant in the area of AI adoption by organizations, where it'll be more important to have a general understanding of overall AI tool adoption alongside in-depth subject matter expertise in one or two areas in the space. Echoing Shane's suggestion from earlier in the discussion that AI will not replace people, but people who can use AI tools better than us will replace us, I would like to take that sentiment up to the organizational level and suggest that companies that don't adapt or take advantage of AI technologies where those technologies add value will be replaced by companies that do adopt them to automate their business processes and gain additional insights.
I also see in some instances, companies are going overboard with the AI first mandates and initiatives. AI technologies are a good fit for specific use cases, but they are not good for other use cases. Companies should definitely embrace and leverage AI solutions where they make sense, but be cautious about using them where they're not a good fit. AI technologies are like any other tool in our toolbox. They're only as powerful as we leverage them for the right use cases and in the right context.
Daniel Bryant: Thank you, Srini. I'm particularly loving the mention of expert generalists. I was lucky enough to meet Martin Fowler again at a conference earlier this year. It was great chatting with him, and he was really on the expert generalist theme, chatting about this with all the folks around the dinner table, extolling the virtues of learning more about it. And he was saying, I believe, that at ThoughtWorks they were seeing more and more need for this expert generalist role in the consulting work they do on fantastic software projects. So, a big plus one to Martin's mention of expert generalists, and also to the Range book, a fantastic book. I do see myself as an expert generalist, and I'm keen to learn more about that as time goes on as well. I will also call out now: is there anything you wanted to mention in relation to the InfoQ AI and ML trends report?
I've got to share all the trend reports. We put them out once every couple of months at InfoQ, and we change topics: we focus on AI, Java, architecture, Culture & Methods, all the things. But Srini, with the AI and ML trend report in particular, which was produced at the start of the fall, we got a lot of great feedback. So, I'd love to get your thoughts on that.
Srini Penchikala: 2025 was definitely the year AI was talked about the most. In this year's AI and ML trends discussion, our team highlighted the following topics. The first one is language model innovations. We have seen a lot of interesting developments in language models this year, including vision language models and small language models, which we talked about last year and which are getting more adoption this year. We have also seen the emergence of reasoning models. These models can not only analyze, predict, and generate content; the companies behind them claim that they can even think. We have also seen the development of state-space models. Some of the highlights of language model and LLM innovations include OpenAI's GPT-5, and among the vision LLMs, we had tools like OpenAI's Sora pioneering generative video content. Reasoning models such as GPT-5 Thinking generate an explicit chain of thought before producing an answer, so these are very powerful reasoning models.
Small language models continue to gain traction, especially for on-device inference, privacy-preserving applications, and cost-sensitive deployments. Also, in terms of tools, there are a lot of innovations happening in the infrastructure for running AI applications. Tools like vLLM and llm-d from Red Hat, which was showcased at this year's KubeCon conference, can help development teams run these LLM models on-premises in their own data center without depending on cloud-based solutions, where the cloud-based option may be very expensive or where there may be privacy reasons that teams do not want the data to leave their company's network.
What ethical concerns remain around the adoption of AI? [25:59]
Daniel Bryant: Very interesting, Srini. Yes. Shane, I'm pretty sure I saw in the notes that you had some ethics and AI-related themes you wanted to discuss as well.
Shane Hastie: Yes. One of the things that has worried me is that there was a stronger conversation even a year ago about the ethical implications. Now, Thomas, you made the point that customers are starting to ask about it, but I see organizations slipping away from some of the ethical guidance, and I don't know whether this is a reaction to the geopolitical situation around us, those pressures, but there's less concern about the "just because we can doesn't mean we should". There are all the lawsuits going on with the AI companies, among others, but for me there's a whole lot of concern that we are stepping onto a bit of a slippery slope. Sustainability, when it is linked to cost, people are starting to care about.
So, can we reduce the processor costs, the transaction costs, managing your cloud workflows, managing the API calls for the LLMs, and so forth. But sustainability from, again, that human perspective: if we go back to the Agile Manifesto's sustainable pace, I don't see that. I see organizations, and maybe it's there again: so many restructures, so many roles lost, so many jobs lost. Those that are left are hunkering down; they're scared, they're nervous, and you do see people working at unsustainable paces. "It's okay to sleep on the floor behind your desk because we've got this deadline". It never has been okay, and it never should be.
Daniel Bryant: Yes, definitely seeing a bit of that. The 996 culture is what I've been hearing about in the Bay Area, drifting across to some places. So, yes, it's a tricky one. In my work with the Bay Area, I have definitely seen some pushback now. A few months ago it was "everyone's got to be doing 996", and then people literally started burning out. And even the VCs are now pushing back, saying, "Hey, we'd love you to work really hard, but you've got to be in it for the long haul". I know it's a challenge, but I'm hopeful that there is a bit of pushback emerging from the thought leaders in the space around sustainability. And to your point, Shane, calling back to the original documents and ideas like the Agile Manifesto: as I'm sure we're going to say a lot tonight, there are a lot of core principles that we forget sometimes.
And if you look back to original extreme programming and the Agile Manifesto, there's a lot of good content in there. I tell the junior folks I mentor, "We're having to relearn some of this stuff, so go and check out the old books. They're actually really good for some of the architectural basics, the sociotechnical basics, the ways of working, the ways of putting systems together". I think there's a lot to relearn sometimes.
Renato Losio: Yes. I'm a bit more pessimistic than you, I guess, because the feeling I have is that we often use sustainability just as a proxy for something else, as was mentioned before. In the cloud, we say sustainability, but actually what we mean is just a cost reduction exercise; we don't really care about CO2 or whatever. And in terms of work, yes, some VCs are pushing back now, but because they saw the burnout and they don't see the results anymore, not because they are really fully into it. That's the sad part. I mean, if we achieve the result in the end, still good, but I don't think the reasons are the ones that Shane mentioned before, I think it's more like-
What are the big stories from the cloud computing space in 2025? [29:37]
Daniel Bryant: Well, the capitalist angle there, Renato. Yes. No, I hear what you're saying. I mean, VC by its very nature is very capitalistic. So, yes, I hear you loud and clear. But you mentioned the cloud there, and it's a perfect segue, I think, into some of your highlights. I know you've been doing a lot of work in the cloud space, with a lot of amazing articles you've written on the InfoQ news site over the last year. A lot of them focus on outages; seemingly every week we're covering another outage, right? I won't mention any vendors' names, I'll let you do that. But what were your highlights from this year, Renato?
Renato Losio: Yes. Well, you mentioned outages; I think they've been the main key point of Q4. In quite a bizarre way, I think they have been good for the industry. I mean, the idea has always been that the cloud works, everything works all the time, even if the message has always been "everything fails all the time". AWS had that failure in Northern Virginia, and there were actually two key points I liked. One was how much of our infrastructure depends on the cloud, whether it's AWS, Azure, Cloudflare, or whoever else. And the second one was that even if you claim to be multi-region, to be across regions, the feeling I have is that most people, the ones that got away with it, were mostly lucky. I mean, I have no problem saying that it's not that I built multi-region; it's that if I had a deployment in other regions and I wasn't using anything related to services that depend on the control plane in Northern Virginia, maybe I got a lucky day.
The reaction has been interesting, because you start to see the standard path: not mentioning names, but we all know the big believers in moving back to your own data center, because maybe they were able to do it. But I still believe that for most deployments, most customers, most projects, the cloud part is not the weakest link. I mean, yes, of course AWS can go down, an entire region can go down, but for most workloads, usually that's not the biggest problem. The weakest part is your own part: your own infrastructure, your own application. And actually, I would even say that when there are those big failures, if the biggest providers are down, in most cases your own customers probably don't even realize it; they have bigger problems to handle than your own project. But that's a very negative way to say it.
And going back to the teams you mentioned before, this is what I see in many teams. They discussed day and night about multi-region, highly available systems or whatever, and then you realize that the not-highly-available part is actually the human component: either someone who is burnt out, or a team that has one person managing everything and holding the only key to the entire infrastructure. But that's on the ops side. In terms of cloud providers, I mean, I think we are getting a bit, I wouldn't say bored, but there isn't the excitement for the newer generation that there was, I don't know, 10 years ago about AWS or maybe Azure. I think the only one that is going in the direction of being something new in terms of services, or something more targeting developers, is probably Cloudflare at the moment, which has some interesting new features, some interesting new services, maybe because they're new in the space.
They were not in this space as a provider before. Of course, we have used their CDN for many years, but they've extended their platform to do everything serverless, to have something more developer-focused. I think they are the most interesting one, at least for me, in the last 12 months. Yes. In terms of announcements, I don't really know what to say. I mean, we have to say that, as with every major conference, we keep claiming every year is better than the year before, but the reality is that at the moment most of the focus is on AI. I mean, if I think about re:Invent two weeks ago, there was really, to me, a big disconnect between what developers want and what the conference was about. I don't think the point is that AWS doesn't know; it's that the target of those announcements wasn't really the practitioners. The practitioners were there to enjoy the parties, if they were there.
But yes, I think one of the major comments about the first keynote was that, out of two hours, there were about 15 minutes at the end related to the real announcements that developers cared about. And there were some significant ones. I mean, in the serverless space there were durable functions, and there was this new approach where serverless is not just serverless versus server anymore; you can run serverless functions on your own infrastructure. So, it's kind of mixing; it's an interesting direction. But those things were side announcements next to the big AI stuff. I'm definitely not a big fan of those announcements; I have the feeling that they're not done for the developer. They're more like throwing many things out there and seeing if anything sticks.
Thomas Betts: It's not just the big cloud providers; I think other software companies are doing the "I have to stick AI on it" thing too. The announcements from every company conference for the last year were, "Hey, our new stuff has AI, we're all AI-first, and we're doing all these things". And some of the feedback from customers is like, "Could you just improve the software? How does that actually make me do my job better? I wasn't asking for it". And this might've been something I said last year, I know I brought it up on some podcast: AI is the feature that people don't want, but they're going to eventually expect. I should just be able to have a chat interface and it should just work. But right now, sticking a chat interface in every single website is not necessarily the right answer.
Every app does not need a chat thing. Maybe we'll get there, and maybe we'll just be like, "Solve all my problems, master chat interface", and it's all agentic with MCP, but there's a lot of stuff that has to be there before that magic can happen. We aren't able to just wave the magic wand. And I think if you go back to what customers are asking for, what developers are asking for: I just want to get my job done. I want to do one level better, not "surprise, it's 100 times better because AI". I think we can see past the smokescreen on that.
Renato Losio: Yes. I think one key difference with AI is that we were used to big cloud providers, and I'm almost always referring to AWS, where once something was out there, was announced, it was, I wouldn't say forever, but you could trust that they were saying, "This is a new service; these are the APIs we'll support for you". It's kind of a trust thing: you can use it because it's going to be there. Well, first, that has shifted significantly, but for AI stuff especially it's more like: here are 200 announcements for this year; by the end of the year, half of them are gone, and good luck.
What has happened in the platform engineering space over the last year? [36:24]
Daniel Bryant: I was going to add on: looking at the platform engineering space where I'm focused, there's definitely a lot of development around chat interfaces, to your point, Thomas. The only area where I've really seen it be successful is observability. Folks are actually doing a lot more dashboard-based work, like the Incident.io folks I was chatting to. Martha did a great presentation at QCon London on one of my tracks, and I saw Dash0 and a bunch of folks like Datadog, many vendors in this space, trying to get the dashboards right and then empowering you to ask questions. So, not just going 100% chat interface, but going dashboards first: drill down, drill down, drill down so you can find the problem, but then maybe ask, "Where should I start, and can you help me synthesize these results?" So, I do see the observability side making progress in the platform engineering space.
Unfortunately, not so much the security space. And I will put my hands up and say I'm not a security expert. And folks, if you disagree with me and you're listening, hit me up on LinkedIn and tell me why I'm wrong. But at least from what I've seen at KubeCon and QCon, the security space has still not found the right AI-driven interface yet. They're trying to flag up certain things by analyzing logs and whatever, but the chat interface isn't quite there yet. So, I see a lot of interesting developments in the platform space. I definitely want to call out that it's almost coming full circle to a few things we've mentioned tonight. There's definitely a push towards building a platform as a product. And you said it, Thomas: actually asking your customers what they want. And in this case, it's the developers, the QA folks, the product managers.
A lot of people build platforms as simply collections of tools. Here's your Jenkins for deployment. Here's your Docker for building images. All great technologies, but as a collection and ecosystem, they need to be cohesive, so that you as a developer or a QA person can actually get your job done, and you want a nice interface and nice abstraction to do that. In the cloud trends report I did with Shweta Vohra, Matt Saunders, and Steef-Jan as well, we talked a lot about this notion of how you actually build a platform as a product, and Shweta had some really good ideas. She's also written some books, so if folks are interested, check out Shweta's books on these kinds of things. She got together a bunch of architects who were talking about platforms and broke down why you need to think about your platform like you do the normal product you're actually producing: think about your internal toolset and platform as a product.
And a lot of it kept coming back to sociotechnical systems. We're building systems for people, and as you mentioned, Shane, in this age of AI we forget that sometimes. And I do want to call out Susanne Kaiser's amazing work, which brings a lot of these concepts together. I'm very lucky to know Susanne. No kickbacks on the book, but I've known her for years and she's awesome. The book is finally published; we were chatting, I think, at PlatformCon in London a couple of months ago. I've reviewed the book, and it's just brilliant, bringing together things like Team Topologies, which I'm a massive fan of, for actually realizing your platform. We know Matthew and Manuel; they're InfoQ fans and awesome people, and they've really brought together these ideas of how you build an organization to build a platform and build things as a service.
And Susanne has layered on DDD, domain-driven design, which you mentioned in the intro, Thomas, and she's layered on Wardley mapping, which connects to some of the things you've talked about, Shane, like thinking big picture. I'm a big fan of Simon Wardley's work, as I'm sure all of us here are. So, I'm going to give a shout to Susanne's book; like I say, it's really good to read that one. If you're looking to build platforms as a product, you want to think about the architecture, the people, and the technology, and bring them all together, and this is a really good jumping-off point to do that.
Shane Hastie: Building on the Team Topologies thoughts, what we are seeing is more dynamic team structures starting to adopt AI as a teammate, and you're seeing a blurring of role definitions. Product managers can produce code. Is the code fully production-ready? Definitely not, but we're accelerating a lot of the thinking, and your feedback cycles become so much shorter. The team structures we're starting to see now are much more fluid. We've had dynamic reteaming from Heidi Helfand from years back, and organizations needing to get good at reteaming, and teams being good at bringing new people on, adapting and adjusting. Well, what I see happening now is even faster. I had Love Kapoor talking about composable teams and fluid organization structures. These are human competencies that we're not good at. So, there's a learning process for the people in the teams. How do we get a new person, bring them into the team, and ramp them up quickly?
We don't have the six-month lead-in time that was quite common; we expect people to arrive on the team and start being productive within days. The other thing that concerns me in that space is: what do we do with the junior who's now coming into the team? How do we get them up to speed? The team dynamics are radically shifting. And again, I'm not sure whether every organization, one, understands what they're doing with this, and two, has provided that platform, the human platform, to enable it. What are you folks seeing?
Thomas Betts: Well, I think what makes teams highly effective is the level of trust you have among the people. I know Shane's got my back. I know Daniel's got my back, Renato's got... We're in this together; we as a team are doing it. And that's one of the tenets of Team Topologies: the basic unit of work is not the individual, it's the team. The team completes the feature. I think some people are thinking, with AI tools and an AI team member, well, that's just going to be the thing, and it's almost putting people back into the "you're just an interchangeable cog" mindset. Why should I give Thomas this work to do? I can just have the AI write the code instead. And if my only value was writing code, then yes, I'm easily replaceable by an AI agent, especially for front-end code; it can write that better than me. That's not my forte.
But solving a problem, interacting with people, doing those things, that's where the humans still work together. I think we still need to get better at communicating and explaining our intent. The product owner can just write the code? Well, no, the product owner can get the code written if they can explain their problem. Going back to: if we have good communication, we have better outcomes. Don't try to explain it all upfront, just keep talking. If I have a constant chat with my LLM, I get better results than if I give it a one-sentence instruction and come back in an hour hoping it's done. That didn't work with people, and it doesn't work with AI. So, I think there's a little bit of, going back to that trust, how do we get people to keep trusting each other if we're changing these team dynamics?
And I'm worried that when you talk about these composable teams, that gets to "people are just cogs", and that's going to lead to burnout and dissatisfaction. I don't see that being successful.
What are our predictions for technologies, techniques, and trends in 2026? [43:19]
Daniel Bryant: Fascinating. Fascinating. Is that a point to hopefully move on to some lighter topics with our predictions? But very well said, folks. We try to have the difficult conversations here to get people listening and thinking about the next six months, 12 months, 18 months. So, hopefully we've done that, folks. If we have, let us know in the feedback; we always appreciate hearing from you. But I'd love to move on to predictions now. Where do we think the world is going? Renato, should I start with you? In 2026, what are you putting up for grabs?
Renato Losio: Well, for 2026, I don't want to get them wrong, so I will be even bolder than last year and stay abstract. I mentioned before that I really believe most of the announcements made at big conferences weren't really targeting developers; they were more about trying to see what sticks. I honestly think that by the end of the year, we'll have forgotten at least 80% of them, or they'll be deprecated. And that would be a significant change from previous paradigms. So, one thing that really surprised me in 2025, and that I think will keep going in 2026, is a change in the way that big cloud providers are using open source. Until recently, there was the big dilemma of licensing, with companies changing the license because they didn't want their software offered as a service by big, nasty cloud providers that were making money out of everything, whatever.
What happened last year with Valkey was, to me, a big change, a big shift, basically. For the very first time, cloud providers, in this case it was mostly AWS, but different cloud providers, collaborated on an open source fork and took ownership, giving it to the Linux Foundation in the end. I'm simplifying a bit. So, basically, they are changing the paradigm: instead of relying on open source software from someone else, it's, "We don't like it anymore, we don't like the license change, we take over", and it's not a minor fork anymore; it's going to be basically the leading one. That's the direction that project is already taking right now. Maybe it will be different, I don't know; at the moment, that's the direction. I expect that to happen in other cases. DocumentDB was donated to the Linux Foundation by Microsoft, again with other providers behind it.
Do I expect that one to lead the way? Maybe, or maybe that's even the future of MySQL, who knows? It's declining, but maybe some cloud provider will take it over. I don't know. But I have the feeling the direction is no longer about discussing reliance on a third-party license, but about forking interesting projects. Whether that's good or not, I don't know, but that's the direction I see.
Daniel Bryant: Super prediction. Thank you, Renato. That's a very interesting one. Very interesting. I need to noodle on that. Thomas.
Thomas Betts: I'll start with a plug for an upcoming episode of the InfoQ Podcast: I'm interviewing Madelyn Olson about Valkey. She spoke at QCon about the low-level "here's how we improve performance". The geekiest talk I've ever attended, and I loved it. I had to talk to her, and it touched a little bit on the open source discussion. But predictions: since we didn't talk about MCP, I feel like I have to say that's my prediction for 2026. I think it's a softball pitch that everyone's just going to be implementing MCP servers left and right, if they aren't already. But I think from an architecture perspective, it's a little bit more about architecting for AI: AI as a first-class design principle. How do we structure our data? How do we make vector databases so they're more easily searchable? How do we get those things so the LLMs can consume them, so other AI tools can consume our data? Or, with MCP, how can they use our API as a tool, as opposed to just "read the documentation and figure out the API yourself"?
And I think that might accelerate some things, but there are going to be companies that haven't kept up with their API standards and are going to struggle. And I think we're going to see a little bit more of a split there. Jumping off on that, I have to throw out there: is the AI bubble going to burst in 2026?
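[Editor's note] Thomas's idea of letting an AI "use our API as a tool" boils down to publishing a machine-readable tool description (a name, a description, and a JSON Schema for the inputs) plus a dispatcher that routes tool calls to the underlying API. A minimal, dependency-free sketch of that shape follows; the tool name, order data, and functions here are hypothetical, and this hand-rolls in plain Python what MCP-style protocols formalize, rather than using any official SDK:

```python
import json

# A tool is advertised as a name, a human-readable description, and a
# JSON Schema for its inputs -- the shape MCP-style protocols use so an
# LLM can discover a tool and know how to call it.
TOOLS = {
    "get_order_status": {
        "description": "Look up the shipping status of an order by ID.",
        "inputSchema": {
            "type": "object",
            "properties": {"order_id": {"type": "string"}},
            "required": ["order_id"],
        },
    }
}

# Hypothetical backing "API" that the tool wraps.
_ORDERS = {"A-1001": "shipped", "A-1002": "processing"}

def list_tools() -> str:
    """What a server would return when a client asks which tools exist."""
    return json.dumps(TOOLS)

def call_tool(name: str, arguments: dict) -> str:
    """Dispatch a tool call to the underlying API function."""
    if name != "get_order_status":
        raise ValueError(f"unknown tool: {name}")
    status = _ORDERS.get(arguments["order_id"], "not found")
    return json.dumps({"order_id": arguments["order_id"], "status": status})

if __name__ == "__main__":
    # prints {"order_id": "A-1001", "status": "shipped"}
    print(call_tool("get_order_status", {"order_id": "A-1001"}))
```

The point of the schema is exactly the contrast Thomas draws: the model does not have to "read the documentation and figure out the API yourself", because the contract travels with the tool.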
Daniel Bryant: Oh, someone said it.
Thomas Betts: I feel like there has to at least be some sort of contraction, given the amount of hype from every company saying we're going to do AI. I think there's going to be some catastrophic thing that happens, because the doomsday part of me says something's going to go bad, something bigger than the "oops, I accidentally deleted our production database". There are going to be other stories of "we went down this business path, it didn't happen, the market changed". There are going to be external forces that apply. And I think people will have a better perspective this time in 2026, a year from now: "Okay, here's an actual path forward", instead of just the hype cycle.
Daniel Bryant: Yes, I love it. Shane, over to you.
Shane Hastie: I'm going to build on the bifurcation. Good organizations and teams getting better and better and better, and the bad ones getting worse and worse and maybe disappearing; they will be the victims. Now, there's the oft-misquoted line, and we don't know who said it first, that AI won't take your job, but other people using AI will. That's going to continue happening and be exacerbated, I think. Sensible organizations will put a lot more focus on the human skills, the critical thinking, helping our people get better at the stuff that people are so good at, and building these great team environments that can adapt and respond effectively. So, that's a prediction, or is it a hope?
Daniel Bryant: I like it, Shane. I like it.
Thomas Betts: Daniel, what's yours?
Daniel Bryant: Well, I'll hand over to Srini first.
Srini Penchikala: Let me start with my team's predictions that we talked about in the 2025 AI and ML Trends Report Podcast, which we published back in September of this year. The team was mainly predicting that the next frontier in AI technologies will be physical AI. There will be major developments in bringing together the physical and digital aspects of AI, and we believe this will be most evident in industries like manufacturing, supply chain, and logistics. We also think retrieval-augmented generation has become a commodity with the increasing adoption of RAG-based solutions this year. We'll see more adoption in 2026, but other emerging trends like enhanced agentic RAG, or EA RAG, are taking both RAG techniques and agentic AI technologies to the next level. We will see a lot of mention of EA RAG in 2026 and more adoption of this specialized RAG-based technology. A shift, as we mentioned earlier in the discussion, is also occurring from AI being an assistant to AI being a co-creator of the software.
We are not just seeing AI tools that write the code faster; we are entering a phase where the entire application can be developed, tested, and shipped with AI as part of the development team. We're also seeing AI-driven DevOps processes and practices getting a lot more attention this year, and they will continue to be more important in 2026. Our guest Savannah Kunovsky, in the AI and ML trends report discussion, mentioned that in the area of human-computer interaction, or HCI, we should map all of our research and engineering goals to true human needs, understand how these technologies fit into people's lives, and design for that, so we design AI solutions for the human users. We are also seeing new protocols like the Model Context Protocol, MCP, and the Agent2Agent protocol from Google, A2A. They will continue to offer interoperability between AI client applications and the AI agents and services running in the backend.
To further expand on my predictions for 2026: overall AI-based application development will be even more unified, with innovations in different aspects of the AI dev lifecycle, including components like AI gateways, the MCP and A2A protocols, multimodal LLMs where the content can be text, audio, or video, and obviously small language models, SLMs, for edge computing, and finally AI agents. Also, similar to cloud-native and edge-native application architectures, we're already starting to see AI-native architectures, and AI-native application development will be even more important in 2026 and beyond. Looking beyond 2026, I also think voice will become a more prominent interface for using AI technologies in our day-to-day lives, especially in smart homes, autonomous vehicles, and other devices where voice-based interaction is safer and more user-friendly than typing on a screen. I'm also predicting that in 2026 there will be significant developments in the area of AI security, with more guardrails and checks and balances to ensure the AI solutions we develop are safe, secure, reliable, and trustworthy.
Daniel Bryant: Building on this AI theme, nearly all of us have mentioned it, right? I think that AI is going to expose brittle platforms faster than it fixes them. I'm very much in the platform engineering space, and I had some great chats with the analysts, folks that do it full-time as a job, like Gartner, Forrester, and Intellyx, whom I was chatting to at KubeCon. They were saying AI is actually rejuvenating investment in platform engineering, because folks are struggling to deliver AI projects. And I think, Shane, you mentioned it, I think we all kind of said it: you can actually go 10X faster, going back to Dr. Nicole Forsgren's keynote, but if your platform is brittle, it literally falls apart when you're going at 10X the speed. It's like driving a very old, battered car down the freeway: the faster you go, the more likely the wing is to fly off.
So, I think you want to ease your speed back a little bit. And I do see AI providing value in the platform engineering space, but not at the speed that perhaps we need to go. So, to riff off your comment a little bit, Thomas: unfortunately, I think we're going to see a few big issues, and when people start doing the five whys, it's actually going to be that the platform didn't provide the guardrails it needed to. And of course, people will blame the humans there. The one person, we always know that's the "root cause": "Oh, Bill or Jane pressed the wrong button". But why could they press the wrong button? There were no guardrails, and doubling down on all the things we've said here, the sociotechnical system was not set up for success.
And I do think it's going to take, unfortunately, a few failures before we all start coming together and going, "Hey, we really should invest in guardrails and safety as much as we do in speed, efficiency, and scalability". So, hopefully not too pessimistic a roundup there, but I think we're cresting the wave of the AI hype, and we're all realists here, I think, aren't we? In the work that we do with the fantastic InfoQ community we interact with day in, day out, we see a lot of amazing people at the QCon conferences and the InfoQ Dev Summits, and I think we're echoing back a little bit of what we're hearing in terms of concerns.
Wrapping up [54:30]
So, hopefully this has been food for thought for everyone as we kick off the new year. Do stay tuned: we're going to be doing lots of podcasts, lots of news, lots of articles. Whatever works best for you, we've got you covered at InfoQ. Please do check it out, but I will join the rest of the team in saying thank you for listening, happy New Year, and we'll see you in the future.
Thomas Betts: Happy New Year.
Renato Losio: Happy New Year.
Shane Hastie: Happy New Year.
Mentioned:
- Building Green Software, by Anne Currie, Sarah Hsu, Sara Bergman
- Facilitating Software Architecture, by Andrew Harmel-Law
- Unlocking AI’s full potential: 2025 DORA AI Capabilities Model report
- Range: How Generalists Triumph in a Specialized World, by David Epstein
- Expert Generalists by Unmesh Joshi, Gitanjali Venkatraman, and Martin Fowler
- Architecture Through Different Lenses 2025
- From Dashboard Soup to Observability Lasagna: Building Better Layers
- Team Topologies
- InfoQ Trends Reports from 2025
- Architecture for Flow: Adaptive Systems with Domain-Driven Design, Wardley Mapping, and Team Topologies, by Susanne Kaiser