
2023 Year in Review: AI/LLMs, Tech Leadership, Platform Engineering, and Architecture + Data

In this special year-end wrap-up podcast, Thomas Betts, Wes Reisz, Shane Hastie, Srini Penchikala, and Daniel Bryant reflect on technology trends in 2023 and discuss what they hope to see in 2024. Topics explored include the use of AI and LLMs within software delivery, the changing role of technical leadership, and the increasing integration of software architecture and data engineering.

Key Takeaways

  • AI and Large Language Models (LLMs) like ChatGPT have become more integrated and foundational in various domains, especially software development. We believe there is scope for increased usage in product design, software architecture “explainability”, and system operations (“AIOps”).
  • There is a noticeable shift in software delivery approaches and leadership strategies, with an increasing focus on ethics, sustainability, and inclusivity. Many cloud native and platform engineering initiatives, quite rightly, focus on people and processes first, making sure the organizational culture is aligned with the goals.
  • Software architecture is increasingly focusing on the integration of data engineering into the discipline. This includes the design and implementation of data pipelines, ML models, and related systems. Relatedly, the lessons learned from designing monoliths, microservices, and nanoservices are becoming widespread.
  • Open source and non-open source licensing is evolving. Architects must be aware of the implications of the dependencies included within their codebases; adopting a software bill of materials (SBOM) can help track these.
  • Our predictions for 2024 include the use of AI within software delivery becoming more seamless, an increasing divide between organizations and people adopting AI and those that don’t, and a shift towards composability and improved abstractions in the continuous delivery space.

Transcript

Welcome to the InfoQ podcast year in review! Introductions

Daniel Bryant: Hello, it's Daniel Bryant here. Before we start today's podcast, I wanted to tell you about QCon London 2024, our flagship conference that takes place in the heart of London next April 8th to 10th. Learn about senior practitioners' experiences and explore their points of view on emerging trends and best practices across topics like software architectures, generative AI, platform engineering, observability, and the secure software supply chain. Discover what your peers have learned, explore the techniques they're using and learn about the pitfalls to avoid.

I'll be there hosting the platform engineering track. Learn more at qconlondon.com. I hope to see you there. Hello, and welcome to the InfoQ podcast. My name is Daniel Bryant, and for this episode, all of the co-hosts of the InfoQ podcast have got together to reflect on 2023 and look ahead to the coming year. We plan to cover a range of topics across software delivery, from culture to cloud, from languages to LLMs. Let's do a quick round of introductions. Thomas, I'm looking in your direction. I'll start with you, please.

Thomas Betts: Hi, I'm Thomas Betts. In addition to being one of the hosts of the podcast, I'm the lead editor for Architecture and Design at InfoQ. This year, I got to actually be a speaker at QCon for the first time. I've been a track host before and done some of the behind-the-scenes work. That was probably my highlight of the year, getting out to London, getting up on stage and meeting some of the other speakers in that capacity. Over to Shane.

Shane Hastie: Good day, folks. I'm Shane Hastie. I host the Engineering Culture podcast, and I'm the lead editor for Culture and Methods on InfoQ. I would say QCon London was also one of my highlights for this year. It was just a great event. I track-hosted one of the people tracks and saw some of the interesting stuff that's happening in the people and culture space. Over to Srini.

Srini Penchikala: Thanks, Shane. Hello, everyone. I am Srini Penchikala. I am the lead editor for Data Engineering, AI, and ML at InfoQ, and I also co-host a podcast in the same space, covering data, AI, and ML. I'm also serving as a programming committee member for QCon London for the second year in a row. It has definitely become one of my favorite QCon events, so I'm looking forward to that very much, and also to discussing the emerging trends in various technologies in this podcast. Thank you.

Wes Reisz: Hi, I'm Wes Reisz. I am one of the co-chairs here on the InfoQ podcast. I haven't done a whole bunch this year, so I'm excited to actually get back in front of the mic. I had the privilege of chairing QCon San Francisco last year, and I also hosted the “Architectures you've always wondered about” track. Had some amazing talks there, which I'm sure we'll talk a bit about. For my day job, I work for ThoughtWorks as a technical principal, working on enterprise cloud modernization.

Daniel Bryant: Fantastic. Daniel Bryant here, longtime developer and architect, moved more into, dare I say, the world of go-to-market and DevRel over the last few years, but still very much keeping up-to-date with the technology. Much like yourselves, my highlights of the year revolve around QCons, for sure. Loved QCon London. I think I got to meet all of you. It's always fantastic meeting up with fellow InfoQ folks. I also really enjoyed QCon New York, and met a bunch of folks there. I hosted the platform engineering track at QCon SF, and that was a highlight for me.

Is cloud native and platform engineering all about people and processes? [03:02]

Daniel Bryant: That actually leads nicely into the first topic I wanted to discuss, because the platform engineering track really turned into a people and leadership track. We all know the tech is easy; the people are hard. I'd love to start by looking at what folks are seeing in regard to teams and leadership. Now, I know we've talked about Team Topologies before.

Wes Reisz: I'll jump in. First off, that track at QCon San Francisco that you mentioned you ran, I really enjoyed because it did focus on the people side of platform engineering. I consider myself a cloud native engineer; when I introduce myself, I talk about being a cloud native engineer. Over the last couple of years, for every single client that I've worked with, it seems that the problem we're really solving is more of a people problem. As you rightfully mentioned a second ago with Team Topologies – just to make sure everyone is level-set, Team Topologies is a book written by Matthew Skelton and Manuel Pais, both also InfoQ editors, who, if you read the book's introduction, first talked about the idea at QCon London many years back.

Regardless, Team Topologies is a book about organizing engineering teams for fast flow, removing friction and handoffs to help you deliver software faster. Back to your question, Daniel, where you talked about platform engineering and the importance of people in it. I really enjoyed that track at QCon San Francisco because it truly did focus on the people challenges involved in building effective platform teams. I think that shows we're not just talking about the need for a platform team, but how to build platform teams more effectively. I found it a really interesting track. What was your reasoning for putting together the track? What made you lean that way?

Daniel Bryant: It actually came from a discussion with Justin Cormack, who's the CTO of Docker now, and he was championing the track; he's on the QCon SF PC. He was saying that a lot of the focus at the moment on platform engineering is very much on the technology. I love technology, and I know you do too, Wes: the containers, the cloud technologies, infrastructure as code, and all that good stuff. He was saying that in his work, and I've seen this in my work as well, the hardest thing often is the vision, the strategy, the people, the management, the leadership. He was like, can we explore this topic in more depth? I thought it was a fantastic idea. I reached out to a few folks in the CNCF space and explored the topic in more depth. I was super lucky to chat with Hazel Weakly, David Stenglein, Yao Yue, Smruti Patel, and of course, Ben Hartshorne from Honeycomb as well.

Those folks did an amazing job on the track. I was humbled. I basically stood there and introduced them, and they just rocked it – fantastic coverage from all angles, from Hazel talking about the big-picture leadership challenges, to David going into some case studies, to Yao presenting a use case from Twitter.

What came through all the time was the need for strong leadership and a clear vision. Things like empathy for users, which sounds like a no-brainer: you're building a platform for users, for developers, and you need to empathize with them. Definitely, in my consulting work, even 10 or more years ago, I would see platforms being built with someone recreating Amazon but not thinking about the internal users. They would build this internal Amazon within their own data centers and be all happy, but then no one would use it, because they never actually asked the developers: what do you want? How do you want to interact with this?

What’s new in the world of software delivery leadership? [06:28]

Thoroughly enjoyed that QCon SF track. It was genuinely a privilege to be able to put the ideas together and orchestrate the folks there. Again, the actual speakers do all the fantastic work. I encourage folks to check out the QCon SF write-ups on InfoQ, and check out the videos on QCon when they come out as well. 

Shane, as our resident culture and methods expert on InfoQ, I'd love to get your thoughts on this in general: are you seeing a change of leadership? Is there a different strategy, a different vision, or more product thinking? What are the big questions? I'd love to get your take on that, please.

Shane Hastie: "Is it a generational shift?" is one of the questions that sits at the back of my head in terms of what's happening in the leadership space in particular. We're seeing the boomers resigning and moving out of the workforce. We're seeing a demand for purpose. We're seeing a demand for ethics, for values that actually make sense, from the people we're looking to employ. As a result of that, is organizational leadership changing? I honestly think yes, it is. Is it as fast as it could be? Maybe not.

There is a fundamental "turning of the oil tanker", I think, going on in leadership that is definitely bringing to the fore social responsibility, sustainability, values, and purpose; developer experience touches it too. Just treating people like people. On the other side, we've had massive layoffs. Anecdotally, we are hearing that those have resulted in lots more startups that are doing quite well.

That was a hugely disruptive period that seems to have settled now. Certainly, I would say the first six, maybe even nine, months of 2023 were characterized by that disruption, and by the hype – or is it hype? – around programmer productivity. The McKinsey article blew up, and everybody objected: what do we mean by programmer productivity? How do we measure it? Should we measure it? Then, are we actually getting some value out of the AI tools in the developer space? Certainly, I can say for myself, yes, I've started using them, and I'm finding value. There's not a lot of hard data yet, but the anecdotal stories are that AI makes good programmers great. It doesn't make bad programmers good.

I think one of the things that scares me or worries me – and I see this not just in programming, but in all of our professions – is that good architects get great because they've got the tools at their fingertips. Good analysts get better because they've got the tools at their fingertips.

One of the things I think we run the risk of losing is how we build those base competencies upon which a good architect can then leverage the AI tools, and upon which a good programmer can then use Copilot to really help them get faster. There's learning that needs to happen right at the beginning as we build up early-career folks. Are we leaving them in a hole? That's my concern. A lot going on there.

What is the impact of AI/LLM-based copilots on software development? [10:08]

Daniel Bryant: There are a few things that tie together there, Shane, with what we talked about last year on the podcast: hybrid working, and how not being in the office impacts exactly what you are saying.

I mean, Thomas, on your teams, are you using Copilot? Are you using things like that? To Shane's point, how do you find the junior folks coming on board? Do you bootcamp them first and then give them AI tools, or what's going on, I guess?

Thomas Betts: We took a somewhat cautious approach. We had a pilot of Copilot, and a few people got to use it because we're trying to figure out how this fits into our corporate standards. There are concerns about data: Copilot is taking my code, sending it up, and generating a response. Where is my code going? Is it leaking out there?

Just one of those general concerns people have about these large language models: what are they built on? Is it going to get used? For that reason, we're taking it cautiously. The results are pretty promising, and we're figuring out when it makes sense. There's a cost associated with it, but if you look at the productivity gains, it's like, well, we pay for your IDE licenses every year; this is just one more part of that. It does get to how you teach somebody to be good at it.

I think there are a couple of really good use cases for Copilot and tools like this, like generating unit tests and helping you understand code. Throw a junior developer at a code base of 10,000 lines, and their eyes just glaze over. They don't know what it means, and they might not be comfortable asking another person every day, what does this do? What does this do? Now they can just highlight it and say, "Hey, Copilot, what does that do?" and it explains it.

As long as everyone understands that it's only mostly correct – it doesn't have the same institutional knowledge as the developer who's been around for 20 years and knows how every bit of the system works – it can help you understand what the code does, give that junior person a little extra step-up in understanding, and then help them feel confident about modifying it.

Can I do this? How would I interject this code? Even for experienced people – I've got 20-plus years of development – I've fallen into the trap of just "add this method". I'm like, great. Then it turns out the method doesn't exist; it made it up. The hallucinations get us all.

There's still time to learn how all this works, but I still go back to the days before the Internet when you had to go to the library and look stuff up in a card catalog. Now, we have Google. Google makes me a much better researcher because I have everything at my fingertips. I don't have to have all that knowledge in my brain. I don't have to have read 10,000 books. Should I not have Google and search engines? No. It's another tool that you get to use to be better at your job.
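To make the "generate unit tests" use case Thomas mentions concrete, here is a hypothetical illustration. The function and the test cases below are invented for this example, but they are the shape of output an assistant like Copilot typically suggests when asked to test a small function:

```python
import unittest

def apply_discount(price: float, percent: float) -> float:
    """Return price reduced by percent, with percent clamped to 0-100."""
    percent = max(0.0, min(100.0, percent))
    return round(price * (1 - percent / 100), 2)

# Tests of the kind an assistant might propose when prompted with
# "generate unit tests for apply_discount":
class TestApplyDiscount(unittest.TestCase):
    def test_normal_discount(self):
        self.assertEqual(apply_discount(100.0, 25), 75.0)

    def test_percent_clamped_above_100(self):
        self.assertEqual(apply_discount(50.0, 150), 0.0)

    def test_zero_discount(self):
        self.assertEqual(apply_discount(80.0, 0), 80.0)
```

As the discussion notes, the generated tests still need review: the assistant only sees the code's shape, not the business rules behind it.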

Daniel Bryant: I love it. Thomas, love it. Srini, I know we're going to go into this, the foundations and some of the tech behind this later on with you. You did a fantastic trend report for us earlier last year. Srini, is your team using things like Copilot, and do you personally think there's value in it? Then what are you thinking of doing with that kind of tech over the coming year?

Srini Penchikala: I agree with Shane and Thomas. These are tools that will make programmers better programmers, but they're not going to solve your problems for you. You need to know what problems you need to solve. I look at this as another form of reuse. For example, let's say I need to connect to a Kafka broker, whether in Java or Python. I can use a library that's available, or write something myself, or now I can ask ChatGPT or GitHub Copilot, "Hey, give me a code snippet showing how to connect to Kafka using Python or Java."

It's another form of reuse if we use it properly, and we get better at being productive programmers. Definitely, I see this being used, Daniel – Copilot, especially. ChatGPT is not there yet in terms of mass adoption in companies. Copilot is already there because of the Microsoft presence right there. Definitely, I think these tools are going to help us become more productive in the areas where they are good candidates to help.
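For flavor, a sketch of the kind of snippet an assistant might return for "connect to Kafka using Python". It assumes the third-party kafka-python package; the broker address and topic name are placeholders, not anything from a real system:

```python
def producer_config(bootstrap_servers: str, client_id: str = "demo-app") -> dict:
    """Build the keyword arguments passed to KafkaProducer."""
    return {
        "bootstrap_servers": bootstrap_servers,
        "client_id": client_id,
        "acks": "all",   # wait for the full commit before acknowledging
        "retries": 3,    # retry transient broker errors
    }

def send_message(topic: str, value: str,
                 bootstrap_servers: str = "localhost:9092") -> None:
    # Imported inside the function so the sketch loads without the library installed.
    from kafka import KafkaProducer  # pip install kafka-python

    producer = KafkaProducer(**producer_config(bootstrap_servers))
    producer.send(topic, value.encode("utf-8"))
    producer.flush()
    producer.close()

# Usage (requires a running broker): send_message("events", "hello")
```

This is exactly the "reuse" Srini describes: boilerplate you would once have copied from a library's README, now generated on demand, and still needing the same review you would give any borrowed code.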

Wes Reisz: Daniel, when people ask me about gen AI and large language models, I go back to how my car helps me drive these days. That, in my opinion, is what these tools are doing for us: they're helping developers drive our code. It's augmenting what we're doing. I think there's quite a bit of hype out there about what's happening. At the base of it, the core of what we do as software developers is think about problems and solve them. Gen AI is not replacing how we think about problems. We still need to understand and be able to solve these problems. What it's doing is really just helping us work at a higher level of abstraction. It's augmenting; I think Thomas just mentioned that. This is really what gen AI is about, in my opinion. Now, that's not to say there aren't some amazing use cases out there, but it's a higher level of abstraction.

Thomas Betts: Copilot is for developers, but I think the large language models in ChatGPT are useful for other people in the software development landscape. I like seeing program managers and product managers who are trying to figure out how to write better requirements. 

How do I understand what I'm trying to say? Our UX designers are going out and saying, how do I do discovery? What are the questions I should ask? What are some possible design options? It's the rubber duck. I like having the programmer ask a question too. Everybody can benefit from having that assistant. Especially, going back to we're all hybrid, we're remote. I can't just spin my chair around and ask the guy next to me or the woman over there or find someone on Slack because I'm working at 2:00 in the morning. It doesn't sleep. I can ask ChatGPT anytime to help me out with something. It's always available. If you find those good use cases, don't replace my job, but augment it. It can augment everybody's job in some different way. I think that makes software development accelerate a little bit better.

Wes Reisz: The rubber duck is a great example, ChatGPT truly is, it's a great rubber duck.

Shane Hastie: I've certainly seen a lot of that happening in the product management space, in UX, in that design space. There are dedicated tools now, with ChatGPT being the true generalist tool. One I've been using quite a bit is Perplexity, because one of the great things for me as a researcher is that it gives you its sources. It tells you where it found this, and you can then make a value judgment about whether it is a credible source or not. Then it's not a complete black box.

Daniel Bryant: Fantastic. One thing that touches on something you said, Thomas, as well: I was actually reading a bunch of newsletters, catching up on everyone's predictions for the year, and Ed Sim, a Boldstart VC, said AI moves beyond GitHub Copilot and coding to testing, ops, and AIOps. I know we all hate the AIOps term, but it's something you said there, Shane, that made me think of that too, in terms of referencing. As we move into the more operational space, you've got to be right pretty much all the time.

You've also got to say, the reason why I think these things are going wrong in production or the reason why I think you should look here is because of dot, dot, dot. I do think taking the next step for a lot of operational burdens being eased by AI, we are going to need to get better at explaining the actions and actually referencing why the system recommends taking the following actions.

Wes Reisz: Daniel, I think it's more than even just a nice-to-have; it's a legal requirement. If you think about GDPR in the EU, the general data protection regulations that came out a few years ago, it talks specifically about machine learning models. One of its requirements is explainability. When we're using LLMs – setting aside for a second where the data comes from and how it is collected, and there's a whole range of areas around that – explainability is something GDPR requires these systems to have.

How are cloud modernization efforts proceeding? Is everyone cloud native now? [17:48]

Daniel Bryant: Switching gears a little bit, Wes, when you and I last caught up in QCon SF and KubeCon Chicago as well, you mentioned that you've been doing a lot more work in the cloud modernization space. What are you seeing in this space, and what challenges are you running into?

Wes Reisz: Thanks, Daniel. Quite a bit of the work that I've done over the last year has been in the space of enterprise cloud modernization. I'll tell you what's been interesting. We go to QCons, we get into an echo chamber, and we think cloud is a foregone conclusion, that everybody is there. I've found over the last couple of years that there are quite a few shops out there that are still adopting, migrating, and moving into a cloud. I guess the first thing to point out is that when we talk about the cloud, it's not really a destination. Cloud is more of a mindset, a way of thinking. If you look at the CNCF ecosystem, it really has nothing to do with whether a particular set of software is running on a certain cloud service provider.

It has nothing to do with that. It has to do with the mindset of how we build software. I think Joe Beda, one of the creators of Kubernetes, defined the cloud operating model as self-service, elastic, and API-driven. Those were the characteristics of a cloud. When we talk about cloud migration and modernization, we should first recognize that it's not a destination, not a location; it's a way of thinking. That way of thinking means making things ephemeral, making things very elastic, and leveraging global scale. It doesn't necessarily involve a particular location from a CSP. One of the things I found really successful as we talked about cloud modernization this year is revisiting the seven Rs that I think AWS originally came out with.

If you remember, those were retire, replatform, refactor, rearchitect, and so on. I think those are fantastic, but a lot of what I've been talking about this year goes a little beyond just rearchitecting, to reimagining and resetting. What I mean by that is it's not always enough to rearchitect an application. Sometimes you have to really reimagine it.

If you're going from a database that runs in your data center to a global-scale database – something like AWS Aurora, Azure's Cosmos DB, or even Cloudflare's D1, an edge-based relational database designed for Cloudflare Workers – it changes how you think about databases. I think it requires a bit more of a reimagining of your system: how you do DR, how you think about things like blue/green. All of that changes when you start talking about a global-scale database. And it's not just there; it has to do with how you incorporate things like serverless and the myriad of other cloud functions.

Again, it's a trap just to think about the cloud as a location. While I just mentioned serverless, and it certainly originated from cloud service providers, you don't have to have a cloud itself to be able to operate in a cloud native way. It's critical to make sure we understand that cloud is a way of thinking, not a destination. Reimagining is one thing I think is really important when you start considering cloud migrations and modernization work today. Then that last one hearkens back to your first question at the very beginning, where you were asking about Team Topologies.

A lot of times, when you're making that move into a more cloud-native ecosystem, there's a bit of a cultural shift that has to happen. That cultural reset is, I think, super important. If you don't do that reset, then you carry over practices from before you were cloud native, and I think that causes a lot of antipatterns. Two things that I really think about in this space today are reimagining and resetting when it comes to how you do cloud modernizations.

Daniel Bryant: Big plus one to everything you said there, Wes. I'm definitely seeing this shift, if you like, with the KubeCons in particular. I've been going to KubeCon pretty much since version zero. I was involved in organizing the London KubeCon back in the day – shout out to Joseph Jacks and the crew that put that one together. The evolution has been very much from the really innovative, tech-focused type of folks, like myself – we're all in that space to some degree – towards the late adopters now. The late adopters, the late majority, have a different set of problems. They just want to get stuff done. They're not perhaps so interested in the latest tech. I'm super bullish on things like eBPF and Wasm.

If you're into the cloud space, those technologies are super exciting – Envoy under the hood; I've done a lot of work in that space as well – but I think now people are really coming with actual business problems. Folks with large IT estates are just saying, how do I modernize incrementally? You want to avoid the big bang. As Martin Fowler says, if you do a big bang migration, typically, what you get is a big bang. That's never a good look. Folks are also looking at things like cost these days, doing more with less, and that's a big factor.

Monoliths vs microservices vs nanoservices [23:13]

Daniel Bryant: Related to this, Thomas, I wanted to get your thoughts. Something that was very popular on InfoQ this year is the whole monolith vs. microservices vs. nanoservices/functions-as-a-service debate. Pick your name of choice.

In fact, one of our most popular articles on InfoQ last year was by Rafal Gancarz – I have to name-check Rafal, as he has done a bunch of great work of late – about how Prime Video switched from serverless to EC2 and ECS to save costs. They talked about how they moved from a microservices architecture to a monolith. Now, obviously, there's a lot of hidden context behind that title, and Rafal did a great job of diving deep. I know many other folks have discussed this online. Adrian Cockcroft, hat tip to him, put a very sensible spin on some of the other spin coming out of this discussion.

I'd love to know where you stand on this stuff, Thomas. What's best, microservices, monolith? Obviously, it depends, right?

Thomas Betts: This isn't a new thing. I remember back in QCon 2020, one of my top articles of the year was, Our Journey to Microservices and Back Again.

Daniel Bryant: Yes, I remember that. Segment, wasn't it?

Thomas Betts: Segment, yep. You hear these stories of, we're adopting microservices because it'll solve all our problems, or we're building a new system, and we will start with microservices. I always have in my head the Martin Fowler article, you must be this tall to ride microservices. If you're not willing to take on this level of operational overhead and burden and manage a distributed system, why would you add that to the complexity? Software is hard enough as it is, so what's the right size? 

We don't like monoliths because they're big balls of mud. Well, a monolith doesn't have to be a big ball of mud if it is well-structured and organized. That comes down to good software hygiene: structure your code well so that it's maintainable, readable, and sustainable. That's going to make it easier to modify over time.

Microservices were: I want this little bit where I can understand all of my code and what this one service does. Well, that's great. Do I need to have a distributed system to get that benefit? No. People are now, I think, finding the right-size services and looking at what's the right thing. The story that you pointed to that Rafal [Gancarz] wrote was: we switched from functions as a service to a monolith and it saved us money. It's like, well, no one said this was going to save you money, but maybe functions were the right decision at the time. Maybe you were able to get up to speed really quickly because you wrote just what you needed.

Getting a product in the market is always better than having a product on the shelf that you are still working on for another two years. You're not making any money, so you can't save any money.

Then there's being able to evolve that over time, and I think the thing we sometimes struggle with as architects is: when can I say that it's the right decision to drastically change a big part of my architecture? I'm going to switch from functions back to a monolith. I'm going to go to EC2. Looking at all those factors and all those viewpoints: what is it actually costing us to maintain this system, from development headaches to the cost of ownership? All those things come into play. Looking at that and saying, does this architectural decision that we made two years ago still make sense? If it does, stay with it, but things change over time.

The criteria you based your decision on a year, two years, or three years ago made it the right decision given the information you had, but it might not be the right decision now. Is it worth it to make that switch? Look at your situation. The architect's answer is always: it depends. ChatGPT's answer: it depends. That's what it is. That's the Thomas GPT.

Srini Penchikala: You bring up a lot of good points there, Thomas. I actually published a couple of articles on cloud-native architecture adoption, and this is exactly what I talk about in them. Microservices, monoliths, and serverless architectures are all patterns; they all have areas in which they add value, and they also have their own limitations, like any design pattern. If you don't use one in the right place, you are only going to see the downside of that solution.

Also, as you said, architecture is contextual as well as temporal. What worked five years ago may not be the right solution now. That's the evolution of architecture. Again, I think all of these are good solutions to the right problems. You just have to figure out what works for you.

Wes Reisz: I think we've all seen it. It's been the age-old thing with software: premature optimization is the root of all evil. Premature microservices are kind of the root of modern-day architecture evil. Look, monoliths are fine. They solve a set of problems. Thomas, you referenced Martin Fowler's [Microservice Prerequisites] article a while ago; the example I always use is to think about a simple stack trace. When you're in a monolith and you have an error, you have a stack trace. You can track it down. You get an error thrown in an exception, and you can walk the stack trace to see where it came from. You can help troubleshoot the application.

Now, take that same problem and put it into a microservices environment. All of a sudden, you have a distributed stack trace that you have to be able to pull together. If you don't have the right observability in place to assemble that distributed stack trace, how do you get back to the basics of the stack trace you had before? You must have a certain level of observability just to be able to successfully operate the system. If you are not there, building microservices is a huge risk. Microservices solve a problem; you have to make sure you're solving the right problem.
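The distributed "stack trace" Wes describes depends on propagating a shared trace ID through every downstream call so a backend can stitch the pieces back together. Here is a minimal, runnable sketch of that idea; the service names and in-process calls are hypothetical stand-ins for real networked services, where the ID would travel in a header such as the W3C `traceparent`.

```python
import uuid

# In-memory log store standing in for a real observability backend.
LOG = []

def log(service, trace_id, message):
    # Every log line carries the trace ID so the distributed
    # "stack trace" can be reassembled later.
    LOG.append({"service": service, "trace_id": trace_id, "msg": message})

def inventory_service(trace_id, sku):
    log("inventory", trace_id, f"checking stock for {sku}")
    raise RuntimeError("stock database unreachable")

def order_service(trace_id, sku):
    log("orders", trace_id, f"placing order for {sku}")
    inventory_service(trace_id, sku)  # downstream call reuses the same ID

def handle_request(sku):
    trace_id = uuid.uuid4().hex  # generated once at the edge
    try:
        order_service(trace_id, sku)
    except RuntimeError as exc:
        log("gateway", trace_id, f"request failed: {exc}")
    # Reassemble the cross-service trace by filtering on the trace ID.
    return [entry for entry in LOG if entry["trace_id"] == trace_id]

trace = handle_request("widget-42")
for entry in trace:
    print(entry["service"], "-", entry["msg"])
```

Without that single correlating ID, the gateway's error and the inventory service's failure are unrelated log lines; with it, the failure reads top to bottom like a monolith's stack trace.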

Thomas Betts: The answer is rarely one of the extremes, and I think people hear, it's going to be microservices or it's going to be a monolith. No, there's a spectrum. Find what works for you and where you are on that spectrum. It's all about trade-offs. Like you said, every decision comes with pros and cons. Find the trade-offs, make the right decision, and evaluate it again over time.

Daniel Bryant: I think I'm quoting you, actually, Thomas. I remember a conversation with some folks recently around whether we can plug the Git history and our architecture decision records into something like ChatGPT, to Shane's point about referencing and all this kind of stuff – that could be fascinating. As in, hey, we made this choice three months ago. The technology has changed. That's something we're going to cover later on, Srini: retrieval-augmented generation. Look at what's currently going on in the landscape. Should we move to microservices? Should we move to the monolith? I think that is going to be a fascinating area of architect tooling, potentially. I did want to shift gears a little bit and move on to sustainability.

Are we seeing increased thoughts and actions related to sustainability? [29:11]

I wanted to frame this bit, I'm sure we've all got some really important opinions in this space, but I wanted to frame it around a recent popular piece again by Rafal, shout out to Rafal again, on "The Frugal Architect", where AWS promotes cost awareness. This is Dr. Werner Vogels' keynote at the AWS re:Invent conference. I'm sure many of us working in the cloud space have been thinking about this for some time. There's a fascinating podcast, actually, with Roi Ravhon, talking about FinOps. That's definitely become a thing, with the FinOps Foundation behind the scenes now. I'd love to get folks' thoughts on sustainability. Perhaps Shane, I'll start with you: how important do you think that is in the bigger picture? Then we'll move on to Thomas, I know you've got some opinions on the cloud stuff, and Srini too.

Shane Hastie: As long as it's not greenwashing and veneer, it really matters. Our industry generates as much carbon as the airline industry. We could do better, we should do better. Taking a measured approach, a frugal approach: one, it's good for our organizations. I think it's going to be better for our customers and should be better for the planet. It's not just in the architecture; it's fundamentally having that frugal approach. 

I'm thinking of one of my recent podcasts, with Jason Friesen on frugal innovation saving lives: building low-impact technology for emergency response that has to cope in environments where all of the infrastructure is degraded. After an earthquake, after a fire, how do we ensure that the infrastructure we're using enables us to actually do that life-saving stuff? Now, would that have a lower carbon impact? Yes. Would it have a lower cost to run? Probably. Will the UI be as intuitive? Probably not.

Daniel Bryant: All about trade-offs. Thomas, any thoughts? I know you're busy in the cloud space a lot.

Thomas Betts: This calls back to QCon London last year. There was a sustainability track with great presentations. Holly Cummins had one on how to shut down cloud zombies. "LightSwitchOps" was her mantra. Just turn it off. 

There are a lot of resources that are just left running because it was really easy to spin them up, and we don't turn them off. Is it going to solve all the problems? No, but it's the same thing as the taxi that's not allowed to sit idling outside of the hotel all day long; there are no-idle rules. The same thing goes for our software. We should be able to turn stuff off. 

Same thing on that track, Adrian Cockcroft gave a good overview of cloud provider sustainability. What I appreciated: it was a dense talk, but it went into some of the complex problems we're trying to solve. Because, like you mentioned, Shane, is it going to reduce carbon? Yeah. Is it going to save costs? Yeah.

We tend to use those two as our only proxies for each other. I spent less money on AWS, so I guess I was more green. Getting actual data on how much carbon was used is difficult, and we're working on it, and on the different levels where you measure it. Your company uses this, but your suppliers have to be considered, or you're not running your own data center, but you're in the cloud, so you're still accounting for the carbon footprint of your software. Is that software running in Virginia, where everything is on coal, or is it running somewhere with more green energy? Where you distribute your software does make a difference. I think we talked about this last year, that it's a trend we want to see, and some people are talking about it, but it's not pushed up to the forefront.

I don't see, here's your carbon footprint on your monthly bill, and pushing that down to the teams. We've talked about how we get the costs associated back to the development team. One development team knows, hey, you spent $2,000 last month, but the average team is only spending 500 bucks. What are you doing? Getting those reports is getting easier. I know we had platform engineering and platforms on the stack somewhere to talk about, Daniel. 

I think the next level, and this isn't going to happen tomorrow, but I think we're going to see those little green metrics of how green your software is. I want to believe that somehow AI is going to help because it's a complex problem, and sometimes if you can throw the computer at it, it can figure out an answer that would be hard for us to come to. We'll see what the next year or two evolves for those things.
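The regional difference Thomas mentions comes down to simple arithmetic: emissions are roughly the energy a workload consumes multiplied by the carbon intensity of the local grid. A small sketch, where the region names and gCO2e/kWh figures are illustrative placeholders rather than real published numbers:

```python
# Illustrative grid carbon intensities in gCO2e per kWh -- placeholder
# values for the sketch, not real published figures.
GRID_INTENSITY = {
    "us-east-coal-heavy": 700,
    "eu-north-hydro": 30,
}

def monthly_carbon_kg(avg_power_watts, hours, region):
    """Estimate emissions as energy used times regional grid intensity."""
    energy_kwh = avg_power_watts * hours / 1000
    grams = energy_kwh * GRID_INTENSITY[region]
    return grams / 1000  # kilograms of CO2e

# The same 200 W workload running 24x7 for a 30-day month:
hours = 24 * 30
dirty = monthly_carbon_kg(200, hours, "us-east-coal-heavy")
clean = monthly_carbon_kg(200, hours, "eu-north-hydro")
print(f"coal-heavy region: {dirty:.1f} kg CO2e")
print(f"hydro region:      {clean:.1f} kg CO2e")
```

Identical software, identical cost, wildly different footprint: this is the kind of per-team "green metric" that could sit next to the dollar figure on a monthly bill.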

Srini Penchikala: Thomas, along those same lines, maybe we can track this consumption as a debt type of thing, like how we have technical debt, and then track it per application or software component and see which ones are causing more or less green-computing impact. It'll be good to do that.

Daniel Bryant: Nice idea, Srini. If we want to talk about it, it's often the training of the models that gets called out as not very green, whereas inference might be pretty good. You may do that on the edge or whatever. I think that's something to bear in mind as we get more and more reliant on these models and we're training ChatGPT-5 or whatever. It's crazy money and crazy resources and crazy carbon, in that way.

Srini Penchikala: It's only going to get worse, in a sense.

Software architecture + data engineering [33:57]

Daniel Bryant: Switching gears perhaps a little bit. Thomas, in our pre-show notes, you mentioned architecture is increasingly about data pipelines, ML models, and systems that depend on them. As I read that, I was like, yep, 100%. I was reading some other things saying AI is evolving from narrow tasks like writing text to enterprise workflow and automation. I'm from the days of business process modeling and all that kind of stuff. I'm desperate to see that automated. Do you know what I mean? Because I've wasted far too much of my life with the ML models and BPM models and things, as much as it's super important. I did like where you were going there, so I'd love to dive into that a bit more. Double click on what you mean by architecture increasingly moving towards data.

Thomas Betts: I think we've talked about this last year. I looked at the notes, and it was like, architecture plus data. I think now it's not just the data pipelines that are being designed in, but the ML models and the whole workflow, you may have had your data analytics sitting off to the side like, here's our operational system, and everything goes with data warehouse, and we analyze the data and sitting over there, all that stuff. 

Even some of the ML generation is being brought in as a first-class citizen, part of our product, part of our system we have to have, not a nice little add-on. That means all of the ilities that architects care about, like sustainability, redundancy, and fault tolerance, I now need on my data pipelines. It's no longer a "nice to have" where, if it turns off tonight, it'll be fine, we'll fix it, it'll run tomorrow.

No, we switch from batch processing to stream processing, and all that stuff needs to be up and running or our system as a whole isn't operational. Those design decisions are now coming into play for all the data. I think what shows this the most: in the last six months, I feel like for half of the news articles that have come up for InfoQ, Srini and I have had to discuss, "Which queue does this fit under?" Is this architecture design with a slice of ML and AI, or is it ML with an architecture flavor? 

You look at, well, are they talking about architectural design decisions? How did we factor these things in? Or is it more about, here are the models and how we're using them? Sometimes those articles from the news stories really overlap both, and I think that's just going to keep happening. Those things are going to be more and more, “You got your chocolate in my peanut butter”, and you can't separate them.

Daniel Bryant: Nice. Great analogy. Srini, I know you did a lot of very interesting work around the AI and ML trend report for InfoQ. We all do trend reports; I'd say check out the culture and methods and the architecture and design trend reports. We have Java, we have cloud as well. Srini, I wanted to definitely double click on all the good work you did around the AI and ML trend report. Plus, to Thomas's point, there's an increasing blur across all of our topics in InfoQ. There's this bleed of cloud and data engineering and AI. Even culture and methods, in terms of "Is it safe to do that?", is blending into this. Gradually, it's all coming together. 

InfoQ AI and ML trend report summary [36:14]

Srini, anyway, I'd love to dive into what your thoughts were about the AI and ML trend report. Were there some key things that jumped out for you?

Srini Penchikala: Thanks, Daniel. Definitely, as Thomas mentioned, data is the foundation for anything, including AI and all the hype that's going on. I would recommend our viewers and listeners check out the AI and ML trends podcast from last year; we posted it in September, and it has more details on what's happening. To summarize, 2023 was the year of ChatGPT, of course, and generative AI. We heard about so many different use cases where people are using ChatGPT. To me, most of those use cases are still what I call "hello, world" cases. 

When you work for a real company, you cannot put all your data into the cloud and let ChatGPT or OpenAI train on that. You still have to have some checks and balances. That's where RAG, the retrieval-augmented generation solution, comes in to help, where you can augment the models with your own private information to get more domain-specific predictions for your company.

That's going to be a big thing, I guess, for this year, 2024, along with ChatGPT, as you said, and other innovations that are going to happen. Again, along with AI and generative AI, the same old related topics will definitely get more attention. Responsible AI has been a big deal. Now, responsible generative AI will also be a big deal. How can we make these gen AI solutions more ethical, with less bias and fewer hallucinations, all the good stuff? Definitely, I think that's going to be another big thing in this space. Also, security is going to be a big topic. How can we use these applications with the correct security and privacy in place? All of those are going to come back in a way that's specific to LLMs.

The other one is LLMOps. I know you mentioned AIOps and MLOps, Daniel. LLMOps will be another big topic that will be needed to operate these large language models in an enterprise setting. How can we take these models into production? How can we scale them up, and how can we use them in an energy-efficient manner, like we talked about? How can we make LLMs more green? All of those will be getting more attention. 

On the data side, if I can quickly talk about that, again, Thomas already mentioned data streaming is still a big component. Stream processing of streaming data is still a core part of the modern data architecture stack. That will continue to grow and provide more real-time solutions for companies. The innovations in LLMs and gen AI are actually leading to some new innovative trends and products, like vector databases. For the LLMs to work, you need to present the data in a specific format called vector embeddings.

There are some new dedicated databases that are used to manage this data, like Pinecone and Milvus. They're getting a lot of attention in this space. It'll be interesting to see how they evolve. Then the other one is the cloud. 

Cloud is always the foundation for any IT solution. I'm seeing multi-cloud usage as a continuing trend. Not so much new, but continuing to become more popular, in a way, if you have multiple different types of use cases. For example, a data analytics use case versus a use case that involves data but isn't an analytics use case. You can actually use different cloud vendors: cloud vendor X for analytics use cases and cloud vendor Y for the other use cases, so you don't have to depend on one provider for everything, and you can take advantage of the best solution from each of these cloud vendors. That should be all. Basically, again, data will play a prominent role, whether it's in architecture or AI.
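The vector embeddings Srini describes turn text into points in a vector space, and a vector database answers queries by finding the nearest stored vectors. A toy sketch of that lookup, where the three-dimensional "embeddings" are hand-made for illustration; real systems get high-dimensional vectors from an embedding model and index millions of them:

```python
import math

def cosine_similarity(a, b):
    # Standard cosine similarity: dot product over the product of norms.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Toy "embeddings" keyed by document name. A vector database plays the
# role of this dict, plus an index for fast approximate nearest-neighbor
# search at scale.
documents = {
    "refund policy":  [0.9, 0.1, 0.0],
    "shipping times": [0.1, 0.9, 0.1],
    "return window":  [0.8, 0.2, 0.1],
}

def nearest(query_vector, k=2):
    scored = sorted(
        documents.items(),
        key=lambda item: cosine_similarity(query_vector, item[1]),
        reverse=True,
    )
    return [name for name, _ in scored[:k]]

# A query embedded near the "returns" region of the space should
# retrieve the refund and return documents, not shipping.
print(nearest([0.85, 0.15, 0.05]))
```

The retrieved documents are then what gets handed to the LLM as context, which is the link between vector databases and the RAG pattern discussed earlier.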

Responsible and ethical AI: Everyone’s responsibility? [40:10]

Daniel Bryant: That's what I'm taking away from this. Shane, I'd love to get your thoughts. Something, you mentioned when you really tripped my wire there was the responsible AI, ethical AI. Shane, I know you've got the lens of the product mindset, and are folks actually thinking about this? Should they be thinking about this? How are they thinking about this when building products?

Shane Hastie: Responsible and ethical, I like to think that yes, we are. Sometimes I think reality proves me wrong. InfoQ has been really one of the organizations that have asked us to hold ourselves to account as an industry in terms of ethical behavior. One of the points that we've got in there is crypto. Well, yeah, what has happened in that space? Just because we can, doesn't mean we should. When we do, how do we make sure that we're doing these things "right"? Big, big questions that I don't know that we're touching on the answers yet. I like to think that we're moving as an industry in a more ethical direction slowly, but it goes with that leadership thing as well. This is an oil tanker we're turning. It's not a fleet of speedboats.

Thomas Betts: I think what you said there comes down to generational change; that's why it's slow. Sometimes you just have to wait for the next generation to come in, and they've grown up with these as the expectations. For better or for worse, my son is off to college now, and so he's always had a phone. I didn't have a phone in my pocket when I was that age, and that generation is going to be graduating, and they're going to be like, yeah, you always have all the apps. Either you have the mindset of we trust everybody, or we trust nobody. That's just the world they live in. 

They grew up with Google. The generation that's in leadership right now, that's leaving, is still the generation that had none of this when they started their careers.

Yes, in good software companies, we've seen people evolve, but sometimes you just need to wait for that generational shift. Look at things like annual required training on security standards. There's always some ethics component because I tend to work in industries that have ethical requirements. But software ethics itself isn't discussed. We don't say you need to take your software ethics training this year. We take the ethics training like, don't embezzle money. Okay, don't do that. How do I write software that isn't biased? That's not a thing that everyone just accepts as a norm. I want to believe that we're getting there, but as Shane says, right now, there's just too much data that proves me wrong.

Srini Penchikala: That should be one of the “ilities” going forward, like responsibility or ethicality, whatever. Because we do need to write solutions that are responsible.

Wes Reisz: I think the bottom line is we have to remain diligent. If you remember, before COVID, there was quite a bit of work that the ACM was doing around the software code of ethics, and it was really about, as we talked about LLMs and things like that just a little while ago, as we work at higher and higher levels of abstraction, we're able to do more and more with the amazing resources that are available to us today. How do we make sure that we're respecting privacy and honoring confidentiality? How do we make sure that we're doing no harm? How do we make sure, as we're building systems, as you said, Thomas: just because we can, should we? How are we making sure that we're doing things safely and correctly? I think that's an important part of dealing with software ethics. You're right, Thomas. We haven't embraced this as part of what we do every day, but we really have to continue to remain diligent and make sure we're doing the right thing.

Thomas Betts: I do want to make one callback to what Srini was saying about retrieval-augmented generation, and I think even when Shane was talking about citing your sources, was it Perplexity? In one of the podcasts I did just a couple of months ago, Pamela Fox from Microsoft talked about how, if you want to get started with this, there are some sample apps on Azure. 

Microsoft has, here's how to get started using a large language model for your enterprise data. I think that's definitely going to be a big thing that we're starting to see more of. Either people are going to build it themselves, or it'll be an off-the-shelf product. I think it was Microsoft 365, you can now pay to have all of your OneDrive and SharePoint documents searchable. What does that mean? Then what she pointed out is they're using ChatGPT both to help do the search and also to translate, so that it asks a better question, because people aren't good at asking it.

When you type in, here's what I'm looking for, you've already restricted your dataset to the things that I've put in our enterprise data store. It's figuring out, you ask this, I know how to ask the question in computer speak. They're using the LLM as the input to ask a better question of the search engine and then formulate it as the output back into, here's a human-readable answer and here are my sources because it knew which PDFs it got them from. That also got into things like, because I was talking like the idiot that I am, she explained vector databases and what [retrieval augmented generation] (RAG) is, and all those things. That podcast had a lot of, if you're just new to this and you're hearing these terms for the first time, some good beginner level, what did these things mean? If you want to dive into it this year, that's probably a good place to start.
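The flow Thomas describes — use the LLM to rewrite the question, search only the enterprise corpus, then generate an answer that cites its sources — can be sketched end to end. The two `llm_*` functions below are stand-ins for real model calls, and the document names and text are invented, so this is a shape-of-the-pipeline sketch, not any vendor's implementation:

```python
# Hypothetical enterprise corpus: document name -> text.
CORPUS = {
    "hr-handbook.pdf": "Employees accrue 20 days of paid leave per year.",
    "it-policy.pdf": "Laptops must use full-disk encryption.",
}

def llm_rewrite_query(user_question):
    # Stand-in for the LLM turning a conversational question into
    # search terms ("computer speak").
    return user_question.lower().replace("how much", "").strip("? ")

def search(terms):
    # Keyword search restricted to the enterprise corpus -- the model
    # never sees documents outside this store.
    query_words = set(terms.split())
    return {
        doc: text for doc, text in CORPUS.items()
        if query_words & set(text.lower().replace(".", "").split())
    }

def llm_answer(question, sources):
    # Stand-in for the generation step: answer *from* the retrieved
    # text and cite which documents it came from.
    cited = ", ".join(sources)
    return f"Based on {cited}: " + " ".join(sources.values())

def ask(question):
    terms = llm_rewrite_query(question)   # LLM improves the query
    sources = search(terms)               # retrieval over private data
    return llm_answer(question, sources)  # LLM formats answer + sources

print(ask("How much paid leave do I get?"))
```

The key property is that the answer can only come from the retrieved documents, which is why the system can name the exact PDFs its answer was drawn from.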

The future of open source licensing [45:19]

Daniel Bryant: Fantastic. I've taken a bunch of notes as we've been going along for the show notes, and I'll be sure to link the podcasts we've all mentioned. Excellent stuff. One thing we haven't touched on, which I think is quite important based on what we heard last year, is OSS licensing. We've seen several changes in OSS licensing over the last few years. I do think it ties into the whole ethics thing, Shane, that you've touched on. As an architect, I've got to think about this stuff now. Many of us have for many years, but now it's really top of mind. I'd love to get your thoughts, any of you, on where this OSS licensing change is going. Is it the end of open source? Is it a new dawn? Getting your thoughts on that would be much appreciated.

Thomas Betts: Giving the doom and gloom answer always seems to get clicks on your post, or likes. "OSS is dead" would be a great headline, but it's not dead. I still go to the XKCD cartoon of we're all built on this one little piece of software from 1992. That's funny, but true, in some ways. The software bill of materials is a concept that we're starting to see. I think this started from concerns about vulnerabilities. That zero-day hack went into some repo, some package that you use, and everybody uses it, and everybody automatically gets the latest version. Then that little change ripples across. There have been a few different stories of this over the years, of how you think that you've got this nice stable set of dependencies, but you don't actually know what all those dependencies are.

When we were in a closed-source environment, you wrote all your code inside your company. Yeah, you owned all of that. That's not the world we live in now. If you asked me what every single line of code is in the software that I'm running, I couldn't tell you, and I don't think anybody could. If you're pulling in five npm packages or seven NuGet packages, they have a trail of dependencies that just falls out. All of a sudden, you've got a lot of things. I think you're also talking about sustainability. People who are writing open-source software, how do they make a living? Is this just a side project? If your software gets successful, at what point do you start saying, I have to turn this into a business and quit my day job, and how do I make money supporting this?

We're looking at some of the big companies paying to say, hey, we'll give you money, or we'll give you people to maintain your open-source projects. There are a few different funding models, but it'd be nice if we saw more companies that had some easy way to say, we're going to support this developer who writes critical software, because it's in all of our systems. You're getting it for free. You're a multimillion-dollar company, why are you using free stuff? Just because it's an easy npm dependency. How can you make it easier for us to support those? I don't know if we have a good funding model set up. It's not the app store, where you click a little 99-cent thing and buy another NuGet package.
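The "trail of dependencies that just falls out" is exactly the inventory an SBOM makes explicit: a few direct packages expand into a much larger transitive set. A minimal sketch of that expansion, with an entirely hypothetical dependency graph standing in for a real lockfile:

```python
# Hypothetical dependency graph: package -> direct dependencies.
DEPENDENCIES = {
    "my-app": ["web-framework", "http-client"],
    "web-framework": ["template-engine", "http-client"],
    "http-client": ["tls-lib"],
    "template-engine": [],
    "tls-lib": [],
}

def flatten(package, seen=None):
    """Walk the graph to list every transitive dependency exactly once --
    the full inventory an SBOM is meant to make explicit."""
    if seen is None:
        seen = set()
    for dep in DEPENDENCIES[package]:
        if dep not in seen:
            seen.add(dep)
            flatten(dep, seen)
    return sorted(seen)

# Two direct dependencies turn into four actual packages on the machine.
print(flatten("my-app"))
```

SBOM tooling does this walk over real lockfiles and records versions, licenses, and hashes for each node, which is what lets you answer "are we affected?" when the next zero-day lands in a package three levels down.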

Srini Penchikala: Thomas, I agree. Open source is not dead. Also, free and open source are not synonyms for me. Open source is more than free; it's not just that it doesn't cost anything. The other thing is, I was going to highlight this earlier: the reason why I say open source is not dead is we have a session at QCon London coming up in April on open source frameworks for LLM applications. It should be a very interesting topic. Open source is everywhere; even the LLM space is getting it. It should be very interesting to see how that evolves. Definitely, I'm a big open source fan myself. I use a lot of frameworks. I have contributed to a couple of them in the past, so I definitely have a lot of respect for that.

Wes Reisz: I don't think any of us would say that open source is dead, but I do think that there are challenges when companies are building software that other companies build on top of that software, but then add their own software to it that is not open source, that don't contribute upstream and then even begins to compete with the original software. Then people who built the original open-source software begin to question whether that makes sense. These are challenges that have to be addressed, and I think it goes back to that software ethics question we talked about before. What is ethical, what is right and how do we approach these types of problems?

Daniel Bryant: Thanks, everyone. Just to put a pin in that for the listeners, there's a series of great articles on InfoQ if you want to know more about this. Got to give a hat tip to Renato Losio, who recently wrote a news article about Sentry introducing the non-open-source Functional Source License. I remember there was a bit of confusion around Sentry; they reached out to us. Initially, they called the license open source, and then they later backtracked. I'm not picking on Sentry, because we've seen this with many different vendors over the last couple of years, and everyone has got to make money. I do get it; it can be a complicated situation. 

Renato, in that article, linked a bunch of other interesting sources. I know I often look to the OSS Capital folks, like JJ, whom I've already mentioned. Heather Meeker is really good. She's got fantastic books and articles. You'll want to go find more of her stuff.

You can learn a lot more about how these recent changes to some of the OSS and non-OSS licenses may impact your role as an architect. I would then say, if folks want a practical jumping-off point, knowing what's actually in your software, as in the thing you're building, is really valuable. A shameless plug: as Thomas did, I chatted with Tracy Miranda recently on the podcast, and she did a deep dive into SBOMs and SLSA, one of the frameworks around this. That's a great jumping-off point, because I learned a bunch of stuff. There are different SBOM formats, and they have strengths and weaknesses. She was basically saying just get started; it's a great thing to do. You've got to take the plunge.

I thoroughly encourage folks to check out Tracy's podcast. I'll link to that, as with everything else, in the show notes too. 

What are our software delivery predictions for 2024? [50:23]

As we're coming to an end, I'd love to get a prediction for next year from each of you. As part of next year's podcast, we'll definitely have to look back and check how good our predictions were. I think we've done that already as we've been going through today's discussion. I'll just do the top-of-screen order again. Thomas, to put you on the spot, what is your prediction for 2024? What's the big tech, the big approach, the big change in leadership? Anything you want.

Thomas Betts: When we recorded this [InfoQ end-of-year 2022 edition of the podcast] a year ago, we recorded it a week or two after ChatGPT came out.

Daniel Bryant: Yes.

Thomas Betts: I think last year, it was like, are the robots going to take my job? I'm still here. We're all still employed. That didn't come to pass. I do think, though, that we're getting past that initial hype cycle of these things being so amazing. Yes, the ChatGPT adoption of a million users in one day, or whatever the crazy stat was, that's a hype cycle. It has to calm down. The new models are getting better, they're evolving. People are learning what they're actually good for. I think we're going to start seeing them show up in all the ways we've already discussed. They're going to show up in ways that make everybody's job a little better, and we'll start seeing those specialized tools: this is for product managers, this is Copilot and whatever the tools for developers are, this is for UX, and this is for enterprise search. We'll start seeing that just becoming expected.

People talk about ChatGPT and large language models, but we don't talk about how search engines work, we just use them. At some point, we're going to get to, yeah, there's just this AI under the covers. I think when we stop talking about AI, it's ready for the next thing of what AI actually means. I'd like us to stop saying AI in every conversation because we don't have it yet. We're not going to have artificial general intelligence in the next year, I predict that. We're not reaching the singularity. I think that's my vague prediction for next year: we're going to have some tool that is not so revolutionary for everybody, but it moves fast from the hype cycle to actual products where we're like, that's a good thing. I'll start using it.

Shane Hastie: I'll double down on what Thomas is saying there, in terms of tools that leverage this and get beyond the hallucinations: tools that are going to be more focused on specific aspects like product management, UX design, and so forth. There are already some of them out there. 

These are the things that are going to genuinely accelerate the people who are doing this well. I'm still concerned about that gap for the newbie, and I hope that we're going to see ways to bring people up to a level where they can really leverage the tools. That, I think, for me is one of the bigger risks, because it risks creating almost a two-tier system where you've got these experienced folks who are really good and new folks who just can't get in. 

I'm going to say I see the organization culture stuff steadily improving. We're getting better at being ethical. We're getting better at being sustainable. We're getting better at thinking about not just developer experience, but employee experience. Steady, slow shift to "better" for the way we work with and deal with people in our industry. Is it a prediction, or is it a hope?

Wes Reisz: For me, I don't really want to focus on predictions. I don't think any of my predictions on these end-of-the-year podcasts have been particularly accurate. I do want to focus on some things that I'm interested in, things I want to really look at in the new year. If you remember back, I can't remember if it was this past year or the year before, but at one of the QCons, I remember chatting with Marcel van Lohuizen, who created a programming language called CUE. While it's not a general-purpose programming language, it is a programming language that's really focused on data validation, data templating, configuration, and things like that.

I'm hearing a lot in the Kubernetes space that even though there's no direct support for CUE in Kubernetes, it's being used quite a bit because of its roots in Go. Stefan Prodan of Weaveworks actually created a project recently that I saw called Timoni that I want to take a look at; it's a package manager for Kubernetes, powered by CUE and inspired by Helm. The idea of it is to allow you to stop mingling Go templates with YAML like Helm does, and to stop layering in YAML like you do with Kustomize. For this upcoming year, I'm really interested in looking at CUE and some of the products that are coming out because of it, like this Timoni project.

Srini Penchikala: I agree with both Thomas and Shane. Next year, if we are talking too much about AI, that means we are not there yet. I think AI will become transparent, behind the scenes, the very fabric of software development, and it'll start to make everything better. Whether it's, like you guys said, product management, or, I'm hearing, even software like Workday is using AI now. AI is everywhere. I think it'll start to become more invisible and start to add value without being so evident, and that's how it should be. Actually, my daughter just started going to college. She started with chemical engineering as a major, but she recently switched to computer science. She was asking me what kind of jobs there would be in the future. I told her there will be two types of people in the future: AI people who know how to use AI, and there'll be AINT people, ain't people, who don't know what to do.

Also, I just want to make one quick comment on architecture. The only architecture that will stand the test of time is a resilient architecture. Make sure your architecture is resilient in terms of scalability and the ability to switch to different design patterns. One more thing, Daniel, more of a shameless plug, I guess: I recommend our audience check out the AI, ML, and data engineering trends report we published last September. I also hosted an InfoQ Live webinar on ChatGPT and LLMs: what's next for large language models? That was a great discussion as well; we touched on topics like RAG and vector DBs a little bit. For anybody who's new to these AI topics, we have excellent speakers covering them at QCon London in April, so definitely check out the conference website.

Daniel Bryant: It's our podcast, Srini, we can do as many shameless plugs as we like! Many of us will definitely be at QCon London. I also have a shout-out: we're doing a new InfoQ Dev Summit in Boston in June, I believe. If folks like what they're hearing and want to know more, I'm hoping to tag along with that as well. We're trying different formats.

We've got the webinars, we've got the podcasts, and they're all great fun. I think shameless plugs are more than allowed, Srini, so I'll definitely be doing a few along the way. For my predictions, I was going to do the AI one as well, but I think you've all said it perfectly. Something we didn't touch on, outside the AI space and in architecture, is that I'm seeing a lot more composability in products.

My first example is Dapr. I know you did a fantastic podcast with the Dapr folks, Thomas, and I really enjoyed it. Luckily enough, Wes and I met the folks from Diagrid, who are behind a lot of the Dapr work, at KubeCon. It was a great chat with Mark [Fussell], Yaron [Schneider], and the team. I'm just really excited about the composability. A lot of the stuff I built back in my software engineering days, when the cloud was just becoming a thing, I now see being popularized and correctly built with a community behind it, which is fantastic. Everywhere I used to go as a consultant, we'd build the same things: queue connectors, cron jobs in the cloud, all that kind of stuff. I am loving the fact that the Dapr community is coming together to do that. The composability in that space is super interesting.
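As a hedged sketch of that kind of composability, the "cron job in the cloud" becomes a small declarative Dapr component rather than bespoke code: a cron input binding that invokes an endpoint on your application on a schedule (the component name and schedule here are hypothetical, chosen for illustration).

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: scheduled-task   # hypothetical name; Dapr calls the app endpoint matching this name
spec:
  type: bindings.cron
  version: v1
  metadata:
    - name: schedule
      value: "@every 15m"   # fire every 15 minutes
```

The application just exposes an HTTP handler for the binding; the scheduling, retries, and wiring live in the platform, which is the composability being described.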

I'm also seeing a whole bunch in the CI/CD space. I was privileged to talk to Solomon Hykes, one of the original Docker co-founders and also a Dagger co-founder.

He talked about how they're using code to build pipelines with the open source Dagger framework. People can listen to the podcast; it genuinely blew my mind. Hopefully, you can hear me on that podcast going, "I totally get it. Composability, right? This is great." Outside of the AI space, that's what I'm hoping to see more and more of. System Initiative is the same kind of deal, modeling CI pipelines with different abstractions.
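The "pipelines as code" idea can be sketched conceptually. This is not Dagger's actual API, just an illustration (with made-up step names) of the underlying shift: pipeline stages become ordinary, composable functions instead of YAML stanzas, so you can test and reuse them like any other code.

```python
from typing import Callable

# A pipeline step receives a context dict and returns an updated one.
Step = Callable[[dict], dict]

def compose(*steps: Step) -> Step:
    """Chain steps into a single pipeline function."""
    def pipeline(ctx: dict) -> dict:
        for step in steps:
            ctx = step(ctx)
        return ctx
    return pipeline

# Hypothetical steps standing in for real checkout/build/test stages.
def checkout(ctx: dict) -> dict:
    return {**ctx, "source": f"repo at {ctx['ref']}"}

def build(ctx: dict) -> dict:
    return {**ctx, "artifact": f"binary from {ctx['source']}"}

def run_tests(ctx: dict) -> dict:
    return {**ctx, "tested": True}

# Composing the pipeline is just function composition.
ci = compose(checkout, build, run_tests)
result = ci({"ref": "main"})
print(result["artifact"])  # prints: binary from repo at main
```

Because each step is plain code, it can be unit-tested in isolation or recombined into a different pipeline, which is roughly the composability argument made on the podcast.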

It's all complementary to the way I think AI is going. The abstractions will allow us as humans to understand systems and, hopefully, allow AI systems to explain them in an understandable way, rather than it all being spaghetti code or spaghetti pipelines. We've got good abstractions like Dapr, Dagger, System Initiative, and many other frameworks in this space. I'm hoping, as an architect, I can just understand and compose these things in my mind. Outside of AI, that's what I'm excited about next year.

Thomas Betts: I'm so glad you brought those up, because I somehow missed them; they were in my notes earlier, and we forgot to bring them up. Thank you for that. Yes, those two podcasts you did about CI/CD pipelines: I don't like working with pipelines, and I still learned so much. That is a totally new way of thinking about them, and I love hearing that on our podcast, or any other podcast, when you think, "I never considered the problem that way." Those are the innovative ideas that are really going to change things in the next few years.

Daniel Bryant: 100%, Thomas. 100%. We're all getting excited about AI, with good reason, but I do remember that other things are ticking along. To your point, Shane, it's about steering the tanker ship, whether that's culture and methods or composable architectures. Srini, you mentioned data hygiene. AI must be included, 100%, but that tanker ship is going to take a while to turn. As long as we're all thinking about these things from a good viewpoint, thinking about ethics and sustainability, I think all of it weighs into making the best decisions we can, given the context we've got at the current time.

Outro

Fantastic! Thank you once again, all of you, for joining me on this amazing podcast. It's always fun to catch up. Usually, we do it before the Christmas holidays; this time, we're doing it in the New Year. I always look forward to this podcast, and I really appreciate everyone's time. Hopefully, we'll catch all the listeners at the various InfoQ and QCon events coming up soon.

More about our podcasts

You can keep up to date with the podcasts via our RSS feed, and they are available via SoundCloud, Apple Podcasts, Spotify, Overcast, and Google Podcasts. From this page you also have access to our recorded show notes, which have clickable links that will take you directly to that part of the audio.
