
InfoQ AI, ML, and Data Engineering Trends Report - September 2023


In this episode of the podcast, members of the InfoQ editorial staff and friends of InfoQ will be discussing the current trends in the domain of AI, ML and Data Engineering as part of the process of creating our annual trends report. These reports provide InfoQ readers with a high-level overview of the topics to pay attention to and also help the editorial team focus on innovative technologies. In addition to the report and the trends graph available on InfoQ.com, this podcast is a chance to hear our raw conversation and the stories our expert practitioners shared.

Key Takeaways

  • Generative AI, powered by Large Language Models (LLMs) like GPT-3 and GPT-4, has gained significant prominence in the AI and ML industry, with widespread adoption driven by technologies like ChatGPT.
  • Major tech players such as Google and Meta have announced their own generative AI models, indicating the industry's commitment to advancing these technologies.
  • Vector databases and embedding stores are gaining attention due to their role in enhancing observability in generative AI applications.
  • Responsible and ethical AI considerations are on the rise, with calls for stricter safety measures around large language models and an emphasis on improving the lives of all people through AI. 
  • Modern data engineering is shifting towards decentralized and flexible approaches, with the emergence of concepts like Data Mesh, which advocates for federated data platforms partitioned across domains.

Transcript

Introduction

Srini Penchikala: Hey, folks. Before we get into today's podcast, I wanted to share that InfoQ's International Software Development Conference, QCon, will be back in San Francisco from October 2nd through 6th. QCon will share real-world technical talks from innovative senior software development practitioners on applying emerging patterns and practices to address current challenges. Learn more about the conference at qconsf.com. We hope to see you there.

Hello everyone. Welcome to this podcast. Greetings from InfoQ's AI/ML and data engineering team and our special guest. We are recording our podcast for the 2023 trends report. This podcast is part of our annual report to share with our listeners what's happening in the AI/ML and data engineering space. My name is Srini Penchikala. I serve as the lead editor for the AI/ML and data engineering community at InfoQ, and I'll be facilitating our conversation today. We have an excellent panel for today's podcast, with subject matter experts and practitioners in the AI/ML and data engineering areas.

Let's first start with their introductions. I will go around our virtual room and ask the panelists to introduce themselves. We will start with our special guest, Sherin Thomas. Hi, Sherin. Thank you very much for joining us and taking part in this podcast. Would you like to introduce yourself and tell our listeners what you've been working on?

Sherin Thomas: Hey folks, thank you so much for inviting me. I'm so excited to be here. I'm Sherin. I'm a staff engineer at a FinTech company called Chime. I'm based in San Francisco. Before this, I spent a little bit of time at Netflix. Before that, Lyft, Twitter, Google, and for the last six years or so I've been building data platforms, data infrastructure, and I have a keen interest in streaming. Been active in the Flink community and very recently have also been thinking a lot about data discoverability, governance, operations, and the role data plays in new advancements in AI. In my free time, I have been advising nonprofits working in the climate change area, basically helping them architect their software stack and so on. So yeah, again, very excited. Thank you. Thank you so much for inviting me.

Srini Penchikala: Thank you. Next up, Roland.

Roland Meertens: Hey, yes, my name is Roland Meertens. I am working at a company called Bumble, which is making dating apps. So I am literally a date and data scientist and I'm mostly working with computer vision. So that's my background.

Srini Penchikala: Thank you, Roland. Daniel?

Daniel Dominguez: Hi everyone. Glad to be here for another year. I am Daniel. I'm an engineer with experience in software product development, and I have been working with companies ranging from Silicon Valley startups to the Fortune 500. I am an AWS Community Builder in machine learning as well. At my current company, we're developing artificial intelligence and machine learning products for different industries.

Srini Penchikala: Thank you. And Anthony?

Anthony Alford: Hi, I'm Anthony Alford. I'm a director of development at Genesys where we make cloud-based customer experience and contact center software. In terms of AI, I've done several projects there, customer experience related, and back in the 20th century I actually studied robotics in graduate school and did intelligent robot software control. So I'm really excited to talk about some of the advancements there today.

Generative AI [03:22]

Srini Penchikala: Thanks Anthony. Thank you everybody. Welcome to this podcast. I am looking forward to speaking with you about what's happening in AI/ML engineering, and maybe I should say what's not happening: where we currently are and, more importantly, what's coming up that our listeners should be aware of and keep an eye on. Before we get to the main discussion, a quick housekeeping note for our listeners. There are two major components to these trends reports. The first part is this podcast, which is an opportunity for you to listen to a panel of expert practitioners on how new and innovative technologies are disrupting the industry and how you can leverage them in your own applications.

The second part of the trends report is a written article that will be available on the InfoQ website. It'll contain the trends graph, which is one of my favorite pieces of content on the website. This graph highlights the different phases of technology adoption and provides more details on individual technologies that have been added or updated since last year's report. So I recommend everyone check out the article, as well as this podcast, when it is published. Back to the discussion. There are so many interesting topics to talk about, but let's start with the big elephant in the AI/ML technology space: generative AI, also known as GenAI; the large language models, or LLMs, that GenAI is built on; and of course ChatGPT. Who hasn't heard about ChatGPT? Generative AI has been getting a lot of attention since GPT-3 was announced a couple of years ago, and especially since GPT-4 and ChatGPT came out earlier this year.

ChatGPT was used by 170-plus million people in the first two months after its release, so it has overtaken pretty much every other technology and solution in terms of speed of adoption. All the big players in this space have announced their plans for GenAI. We know that ChatGPT is from OpenAI. Beyond that, Google announced its Bard generative AI solution and Meta AI released LLaMA 1 and LLaMA 2. So there's a lot of activity happening in this space. In this podcast we want to highlight the value and benefits these technologies bring to users, not just all the hype that's out there.

So Anthony and Roland, I know you both have been focusing on these topics. Anthony, would you like to kick off the discussion on GenAI and how it's taking the industry by storm? Also, for our listeners who are new to this, can you start by defining what generative AI is, what an LLM is, and how they're different from traditional machine learning and deep learning techniques?

Anthony Alford: I actually was trying to think what is a good definition of generative AI? I don't think I have one. I don't know. Roland, do you have a good definition?

Roland Meertens: No, I was actually surprised, because I was at a conference about AI a couple of weeks ago and they were running a parallel session, a separate conference on generative AI. And I was surprised because I love the field. I've been giving talks about generative AI for years now, and I just didn't know it was such a big topic that it would warrant its own separate conference. So I think what people nowadays mean by generative AI is just all AI based on auto-completing a certain prompt. You start with a certain input and you see what the AI makes up from it. And this is super powerful: you can do zero-shot learning, you can take your data and turn it into actionable, written-out language. There are so many options here. So I think that's what I would take as a definition.

Anthony Alford: What's interesting, we kind of think of this as new, but looking back, it didn't start that recently. Generative adversarial networks, or GANs, for image generation have been around for almost 10 years, and it seems like the language models just all of a sudden caught fire, probably in 2019 when GPT-2 came out and OpenAI said, "We're not going to release this to the general public, because it's too dangerous." Now, they changed their tune on that, but 2020 was GPT-3 and 2021 was DALL-E, a different kind of image generator, and I think in the second half of 2022 things just took off. So we're talking about large language models a lot, but the image models were hotter in the summer of 2022, with Stable Diffusion. And so we've got both now: generative AI for images and generative AI for text. People are putting them together and having ChatGPT create the prompts for the image generation to tell a story and illustrate it. So we're starting to see humans cut out of the loop there altogether. What are your thoughts on what's next?

Roland Meertens: Yeah, from my side, what I noticed is the massive improvement in the generation of the text itself. ChatGPT really took a step up with GPT-3.5, and GPT-4 is, for me, in a whole different ballpark. You can very clearly see amazing improvements in the generation itself. Again, it's just larger networks, more data, higher quality data, and I was kind of amazed that everybody has now started adopting it. We talked about this technology a year ago when it was not such a big thing, and it was already at a very good level and very usable. And I think what we can see is that with ChatGPT, the usability experience means that now everybody is using it, including my father. He's a big adopter of ChatGPT and he is not a technical person. He emailed me because the URL I had bookmarked for him broke.

But everybody can now use it. And what I find amazing about the state of the art in image generation is that, for me, the clear winner at the moment is Midjourney. They have absolutely the best generated images in my opinion, but the way you work with it is that you have to type your prompts in Discord to a bot. The company, I think, only has seven employees at the moment. So there's hardly any usability work, and still it's so good that people are happily paying the $35 a month it costs for a subscription and are still happily working with it. So I think that in terms of trends, once we get over this hurdle of it being difficult to use, once it opens up to the general public as well, that will be an amazing revolution.

Daniel Dominguez: I think all this momentum around ChatGPT is amazing because, as Roland said, everyone is using it. For example, right now we're doing a project internally in the company for one client, and we built all of this generative AI for their own solution. We used the GPT technology, we used TensorFlow, we used the DaVinci model, and at the end the client said, "But what is the difference between this and ChatGPT?" And we said, "ChatGPT is a product built on the same technology; we're using that same technology for your own product."

But the thing is that when you mention artificial intelligence or generative AI, people think ChatGPT. It's the same as when you say you don't surf the web, you Google it. Right now, if you need something from artificial intelligence, you use ChatGPT, and that's the magic of OpenAI being first to market with this. As we're going to see, there are a lot of companies and a lot of competitors right now in generative AI, but at the moment ChatGPT is like the synonym for generative AI, the thing that everybody knows how to use and that everybody's using right now.

Sherin Thomas: And to me what's interesting is how a new cottage industry of little companies built on top of generative AI is cropping up, everything from homework assignments to code generation, all kinds of things. So I'm really interested in all the creative ways people use this. I'll be staying tuned for that.

Roland Meertens: What I find amazing is that there are now a couple of companies, and I see this happening on LinkedIn and with my friends who are in big companies, that are creating their own vision, their own roadmap, for how they are going to use generative AI. And I'm eagerly waiting to see more companies apply it, because we already mentioned this last year, but the playing field is wide open for everyone who wants to use these APIs. People just need to find a way to add value to people's lives that goes beyond the simple cookie-cutter approach everyone is now taking with ChatGPT, and I think people will come up with applications we cannot even dream of right now. It's so easy to integrate this into your product. So I'm very excited about this. I think there's more room for creativity at the moment than there are technological hurdles. So yeah, I'm really hoping to see more companies experiment.

Anthony Alford: In terms of trends, what's interesting is that when GPT-3 came out, the value proposition was that it does not need to be fine-tuned; it works great with just in-context learning. But now we're starting to see, with these large language models in particular, that people are fine-tuning them, especially the smaller and open source ones like LLaMA. People are using those and fine-tuning them. We're starting to see things like Google's PaLM model, where they did their own fine-tuning: they created a version for medicine, they created a version for cybersecurity. So they're fine-tuning for specific domains. It has been hard to do because the models are so large; you need pretty hefty hardware as well as a bit of a dataset. But we're starting to see those models shrink a bit, and now people have the ability to fine-tune them themselves. I don't know if anyone has thoughts on that.
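
To make the fine-tuning trend concrete, here is a minimal sketch of parameter-efficient fine-tuning with LoRA adapters, assuming the Hugging Face transformers and peft libraries; the model name and target modules are illustrative assumptions, not anything the panel prescribed.

```python
# A sketch only: LoRA freezes the base model and trains small low-rank
# adapter matrices, so fine-tuning fits on much more modest hardware.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Illustrative model choice; any causal LM from the Hub works the same way.
base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")

config = LoraConfig(
    r=8,                                  # rank of the low-rank update matrices
    lora_alpha=16,                        # scaling factor for the update
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt (assumed)
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, config)
model.print_trainable_parameters()  # typically well under 1% of the base weights
# A standard transformers Trainer run on a domain dataset (medicine,
# cybersecurity, ...) then updates only the small adapter weights.
```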

Srini Penchikala: Yeah, I just want to chime in. Anthony, I was also looking at which companies that we know of are actually using ChatGPT. The list is pretty long: it ranges from Expedia, the travel services company, to Morgan Stanley, Stripe, Microsoft, Slack, and so on. So adoption is increasing, but like you said, we have to see how this adoption evolves over time. Any other trends you are seeing that are interesting to the business?

Prompt engineering [13:32]

Anthony Alford: I don't know if we got into prompt engineering. Roland mentioned prompts a little bit.

Srini Penchikala: Yeah, we can go to that one. So do you have any comments on that, Anthony? On prompt engineering?

Anthony Alford: One of the most interesting developments to me was the idea of so-called chain-of-thought prompting. Researchers using these language models found that if you tell the model, "Explain your thoughts step-by-step," the results come out a lot nicer. That's something that has been built into some of the models like PaLM. You can get a very big variation in the quality of your results based on your prompt, and it's similar for image generation as well: depending on what you tell the model, you can get quite a different result.
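
As an illustration of the technique Anthony describes, here is a minimal sketch of chain-of-thought prompting, assuming the 2023-era openai Python package; the model name, question, and prompt wording are all illustrative.

```python
# A sketch only: the same question asked plainly and with a step-by-step
# instruction; assumes OPENAI_API_KEY is set in the environment.
import openai

question = ("A warehouse ships 23 crates on Monday and twice as many on "
            "Tuesday. How many crates in total?")

plain = [{"role": "user", "content": question}]
# Asking the model to reason step by step tends to produce better answers.
chain_of_thought = [{"role": "user",
                     "content": question + " Think step by step, then give the final answer."}]

for messages in (plain, chain_of_thought):
    response = openai.ChatCompletion.create(model="gpt-4", messages=messages)
    print(response.choices[0].message.content)
```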

Srini Penchikala: Also, prompt engineering, the way I'm understanding it, is going to become a discipline in its own right. So I'm wondering how the prompt engineering role, or its tasks, will be integrated into the traditional software development process. Would we have a dedicated prompt engineer on each project to help the team with how to do things? Does anybody have any thoughts on that?

Roland Meertens: Well, maybe to go the other direction, or to go against what you think might be happening: I think the miracle is that everyone is using this already. There will be as many prompt engineers on your team as there are Google engineers helping you Google things. That would be ridiculous, right? Everybody has somehow learned how to do this. There's no class about this in high school, and I don't think there will be a need for a prompt engineering class. Everybody will know this at some point, except of course your grandmother, who by then will do the opposite thing: instead of typing into Google, "How can I bake the best cookies?" and learning the best cookie recipe, she will go to ChatGPT and just type "best cookies," and ChatGPT will have no clue what to do with it.

Srini Penchikala: I think it'll become part of our lingo, I guess. Okay, we've talked mainly about ChatGPT today, but we also mentioned LLaMA from Meta and Bard from Google, and there is also another product called Claude. I haven't done much research on this. Does anybody know how it's different from the others?

Daniel Dominguez: Also, Amazon is now in the game with Amazon Bedrock, its bet on generative AI. So that's another one to take a look at this year, to see what is going to happen with it.

Srini Penchikala: Yeah, thanks Daniel. So what do you guys think about the next step for LLMs? It's going so fast, we don't even know where we'll be in six months or one year.

Anthony Alford: Well, what I mentioned: the smaller models, and people shrinking them and doing LoRA... I can't remember what the A is for, but distilling and shrinking the models. For example, OpenAI's language models are quite famously not available for you to download, whereas Meta has been releasing theirs. Meta has released LLaMA. Now, people give them a hard time about the license, it's a non-commercial license, but still they give you the weights, and people are taking those and fine-tuning them. So we have a proliferation of these spin-off names from LLaMA, like Vicuna and Alpaca and so forth. People are fine-tuning these models. They're smaller than ChatGPT, they're not as good, but they're pretty good. And that lets companies who have concerns about using a closed API and sending data out to who knows where alleviate that concern.

I think that's a pretty interesting trend and I expect it will continue. Another one, and then I'll let you all chime in, is the sequence length. That's how much history you can put into the chat. As we know, the output of the language model is really one word, one token. You give it a history, it outputs one token, the next token, then you've got to put it all back in. They're autoregressive. Eventually you run out of that context window, as it's called. GPT-4 has a version that supports up to 32,000 tokens, which is quite a bit, and there's research into supporting maybe up to a million tokens.

At that point, you could basically put Wikipedia almost as the... Maybe not, but you could put a book as the context, and that's the key for this so-called in-context learning. You could give it a whole book and have it summarize it or answer questions. You could give it your company's knowledge base and ask questions about it. So I think these two trends are going to help bring these models and their abilities into the enterprise, on-premises maybe, whatever you want to call it, basically out of the walled garden.
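
Here is a minimal sketch of the autoregressive loop Anthony describes, assuming the Hugging Face transformers library, with GPT-2 standing in for any causal language model:

```python
# A sketch only: generate text one token at a time, feeding each new token
# back in, until the fixed context window would eventually fill up.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # stands in for any causal LM
model = AutoModelForCausalLM.from_pretrained("gpt2")

context = tokenizer("The big trend in AI this year is", return_tensors="pt").input_ids
with torch.no_grad():
    for _ in range(20):                      # emit 20 tokens, one per pass
        logits = model(context).logits       # forward pass over the whole history
        next_token = logits[0, -1].argmax()  # greedy choice of the single next token
        context = torch.cat([context, next_token.view(1, 1)], dim=1)

print(tokenizer.decode(context[0]))
```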

Srini Penchikala: I can definitely see that changing adoption in a positive way. If companies can use some of these solutions within their own environments, whether on-premises or in the cloud, but with more flexibility, that will change the game as well.

Sherin Thomas: Speaking about summarizing, this is a trend that I'm seeing quite a bit. A lot of law firms are using this to summarize legal documents. I've also been working with a group called Collaborative Earth, where scientists are looking at something like 30,000 papers and need to understand them and pull tags out of them. So that's another area where I see a lot of application of this; people are adopting this trend of summarizing papers and documents.

Srini Penchikala: Thanks, Anthony. Thanks, Sherin. The other one I know we can talk about is speech synthesis. How can we use these solutions for analyzing speech data? Anthony, do you have some thoughts on this?

Anthony Alford: What's interesting is that both Google and Meta seem to be working quite hard on this. They've both released several different models just this year. Of course, OpenAI released Whisper at the end of last year, and they actually did basically open source it, and Whisper is quite good for speech recognition. Meta and Google are doing multilingual things in particular. Google is doing speech-to-speech translation: it does speech recognition of one language and outputs speech synthesis in another language. In my industry in particular, people are excited about that, because you could have an agent on the phone with a customer, and maybe they don't speak the same language, but with this in the middle, it's like The Hitchhiker's Guide, right? It's the thing in the ear that can do automatic translation for you. That's pretty exciting.

Another recent one came from Meta, which released a model called Voicebox. In images we'd call what it does in-painting: it can take speech audio and replace bits of it. So it could take a podcast like this and edit out a barking dog. It could change what I say from, "I love AI," to, "I don't like AI," or something like that. So they're in the situation OpenAI was in, where they're not sure they want to release this because they're not sure how it could be abused. So if you hear me say something that you don't think I would say, well, blame AI.

Srini Penchikala: Oh, that's going to add a new dimension to deepfakes, right?

Anthony Alford: Exactly. It's literally what it is. Yeah.

Srini Penchikala: Definitely. It brings up a lot of ethical and responsible AI consequences, Anthony. We'll get to that topic later in the podcast, but it's a good one to keep in mind. Anybody else have any comments on the speech side of AI?

Daniel Dominguez: Yes, I remember that I wrote an article on InfoQ about the Google AI update regarding the Universal Speech Model, which is part of the 1,000 Languages Initiative. It's going to be huge, all of these things happening. And for all the prompts that involve speech, for example with Google products or Alexa, there's going to be a whole new way of prompting once these models are implemented on their own hardware and in their products. So it's going to be amazing to see what happens. Asking Alexa or Google to give better answers based on voice prompts is something that is eventually going to improve a lot for these companies.

LLMOps [21:27]

Srini Penchikala: So with all this innovation happening with text, speech, and images, and with large language models with billions of parameters, once we start seeing a lot of adoption and these enterprise applications are deployed in production, whether on-premises or in the cloud, one big thing that teams will need to own and maintain will be the operations side. There's this new term LLMOps coming up. So what does operations mean for LLM-based applications? Sherin, I know you have some thoughts on this, so please go ahead and share what you think is happening here.

Sherin Thomas: MLOps brings rigor to the whole process of building and launching pipelines, and in that sense those same operational requirements apply to LLMs as well. But there are some nuances and requirements for LLMs that make it a little more challenging, where we need to think a little differently about operationalizing them. One is maybe around collecting human feedback for reinforcement learning; another is prompt engineering, as we discussed. That is going to be a big piece coming up.

Also, performance metrics for LLMs are different, and this is a constantly evolving area right now, so we don't even know how it is going to pan out in the future. The whole LLM development life cycle consists of data ingestion, data prep, and prompt engineering, and there may be complex tasks of chaining LLM calls that also make external calls, for example to a knowledge base, to answer questions. This whole life cycle requires some rigor. In that sense, I feel like LLMOps might end up being its own thing, with MLOps being just a subset of it.
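
A minimal sketch of the kind of chaining Sherin mentions, where an LLM call is preceded by a lookup against an external knowledge base; the retrieve() helper and the llm callable are hypothetical placeholders, with keyword matching standing in for real embedding-based retrieval:

```python
# A sketch only: answer a question by first querying a knowledge base,
# then passing what was found to the model as context.
from typing import Callable

def retrieve(question: str, knowledge_base: dict[str, str], k: int = 3) -> list[str]:
    # Hypothetical keyword overlap standing in for embedding-based retrieval.
    words = set(question.lower().split())
    hits = [text for text in knowledge_base.values()
            if words & set(text.lower().split())]
    return hits[:k]

def answer(question: str, knowledge_base: dict[str, str],
           llm: Callable[[str], str]) -> str:
    passages = retrieve(question, knowledge_base)       # external call first
    prompt = ("Answer the question using only these passages:\n"
              + "\n".join(passages)
              + f"\n\nQuestion: {question}")
    return llm(prompt)                                  # then the chained LLM call
```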

Srini Penchikala: Yeah, definitely. I think that will become more critical when we have a significant number of apps using these LLMs. Does anybody else have any other thoughts on what LLMOps should look like?

Daniel Dominguez: I think that's definitely something that is going to increase in the industry and in companies, because right now, for example, the clients we're working with think, "It's artificial intelligence, so it's done. That's the artificial intelligence's problem." But you have to consider that behind that artificial intelligence you have a team: you have to tune the data, you have to work continually on the prompt engineering, you have to continually watch what is happening on the servers and the architecture. It's not that the artificial intelligence is there and doing everything on its own. No, behind that artificial intelligence there has to be a team making sure that all these things are working correctly and that the generative AI keeps working correctly.

Vector search databases [24:13]

Srini Penchikala: Right. Thanks, Daniel. Thanks, Sherin. We can switch our focus here a little bit. The other area that's getting a lot of attention is vector database technology and embedding stores. I have seen a lot of use cases for this. One of them, interestingly, is using the sentence embedding approach to create an observability solution for generative AI applications. Roland, do you have any thoughts on this? I think you have mentioned vector databases in the past.

Roland Meertens: Yes, maybe let's first start with the question: why do you need a vector search database? As we already mentioned, at the moment these large language models have a limited history of, as I heard Anthony say, 32,000 tokens. That's about half a podcast, maybe a whole podcast. But what if you want to know something about Wikipedia, or, as I heard Sherin say, you have a lot of legal documents? One thing which I think will happen more and more is that companies will make a summary of a certain document that is stored as a feature vector. You and I would just write down that this document is about X and Y, but large language models can create a feature vector, and that will leave you with thousands, millions, maybe hundreds of millions of feature vectors, depending on how many documents you have.

If you want to find similar vectors, you can query your large language model with, "Hey, I am searching for this document which probably contains this," and then find a similar feature vector inside these vector databases. With normal databases, you're going through all the documents and finding the most relevant ones, but once you have too many documents, all summarized as feature vectors, you want a vector search database where you can find the nearest neighbors of the thing you're searching for.

What I find interesting about this, what intrigued me as a trend over the last year, is that I saw a tiny increase in adoption from the developer's perspective, which is good, because these things are amazing, but I saw a massive increase in funding for these technologies. So it looks like investors have rightfully realized that vector databases are going to be a big part of the future, and somehow developers are lagging behind. It's maybe a difficult topic to get into. So I really think that next year we are going to see more adoption; more people will realize that they have to work with a vector search database such as Pinecone or Milvus. I think these technologies will keep growing.
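
Here is a minimal sketch of the nearest-neighbor lookup Roland describes, using plain numpy over synthetic embeddings; a vector database such as Pinecone or Milvus does the same job at scale with approximate indexes:

```python
# A sketch only: brute-force cosine-similarity search over synthetic
# document embeddings.
import numpy as np

rng = np.random.default_rng(seed=0)
doc_vectors = rng.normal(size=(100_000, 384))   # one embedding per document
doc_vectors /= np.linalg.norm(doc_vectors, axis=1, keepdims=True)

query = rng.normal(size=384)                    # embedding of the search text
query /= np.linalg.norm(query)

scores = doc_vectors @ query                    # cosine similarity to every document
top_5 = np.argsort(scores)[-5:][::-1]           # indices of the 5 nearest neighbors
print(top_5, scores[top_5])
```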

Srini Penchikala: Roland, I have also heard about Chroma, which I think is an open source solution in that space.

Roland Meertens: Yeah, maybe we can have a dedicated podcast and interview someone who is working on these technologies. I think the bottom line is that, depending on what is in your feature vectors, some options make more sense than others. Depending on what kind of hyperdimensional space you're searching in, do you have lots of similar data? Do you have data all over the place? You'll want one option or another.

Srini Penchikala: Makes sense. Yeah, definitely something to keep in mind. Feature stores have definitely become a big part of machine learning solutions; vector databases will probably gain the same importance. Anybody else have any thoughts on vector databases? It's still an emerging area.

Sherin Thomas: I think, again, the applications of similarity search are also going up. A couple of years ago I was working on a side project with NASA FDL where they applied self-supervised learning to all the images of the Earth collected from NASA satellites. When scientists are searching for weather phenomena like hurricanes, they want to find other images of hurricanes that happened over time, and that's a problem they're still trying to solve. That was two or three years ago; we tried using Pinecone, and these technologies have really developed, with rapid improvement, in the last two years. At that time it wasn't there yet. So this is an amazing space as well.

Robotics and drone technologies [28:11]

Srini Penchikala: Let's switch to the next topic, which is also very interesting: robotics and drone technologies. Roland, I know you recently published a podcast from the ICRA conference, a good podcast with a lot of great information. Would you like to lead the discussion on where we are with robotics and drone technologies and what's happening in this space?

Roland Meertens: Yeah, absolutely. From my side, I was super excited to go to the ICRA conference and see what's happening, and we have a separate podcast on this. One thing which I think we see as a trend in the entire tech industry is less investment, which includes robotics, and that always makes me sad, because I think robotics is a very promising field, but it always needs a lot of money. We do see that Boston Dynamics has started an AI Institute, which seems very promising, and we see cheaper and cheaper remote-controlled robots. Besides Boston Dynamics, who are still leading the legged robot race, a couple of years ago it was unthinkable that you could buy a legged balancing robot that could walk over unstable terrain, and nowadays I think you can get the cheapest ones for only $1,500.

So it's getting more and more viable to buy a robot as a platform and then integrate with its API to put your own hardware on top of it. Hopefully we can soon see computers go to places where they have not gone before. The Robot Operating System is still seen as the leading software, with more and more adoption of ROS 2. I have also seen one company, VIAM, which has started to build a bit of middleware where you can easily add and configure plugins. So that's exciting. Overall, it's an interesting field with lots of developments, always slowly, invisibly moving in the background. Super exciting.

Anthony Alford: What's interesting to me is how Google in particular has been publishing research where they're taking language models and using them to control robots. They're basically using the language model as the user interface. So instead of having a planner, you tell the language model, "Go get me a Coke," and it uses that chain-of-thought prompting to plan step-by-step, down to the robot primitives of, "Drive here, pick up here, come back." I think that's a pretty interesting development as well, especially considering how hard that was for us back in the nineties. Google's also trying to integrate sensor data into this. I'm not sure why Google is doing robotics research, but they are doing a lot of it and it's very interesting. I don't know if they had any posters at ICRA or anything like that, Roland.

Roland Meertens: No, but I'm also very interested in this topic. You indeed see that, for example, for planners in the past you had to mark on a map, "Here is the fridge," and, "Here is the couch." So if the robot had to bring someone a beer, you had to program the commands: walk to the fridge, walk to the couch. But now, with these large language models and also with large computer vision models, why not recognize where the fridge is and remember that on some kind of semantic map? And if someone describes it in a different way, why not try to figure out what they could mean? I think that's very exciting, because it's traditionally a very hard field to work in. So I'm very excited to see where we can go next in that domain.

Srini Penchikala: Yes, definitely. One area that can definitely use this, unless it's already using some of it, is manufacturing. There are virtual manufacturing platforms, and there is the digital twins idea. We can bring the physical manufacturing plant closer to the virtual side and try out the things we cannot afford to do in the physical space, because of cost or safety, in virtual and semi-virtual settings with drones and robots.

Daniel Dominguez: I think the same thing is going to happen with robot technology that happened in other industries: everybody knows that there is a lot of research and cool stuff happening, but there's not yet a real product that people can see. Probably in the next years, for example with Tesla's Optimus robot, a humanoid, that's going to change, and we're going to start seeing robots that are approachable as products and not only as research. Even with all these advances and the things that are happening, though, I think we are still years away from seeing physical robots on the streets.

Ethical AI [32:29]

Srini Penchikala: Makes sense. Let's jump to another big item here. With all the power of these technologies comes the responsibility to be ethical and fair. So let's get into the whole ethical dimension of these AI/ML technologies. We hear about bias, AI hallucinations, LLM attacks; there are so many different ways these things can go bad. Daniel, you mentioned regulations, how governments are trying to keep a check on this. What do you think is happening there, and what else should happen?

Daniel Dominguez: Obviously, as we said earlier, the technology is really cool, but once we know what is happening with all this cool stuff, we need to consider the other aspects, and that's where AI ethics is very important. We need to consider bias and discrimination: how AI algorithms can perpetuate biases and discrimination in decision making. We need to take into consideration privacy and security: what is the potential risk to personal data and privacy in these systems? We need to consider ethical decision making: how can we ensure that AI systems make ethical decisions and give accurate information? We need to consider unemployment and economic impact: the potential displacement of jobs with AI adoption. And we need to consider sustainability, because AI technology is going to have long-term environmental implications that need to be addressed.

I think right now governments in different parts of the world are each thinking of their own solutions. From my personal perspective, I don't know if that's something that is going to work, because AI is going to affect all of humanity, not only individual governments and their citizens. The regulations the United States is considering are different from those in the United Kingdom, different in Europe, different in Latin America, different in Asia, different in Japan. But probably in the end it's going to be something the United Nations takes care of, because this is going to affect all of humanity and not only the citizens of different countries. So I think we're just starting to see governments thinking about this responsibility and regulation, but in the end it's something that needs to be done by humanity as a whole.

Srini Penchikala: Roland, I know you had a podcast on this topic with Mehrnoosh, right? So do you want to mention that? What were the takeaways from that?

Roland Meertens: Yes, indeed. I can really recommend the InfoQ podcast episode with Mehrnoosh Sameki, who I interviewed here in London. What I find so interesting is that everybody agrees that safety is an important topic in generative AI. But on the other hand, people are also complaining about ChatGPT becoming dumber and dumber, saying, "Hey, a couple of months ago it answered this query about medications for me," or, "It answered this query acting as my psychologist, and now it refuses to do this. It refuses to do my homework. I don't know what."

And I find it very interesting that I'm torn between, whoa, this technology is moving fast, we need to put a hold on it, and also being very done with the mandatory opening of, "As an AI language model, I cannot do this for you." I know you can, I just want some information, I don't want to listen to you. So I think there's an interesting tension there, and we will definitely see as a trend this year that people start discussing this more and more.

Anthony Alford: I don't know, that's almost as annoying as having to accept or reject cookies on every site. You mentioned ChatGPT's output. The companies that are serving these language models are of course extremely concerned. The models often say things that are just not true, and nobody knows what to do about that. They can also say things that are very hurtful or possibly even criminal. They're trying to put safeguards in there, but they're also really not sure how to guarantee that. I think this is a problem that nobody's really sure how to solve, and it looks like it's going to get worse.

I just wrote an InfoQ news piece where a research team figured out how to automatically generate these attacks. They could automatically come up with prompts for so-called jailbreaks of not just ChatGPT but basically any language model. Obviously, when you do research like this, it's like white hat hacking: you're trying to show that there's a problem so that hopefully people will work on it. We'll see... Like Roland said, it's already kind of a problem, and I think it may just get worse.

Roland Meertens: From my perspective, there are two things I want to make sure are takeaways from this podcast. My first tip is that it's important to improve the lives of all people and all your users. Don't just say it works for a couple of specific users, or that it works for me; consider all the possible edge cases you can have on your platform. My second tip is to consider everything ChatGPT can do for you, both the false positives and the false negatives. This needs to be more ingrained in people's minds, because if you use ChatGPT for the first 10 minutes, you are of course amazed by all the things it can do, and only if you start digging a bit do you find some false positive cases and some false negative cases.

If you are creating a new application which you roll out to the entire world, there will be a lot of false positives and false negatives. So in that sense, it's important to remind users that the content was generated by a large language model and maybe it cannot do everything for them, because at some point you're getting into the more dangerous, iffy waters of what you want a large language model to show you.

Srini Penchikala: Yes, that makes sense. Discrimination, even against one demographic or one user, is not acceptable in some use cases. The other related topic here is explainable AI. Sherin, would you like to talk about this a little bit? What is explainable AI, for somebody who's new to it, and what's happening in this space?

Sherin Thomas: Explainable AI, in a gist, is basically a way to explain how a model came to a result or a conclusion: what were the data points it used, how did it make this decision, et cetera. I think this is going to take center stage and become really important as we talk about the ethics of AI and, as Daniel mentioned, as a lot of governments are making regulations and discussing new laws, and as Roland talked about how models sometimes get dumber and we want to know why a model is doing what it's doing. And the way I see this playing out: a few years ago we saw a big disruption in data governance and data privacy automation as a result of GDPR and CCPA and those laws coming into the picture, and I see that push happening on the AI explainability side as well.

Moreover, we've already seen some AI failures because of bad data. Famously, I don't know if you've heard about this, a few years ago Amazon was using a model to decide whom to interview, and it disproportionately selected men because it was trained on the last 10 years of data, which was disproportionately men's resumes coming to Amazon. So in this new world of AI explainability, data discovery, lineage, labeling operations, and good model development practices are all going to become super important.
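
A minimal sketch of explainability in the sense Sherin defines it, asking which inputs drove a model's predictions, using scikit-learn's permutation importance on a synthetic dataset; everything here is illustrative:

```python
# A sketch only: permutation importance shuffles each feature in turn;
# the drop in score shows how much that input drove the predictions.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature {i}: importance {score:.3f}")
```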

Data Engineering [40:10]

Srini Penchikala: Let's make a quick transition here and talk about some data engineering topics as well. Similar to how the AI/ML space has been going through a significant number of developments and innovations, a lot of emerging trends and patterns are happening in the data engineering space. Let's look at some of the important topics that leaders should be watching in their data management projects. Sherin, I know you have done a lot of work on this and you are probably the expert on these topics in this group. Would you like to talk about what you see happening on the data side, whether it's data mesh, data observability, data discovery, or data contracts?

Sherin Thomas: There are a few trends that I'm noticing. One is that there is a lot of emphasis on speed and low latency. Earlier, most data organizations were batch-first, and streaming was maybe 10% of use cases, but now that piece of the pie is increasing. There are a lot of unified batch-and-streaming platforms coming out, and Kappa architecture is gaining adoption. Then, data mesh has been a buzzword. As data is increasing and organizations are getting more complex, it's no longer sufficient for a central data team to manage everybody's use cases, and data mesh came out of that need. Another buzzword that I'm hearing is data contracts. There is a lot of emphasis on measuring data quality and having data observability, and this is just going to become more and more important in this whole new world of AI that we are entering.

Srini Penchikala: What do you think about data observability? It is definitely becoming a major pattern on the data side. I recently hosted a webinar on data observability and I learned that it is not what it was a few years ago.

Sherin Thomas: Earlier, when we talked about observability, it was mostly around measuring observability at the systems and infrastructure level. But we are adding more and more abstractions: now we talk about data products as an abstraction, and on top of data products we have machine learning pipelines, which are another abstraction. So it's no longer sufficient to have observability just at the systems and infrastructure level; we need observability at those different abstraction layers as well. As I mentioned earlier, data contracts is a theme I'm hearing a lot. With data teams getting more and more distributed, and with a lot of actors involved in the full life cycle of data ingestion, processing, and serving, it makes sense to have contracts across those boundaries. They're almost like unit tests: they make sure that systems and data products are behaving as expected.

I also notice a lot of companies coming up in this space, such as Monte Carlo and Anomalo, and one person I follow, Chad Sanderson, has a lot of great opinions on the subject. I encourage you to follow him on LinkedIn; I think he has a blog as well. And with AI, the whole need for data observability is just going to increase. We talked about AI explainability earlier; now we want to know what kind of data we are getting, what its distribution is, all sorts of things. We have heard so many stories of AI failing because of data: the whole Zillow debacle, and I already spoke about the Amazon recruitment model. So data observability is now also about what type and what distribution of data is coming in, not just system info.
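
Here is a minimal sketch of a data contract check in the "unit tests for data" spirit Sherin describes; the column names, types, and value range are illustrative assumptions:

```python
# A sketch only: a producer runs this check before publishing a data product.
import pandas as pd

EXPECTED_SCHEMA = {"user_id": "int64", "amount": "float64"}  # assumed fields

def check_contract(df: pd.DataFrame) -> list[str]:
    violations = []
    for column, dtype in EXPECTED_SCHEMA.items():
        if column not in df.columns:
            violations.append(f"missing column: {column}")
        elif str(df[column].dtype) != dtype:
            violations.append(f"{column}: expected {dtype}, got {df[column].dtype}")
    # Beyond schema: flag distribution drift of the kind observability tools track.
    if "amount" in df.columns and not df["amount"].between(0, 10_000).all():
        violations.append("amount: values outside the agreed 0-10000 range")
    return violations

batch = pd.DataFrame({"user_id": [1, 2], "amount": [9.99, 42.0]})
assert check_contract(batch) == []  # contract holds for this batch
```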

Srini Penchikala: Definitely. I see the data disciplines you just discussed and the AI topics we talked about as basically two sides of the same coin.

Sherin Thomas: Yes.

Predictions for the next year of AI [43:50]

Srini Penchikala: You need to have all of this in place to have end-to-end data pipelines to manage the data and the machine learning models. I think we can wrap up with that discussion; we have talked about all the significant innovations happening in our space, so we can go into the wrap-up. I have two questions; you can answer both or pick one, and then we will go to the concluding remarks. The first question is: what is your one prediction in the AI or data engineering space that may happen in the next year? When we do this podcast next year, we will come back and see if that prediction has come true. We will start with Sherin.

Sherin Thomas: I'm noticing that a lot of companies are feeling that data teams are becoming a bottleneck, so I see data mesh adoption going up in the coming year. The second part is around explainability; I think that is also an emerging topic, and there might be a lot more adoption in that area.

Srini Penchikala: Okay. Anthony?

Anthony Alford: Predictions are hard. Can I make it a negative prediction? I predict we will not have artificial general intelligence. I know people think maybe the LLMs are a step toward that. Certainly not next year. I'd be surprised if it happens in my lifetime, to be honest. But I'm over the hill, so maybe we'll see. You never know.

Srini Penchikala: AGI, right?

Anthony Alford: Yeah, I know that was not a very brave prediction, but I'm going to predict, no.

Srini Penchikala: It's good to know what’s not going to happen. Daniel?

Daniel Dominguez: I think AI is here to stay. This year we saw, with ChatGPT and all these new things happening, a focus on the product side, where people can start using the technology and it reaches the masses. I would say by next year there are going to be more new cool products and more new cool stuff to use. I know, for example, Elon Musk is going to start working on artificial intelligence in many of his companies. So artificial intelligence is going to become more and more approachable for normal people, not only for researchers, which is what we were used to: reading research papers and all that stuff. Now artificial intelligence is going to be in more and more products that people are going to use more and more.

Srini Penchikala: Roland.

Roland Meertens: From my side, the one thing I am personally very excited about is the field of autonomous agents. Right now you take the API of OpenAI or whoever, you have to feed it prompts, and then you have to connect it to other APIs. What I'm really excited about are autonomous agents, where you simply say, "Come up with an interesting product to sell," and the autonomous agent will by itself start looking at what would be a good product to make. It'll autonomously email marketing companies saying, "Hey, can you help me market my new product?" and automatically email factories saying, "Hey, can I get this?"

And I think it would be super powerful if in a year you could have a couple of basic things connected to this. Maybe I could say I want to go to a romantic restaurant in the city I'm traveling to, and it would automatically find a couple of romantic restaurants, read up on which one is the most romantic, and then, in my name, on my behalf, using my Gmail, email the restaurant owner asking, "Hey, can I have a table?" for a given date. I think that would be amazing, these autonomous agents.

Anthony Alford: If I could outsource buying Valentine's gifts and so forth, sign me up.

Srini Penchikala: Well Anthony, I think Roland is in the dating app development business.

Anthony Alford: Oh, right.

Srini Penchikala: These are good features to add.

Anthony Alford: Roland, he didn't do a prediction, he gave us his product roadmap.

Srini Penchikala: He's talking about his product roadmap. Okay. My prediction is that LLMs are going to be, I don't want to say mainstream, but a little bit more within reach of the community. We heard about LangChain, the open source LLM framework. Solutions like these will be more available next year, and LLMs will not be just a closed-source type of solution. Okay, let's go to the last question, and then we can conclude after that. I know ChatGPT is more powerful because of the many plugins that are available for it. We can start with Roland on this: what ChatGPT plugin would you like to see added next?

Roland Meertens: Just as right now I don't remember things anymore, but I remember how to Google for the things I want to remember, I think the next step would be a ChatGPT plugin for my life, maybe starting with WhatsApp and Gmail, so that it remembers things for me. It would be like a Remembrall in Harry Potter, where suddenly the ball turns red and you think, "Ah, I forgot something." And if you're lucky, if you upgrade to the premium version, it'll also tell you what you forgot.

Srini Penchikala: Cool. So the basic model and the premium model. Exactly. That'll help me out. I forget a lot of things. Okay. How about you, Anthony?

Anthony Alford: I'm kind of liking the restaurant plugin. I think, Roland, you need to get on that for me.

Srini Penchikala: Yeah, there you go. Okay. Daniel?

Daniel Dominguez: I would like to see something with voice. For example, ask ChatGPT something and, instead of typing, just say it, like you do with Google or Alexa, and hear the answer. And if the answer is good... For example, say, "Answer an email that I have; I need the answer," and I just send that email without touching the keyboard. Something like that would be very nice.

Srini Penchikala: Yeah, that'll help. Sherin, what do you think?

Sherin Thomas: I'll give all my money to whatever plugin can make decisions for me. Just make my decision, run my life for me, and I'll be happy. Yeah.

Srini Penchikala: I think for me it's along the same lines. I would like to have a plugin that will tell me what I don't know I don't know: the unknown unknowns. Okay, we can wrap it up. Thanks for joining this excellent podcast and sharing your insights and predictions on what's happening in the AI/ML and data engineering spaces. To our listeners, we hope you all enjoyed this podcast. Please visit the infoq.com website and download the trends report that will be available along with this podcast in the next few weeks. I hope you join us again for another podcast episode from InfoQ. Before we wrap up, any closing remarks? We'll start with Daniel.

Daniel Dominguez: I think it's going to be very important to see what happens. As I mentioned, AI is here to stay. Whether it's going to be good or bad for humanity will depend on the way everything develops. But this is a very new way to explore things that were unexplored before, and this technology is becoming more approachable to all of us. So we'll see what use humanity ends up making of this technology.

Srini Penchikala: Okay. Anthony.

Anthony Alford: Definitely interesting times. Stay tuned to infoq.com to keep up with the trends and new developments.

Srini Penchikala: There you go. Roland.

Roland Meertens: As a large language model, I'm unable to create any closing remarks.

Srini Penchikala: Okay. It'll be available in the premium version. How about you, Sherin?

Sherin Thomas: Let me just quickly ask ChatGPT to generate a closing for me. Yeah, it was so nice chatting with you all and yeah, I hope the listeners enjoy our chit-chat.

Srini Penchikala: Thank you. Thanks, everybody. We'll see you all next time, and we'll see how many predictions have come true and what happened over the past year when we talk again. Thank you. Have a great one. Until next time. Bye.
