InfoQ AI, ML and Data Engineering Trends Report 2022

There have been a lot of innovations and developments in the AI and ML space since last year. In this podcast, InfoQ's AI, ML and Data Engineering editorial team discusses the latest trends that our readers may find interesting to learn about and apply in their own organizations as these technologies become mainstream.

Key Takeaways

  • In AI/ML applications, the transformer is still the architecture of choice.
  • ML models continue to get bigger, now reaching billions of parameters (GPT-3, EleutherAI's GPT-J and GPT-Neo, Meta's OPT model).
  • GitHub's Copilot improves developer productivity by generating basic functions, so developers don't have to write them from scratch.
  • Open-source image-text datasets for training models like CLIP and DALL·E are democratizing data, giving more people the power to take advantage of these models and datasets.
  • Data quality is critical for AI/ML applications throughout the lifecycle of those apps.
  • Future robotics and virtual reality applications are going to be implemented mostly in the metaverse.
  • AI/ML compute tasks will benefit from infrastructure and cloud computing innovations such as multi-cloud and cloud-agnostic computing.

Transcript

Introductions [00:05]

Srini Penchikala: Hi, everyone. Welcome to the InfoQ Annual Trends Report podcast on AI, ML and data engineering topics. I am Srini Penchikala. I am joined today by the InfoQ editorial team, and also an external panelist. There have been a lot of innovations and developments happening in the AI and ML space. I'm looking forward to discussing these innovations and trends with our expert panel. Before we jump into the main part of this podcast, let's start with introductions of our panelists. First, Rags Srinivas. Rags, can you please introduce yourself?

Rags Srinivas: Glad to be here. I was here for the previous podcast last year as well. Things have changed quite a bit, but I focus mainly on big data, infrastructure, and the confluence of the two. There are quite a few developments happening there that I'd love to talk about when we get there. I work for DataStax as a developer advocate, and essentially, again, it's all about data, AI, infrastructure, and how to manage your costs and do it efficiently. And hopefully, we'll cover all that.

Srini Penchikala: Next up, Roland Meertens. Roland, please go ahead.

Roland Meertens: Yes. I'm Roland, I'm a machine learning engineer, and I hope to talk a lot about transformer models and large-scale foundational models.

Srini Penchikala: And Anthony Alford. Anthony, please introduce yourself.

Anthony Alford: Hi. I'm Anthony Alford, I'm a director of development at Genesys, a contact center software company. For InfoQ, I like to write about some of the latest innovations in deep learning, and I definitely want to talk about NLP and some of the multimodal text and image models.

Srini Penchikala: Next is Daniel Dominguez. Daniel, please introduce yourself.

Daniel Dominguez: Hello. Thank you for the invitation. I'm Daniel. For InfoQ, I write about Meta AI; I like to write about the metaverse, new technologies, and deep learning. In my work, I'm an AWS Community Builder in machine learning, so I also like to write about many things happening in machine learning on AWS.

Srini Penchikala: We also have a panelist from outside the InfoQ editorial team; she was a speaker at the recent QCon London conference: Dr. Einat Orr. Einat, please introduce yourself.

Dr. Einat Orr: Thank you for having me. I'm a co-creator of the open source project lakeFS, and a co-founder and CEO at Treeverse. And I would love to talk about the trends that we see concerning data, whether in data-centric AI or in data engineering tools.

Srini Penchikala: It's great to have you all join this discussion. For our listeners, let me quickly go through the scope of this podcast and what to expect. The focus of this podcast is to report on innovative technologies and trends in the artificial intelligence, machine learning, and data engineering areas that our readers may find interesting to learn about and apply in their own organizations when these trends become mainstream technologies. The InfoQ team also publishes trend reports on other topics like architecture, cloud and DevOps, and culture and methods, so please check them out on the InfoQ website.

Srini Penchikala: There are two major components to these trend reports. The first part is this podcast, which is an opportunity for you to listen to a panel of expert practitioners on how these innovative technologies are disrupting the industry. The second part of the trend report is a written article that will be available on the InfoQ website. It'll contain the trends graph, which shows the different phases of technology adoption, and provide more details on the individual technologies that have been added or updated since last year's trends report. So I recommend you all check out the article as well when it's published later this month.

There are a lot of excellent topics to discuss in this podcast. But in order to organize the discussion a little better, we have decided to break the overall trends report into two episodes. In today's podcast, we will focus on the core technologies underlying AI and ML solutions. In a future podcast, we will discuss the tools and frameworks that enable AI and ML initiatives in your organizations. One of the main areas that has been going through a lot of innovation in the AI space is Natural Language Processing, or NLP. Companies like Google, Amazon, and Meta recently announced several AI language models and training datasets, along with how these models perform against industry benchmarks.

Natural Language Processing (NLP) [04:13]

Srini Penchikala: Anthony, you've been exploring this area and writing about NLP topics. Can you talk about the recent developments in NLP and related areas like NLU and NLG, as well as some of the research happening at institutions like Stanford University and other organizations?

Anthony Alford: One trend that has stayed steady, at least, is that the transformer is still the architecture of choice. It's basically taken over the space, as opposed to the previous generation of models, which used recurrent neural networks such as the LSTM or the GRU. So the transformer as the architecture of choice seems to be holding steady. One thing that we do see is that the models continue to get bigger. Of course, GPT-3, when it came out, was the biggest, but it seems like every few months there's a larger model. And one of the nice things is that we're seeing more of these models being open sourced. For example, there's a research organization called EleutherAI. They're releasing their version of GPT. They call it GPT-J, and I think GPT-Neo. Those are completely open source.

And similarly, Meta recently released their OPT model, which is also open source, I think. What's the parameter count? 175 billion, with a B, parameters. So it's really nice for those of us who are not working for these big companies to be able to get our hands on these open source language models. And the models are very good. We saw recently that Google has a model called PaLM that can explain jokes. Although if you have to explain a joke, maybe it wasn't that funny.
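
For readers who want to try these open models, here is a minimal sketch using the Hugging Face transformers library. The model ID "EleutherAI/gpt-neo-125M" is one of the smaller EleutherAI checkpoints published on the Hugging Face Hub; the GPT-J and OPT checkpoints follow the same pattern but need far more memory.

```python
# Minimal sketch: load a small open-source GPT-style model and generate text.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "EleutherAI/gpt-neo-125M"  # a small published checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

inputs = tokenizer("The transformer is still the architecture of choice because",
                   return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```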

Rags Srinivas: It's not a joke anymore.

Anthony Alford: Exactly. But the models are so good now that some people think the models are actually sentient, that they're aware, that they've achieved intelligence. We probably all saw the stories about Google's LaMDA model, where an engineer had a discussion with it and thought, "Wow, this thing's basically alive." Obviously, there's still a little skepticism there, but who knows. The other thing that's nice is we're starting to see tools for debugging and controlling the output from these models. Because these models are so big and sort of black boxes, a lot of the time it's hard to figure out why the model gives you the output it does, or how we can keep it from outputting something that's factually wrong or offensive. So there's research on that now.

For example, Stanford University is working on making these language models controllable. And what's interesting is that typically these language models, like GPT-3, are autoregressive: their output gets fed back in as the input, because the model just predicts the next word in the sentence. So you're building the sentence word by word and feeding it back in. That's autoregressive, while Stanford is looking at using a newer generation technique called diffusion. We'll talk a little more about diffusion models when we get to the multimodal image models. But it looks like this has some promising new capabilities.
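
To make the autoregressive idea concrete, here is an illustrative sketch. The `predict_next_token` callable is a hypothetical stand-in for a real language model's next-token prediction, not an actual API.

```python
# Illustrative sketch of autoregressive generation: the model's output token
# is appended to the input and fed back in, building the text step by step.
def generate_autoregressively(prompt_tokens, predict_next_token,
                              max_new_tokens=20, eos_token=None):
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):
        next_token = predict_next_token(tokens)  # model sees all tokens so far
        if next_token == eos_token:              # stop at end-of-sequence
            break
        tokens.append(next_token)                # output joins the next input
    return tokens
```

Diffusion models, by contrast, generate the whole output by iteratively refining it from noise rather than appending one token at a time.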

Srini Penchikala: Thanks, Anthony. Roland, I know you have also been looking into this area. So do you have anything to add about that?

Roland Meertens: What I can comment on is your mention of these models becoming sentient. Yes, we indeed had this whole thing where someone said, "Oh, this model is sentient." But it seems very similar to, I don't know if you know Clever Hans, a horse which could supposedly do all kinds of arithmetic-

Anthony Alford: Exactly.

Roland Meertens: ... but it turned out that the person interviewing the horse and asking the questions was, without knowing it, giving cues. So I think this explains the sentient part. But what I am very excited about is the understanding of the world these models seem to exhibit. In the last couple of weeks, I played a lot with DALL·E Mini. And if I give it a prompt such as "a pigeon wearing a suit" or "an otter wearing a face mask", it knows that a suit for a pigeon should probably go on the chest of the pigeon. There's no explicit encoding of where a suit should go on a pigeon. There's no place you learn this; it's probably not in the training data. But you see that the AI manages to map concepts from one thing, being pigeons, to the other thing, being suits and how to wear them, and combine them, or that a face mask should go on your face, and then also on the face of an otter.

I think that is at least something that is really exciting. And I'm actually thinking that there should maybe be some new kind of AI-psychology field, where you take one of these big models and try to find out what it learned, what it can do, and how it functions inside. I think that's very exciting.

Anthony Alford: That brings up a good point. With these models, what the people building them are chasing is some kind of metric, an accuracy metric or something like that. And it may be that it's time for some new way to measure or to study these models instead of just in terms of accuracy or similar.

Roland Meertens: Maybe something to move on to: I also just listened back to the podcast we recorded last year, and we were already talking about transformer models. For me, last year was really the year of the transformer, and of really using the so-called foundational models. What I see people do in their work life, though I'm not seeing it enough yet and more people should do it, is: you have a task, one of those downstream tasks, and you take an existing model such as GPT-3 or CLIP and just see what you can do with it, see if you can use it, see how useful it is for your task. And I'm noticing that I'm training models less and less, and instead using these existing models, finding a good model that fits my use case, and seeing how I can adapt it to my own downstream tasks. Do you see this as well?
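
As a concrete example of this try-before-you-train approach, here is a hedged sketch that uses CLIP, via Hugging Face transformers, as a zero-shot image classifier. The image path and candidate labels are placeholders for whatever your downstream task needs.

```python
# Zero-shot image classification with CLIP: no training, just candidate labels.
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("fruit.jpg")  # placeholder: any local image
labels = ["a ripe apple", "an unripe apple", "a banana"]  # your task's labels

inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
probs = model(**inputs).logits_per_image.softmax(dim=1)  # image-text similarity
print(dict(zip(labels, probs[0].tolist())))
```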

Anthony Alford: One thing that I do see, related to that plus the idea of metrics: researchers are starting to use BERT as a way to measure the goodness of their models. When they have these models generate output text, instead of comparing that generated text with some reference text using n-grams or something like that, they take both the reference text and the generated text, run them through BERT to get the embeddings, and look at the similarity of the embeddings. So these foundation models are becoming a utility, for sure.
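
This is the idea behind embedding-based metrics such as BERTScore. Here is a rough sketch of the simplest version, using mean-pooled BERT embeddings and cosine similarity (BERTScore itself matches token-level embeddings, so this is a simplification):

```python
# Compare generated text to a reference via BERT embeddings, not n-gram overlap.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def embed(text):
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state  # (1, seq_len, hidden_size)
    return hidden.mean(dim=1)                       # mean-pool into one vector

reference = "The cat sat on the mat."
generated = "A cat was sitting on the mat."
score = torch.nn.functional.cosine_similarity(embed(reference), embed(generated))
print(score.item())  # closer to 1.0 means more semantically similar
```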

Roland Meertens: And one thing I really noticed from last year's podcast: we mentioned that we wanted to get access to GitHub Copilot. This has finally gone online for the general public. I think you have to pay $100 a year for it, and I know that I am absolutely going to do this. Did any of you use it?

GitHub Copilot [11:07]

Srini Penchikala: No. Actually, Roland, I was going to ask you about that, because last year we talked about GPT-3 and Copilot and other topics. I was going to see if you have seen any major developments in the Copilot project.

Roland Meertens: Well, I have now had access to it for half a year. You should listen to the interview I did with Cassie Breviu on the InfoQ podcast; we discuss it a bit. But my productivity has gone up 100%. I don't have to find anything on Stack Overflow anymore. Whenever I need to write any basic function, I don't have to think about it anymore; Copilot generates it for me. So the $100-a-year price is absolutely worth it, in my opinion.
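
The workflow looks roughly like this: you write a signature and a docstring, and Copilot proposes the body. The example below only illustrates the kind of suggestion the tool produces; actual suggestions vary and should always be reviewed.

```python
# You type the signature and docstring...
def is_valid_email(address: str) -> bool:
    """Return True if the string looks like a valid email address."""
    # ...and a Copilot-style suggestion might fill in a body like this:
    import re
    pattern = r"^[\w.+-]+@[\w-]+\.[\w.-]+$"
    return re.fullmatch(pattern, address) is not None
```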

Srini Penchikala: $100 is probably nothing for the productivity we're gaining out of it.

Roland Meertens: Yes, indeed. And I still see an increase in usage of these models, and people are discovering how to use them. Maybe also a shout-out to Hugging Face, which has a really nice collection of datasets, models, and tooling to easily host them online. It's something I have been using a lot over the last year, and I still hope that more people start using it. It's so incredibly easy to take a simple task and shape it into a form that CLIP or DALL·E or GPT-3 can work with. You can try your task before actually setting up your data collection pipeline and your labeling pipeline.
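
For instance, a simple text task can be prototyped in a few lines with the Hugging Face pipeline API; the zero-shot classification pipeline downloads a default model, so no training data or labeling pipeline is needed. A minimal sketch:

```python
# Try a task with zero training data using a zero-shot classification pipeline.
from transformers import pipeline

classifier = pipeline("zero-shot-classification")  # downloads a default model

result = classifier(
    "Congratulations!!! You have won a free cruise, click here now!",
    candidate_labels=["spam", "not spam"],
)
print(result["labels"][0], round(result["scores"][0], 3))
```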

AI/ML Training Datasets [12:37]

Srini Penchikala: Yes, definitely a lot of great information and innovation happening there. How about the datasets? I saw recently that a few different organizations have been releasing and open sourcing their datasets. Anthony, do you have any thoughts on that? That's another area where machine learning efforts become easier, because you don't have to create your own data, synthetically or otherwise; you can just use these existing datasets.

Anthony Alford: In fact, you could very much make the case that the availability of large, high-quality datasets is one of the key ingredients of the deep learning revolution. If you think about ImageNet, we didn't really get deep learning for vision until after ImageNet was available. So I think it's definitely key for us to have these datasets. Now, no dataset is perfect. People are looking at ImageNet and saying, "Well, there are a lot of mislabeled things," and so on. But yes, you're right, we definitely do see this. Amazon is releasing multilingual datasets for training voice recognition. We see open source image-text datasets for training things like CLIP or DALL·E. And they're huge: billions, again billions with a B, of image-text pairs. So I think this is a positive development. The trend is toward more, call it, democratization: open source models and open datasets that give people the power to do their own research or to take advantage of these models and datasets.
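
Datasets at that scale are usually consumed in streaming mode rather than downloaded whole. Here is a hedged sketch with the Hugging Face datasets library; the dataset ID below is a hypothetical placeholder for whichever open image-text dataset you pick from the hub.

```python
# Stream a few records from a (hypothetical) billion-pair image-text dataset.
from datasets import load_dataset

pairs = load_dataset("some-org/large-image-text-pairs",  # hypothetical hub ID
                     split="train", streaming=True)

for example in pairs.take(3):  # inspect records without a full download
    print(example)
```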

Rags Srinivas: And I think companies are figuring out how to monetize: giving away the data, but still monetizing it somewhere else. Right?

Anthony Alford: Only if you're a cloud provider.

Rags Srinivas: Exactly. Yes.

Anthony Alford: Exactly.

Rags Srinivas: That's what I meant.

Roland Meertens: But I think that the monetization part is actually very interesting, because I have the feeling that at the moment, with these models, all you need to start a new startup is creativity in how to use them. I think most downstream tasks can be solved relatively easily with models like this. And there is such a huge space for exploration in how to use them for smaller applications, from sorting fruit, to classifying whether language is foul or not, to maybe a spam filter. All these things you can now do with these foundational models, and at least get started before you are collecting your data, before you are annotating your data. I think you can get a huge speed-up; you could set up a machine learning company in a weekend.

Srini Penchikala: Yes, definitely.

Dr. Einat Orr: I think it also correlates very well with another trend that we see: tools that really focus on the data within ML, rather than focusing on the models themselves. As you said, when you get a dataset shared, you can see the mislabels. And we know that data engineers and machine learning specialists spend about 60% to 80% of their time on data preparation. As you said, a shared dataset really saves that time and allows them to focus on their models. But in other situations where you need to obtain the dataset, that part of the process is time consuming and extremely important for the accuracy of your results. And this is why there is tooling coming up that really focuses on that part of the process, and on the approach of focusing on the data itself. This is what is called data-centric AI.

It starts with tools that provide version control for data. We had Pachyderm in 2014 already, and Data Version Control, DVC, in 2016. But in the last year, we have seen three or four additional companies, such as Activeloop and Graviti (with an i), that are really focused on unstructured data, as you mentioned. Their main mission is to help you manage the data throughout the life cycle of modeling, rather than the model itself. And we all know: garbage in, garbage out. To prevent that, there's a lot of tooling that gives you an excellent visualization of the quality of your labeling, and optimization algorithms that allow you to prioritize the labeling in a way that improves your models more efficiently, because they cover the right parts of the datasets that you would like the model to improve on, and so on.

So I think it's beautiful that the tooling is coming together with this democratization of the datasets themselves. We would have the democratized datasets with a very high quality of preparation and labeling, in a way that will really allow us to get excellent results from them. And of course, commercial companies that don't share their datasets would be able to enjoy those tools as well, in order to improve that very frustrating data-preparation part of the machine learning life cycle.
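
To make the data-versioning idea concrete: tools like DVC, mentioned above, let you pin a training set to a Git revision, so a model can always be traced back to the exact data it saw. A minimal sketch; the repo URL, file path, and tag are hypothetical placeholders.

```python
# Read one specific, versioned snapshot of a DVC-tracked dataset.
import dvc.api

with dvc.api.open(
    "data/training_labels.csv",                    # hypothetical DVC-tracked path
    repo="https://github.com/example/ml-project",  # hypothetical repo
    rev="v1.2",                                    # Git tag pinning the data version
) as f:
    print(f.readline().strip())  # peek at the header of that exact version
```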

Roland Meertens: I think that what you mentioned is so true. Especially if you are just starting with a new problem, the quality of your data is so important. When training, you're always rewarding or punishing a neural network for getting an answer right or wrong, and that feedback has to be right all the time, especially if you are just starting a new project. You just have to have perfect, high-quality data, and indeed a good overview of all the data that is going into your system. I totally agree with that.

Dr. Einat Orr: Also in the later stages, once you have already deployed your model to production: following its accuracy and making sure that it's still relevant includes taking additional data points that you have just collected from production and putting them back into your training sets, in order to improve the model as fast as possible and adjust it to the changes that you see in the quality of your data as it goes along. So again, the tools that focus on data try to cover all parts of the data life cycle in ML. It's really fascinating and a very useful trend.

Roland Meertens: Do you have any tips on data selection? If you had to select data, are there any tools you would specifically use?

Dr. Einat Orr: Well, I'm afraid I'm a theoretician here. I read about the companies, but although I did practice for very many years, I'm not currently practicing as a data engineer, and I have not tried any of them. I'm just impressed by the approach, which I really believe in. As we all said from our own experience, having high-quality data is critical. And making sure it stays high quality throughout the life cycle is just as critical.

Roland Meertens: And also realizing the quality of the data you're working with, because your model is never going to perform better than the quality you put in. Realizing exactly where the weaknesses in your data are will tell you where the weaknesses in your performance will be. So yes, data quality is massively important.

Dr. Einat Orr: Galileo (https://www.rungalileo.io/), if you search for it in Google, has a beautiful offering around that as well.

Srini Penchikala: Yes, data is definitely where it all starts. I want to share a probably silly example of Google Mail's AI/ML innovation. The other day, I was composing a new email message. I typed everything up and was about to come up with a subject for the email, but Google Mail had already automatically parsed the content of the message I had just typed and came up with a recommended subject for it. And it was so accurate, it was kind of scary. Scary accurate. I thought, "Wow." It parsed through two paragraphs of content, found exactly what the focus of the email was, and suggested the email subject. I thought that was very interesting and also a little bit scary. Right?
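
Gmail's implementation isn't public, but the same idea can be prototyped with an off-the-shelf summarization model. A hedged sketch using the Hugging Face pipeline API, with a made-up email body:

```python
# Suggest a subject line by summarizing the email body very aggressively.
from transformers import pipeline

summarizer = pipeline("summarization")  # downloads a default model

email_body = (
    "Hi team, the quarterly metrics review moved from Tuesday to Thursday "
    "at 3pm because several stakeholders are traveling. Please update your "
    "calendars and send your slides to me by Wednesday noon."
)
suggestion = summarizer(email_body, max_length=12, min_length=4, do_sample=False)
print("Suggested subject:", suggestion[0]["summary_text"])
```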

Rags Srinivas: I think most of my email, I compose in two characters. Everything else is a tab.

Srini Penchikala: Yes, that's true.

Rags Srinivas: But I don't want to minimize data. Obviously, the entire life cycle is important. And Srini, I don't know if we're going to talk about MLOps in general, but we certainly talked about it last year. How does that factor into the bigger discussion? Because part of that is obviously making sure your data is accurate, making sure your data is up to date, and so on. But not only that; then you move on to ModelOps, where you're making sure your model is correct, and so on and so forth.

MLOps [20:51]

Srini Penchikala: Yes, definitely. If anybody has any thoughts on that, the whole operationalization story, bringing CI/CD and DevOps practices to machine learning, please go ahead and share those with us.

Rags Srinivas: I'll start off, and probably go with the things that I mentioned in the last podcast. I think we've gotten to a point where we've realized that it's really about all phases of the life cycle, if you will. So not just data, but also tuning your model, keeping it consistent, tweaking those parameters, making sure... Again, going back to my developer world, I want to be able to store everything somewhere. I want to be able to snapshot it somewhere. And that's where GitHub and GitHub Copilot and all those come into the picture, where they can help me not only snapshot the model, but also make it easier to tweak it and push it along the chain. So it's not something revolutionary that I'm saying here, but definitely, we are trying to mimic the DevOps model with the MLOps model. But MLOps now is really more about DataOps, ModelOps, and whatever else you want to put in front of Ops, right?
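
Experiment trackers automate exactly this kind of snapshotting. MLflow is one such tool (our example here, not one named in the discussion); a minimal tracked run might look like this:

```python
# Snapshot the parameters, metrics, and model artifact of one training run.
import mlflow

with mlflow.start_run(run_name="baseline"):
    mlflow.log_param("learning_rate", 0.01)    # the knobs you tweaked
    mlflow.log_param("data_version", "v1.2")   # which data snapshot you trained on
    # ... train the model here, writing it to model.pkl ...
    mlflow.log_metric("val_accuracy", 0.93)    # result tied to that exact config
    mlflow.log_artifact("model.pkl")           # snapshot the trained model itself
```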

Dr. Einat Orr: Yes. There are so many tools out there...

Rags Srinivas: Exactly.

Dr. Einat Orr: ... with so many different missions that partially overlap. And I think what we have seen in the last year is that they're actually offering more and more of the life cycle, in the hope, I guess, of helping their users get what they need. But it also means that their ability to cater to more complex use cases drops, because their offerings are so wide.

Rags Srinivas: Exactly.

Dr. Einat Orr: So right now, it's an open question whether there is an end-to-end solution here, or whether we are actually going to see the tools that focus deeply on one mission survive this, while the end-to-end ones might find themselves only with the beginners. It's hard to know.

Rags Srinivas: Absolutely. Is there an opinionated implementation?

Anthony Alford: If we want to put in a plug for some InfoQ content: Francesca Lazzeri gave a talk about MLOps at a recent QCon event, and we have that content on the site.

Robotics and Virtual Reality [23:03]

Srini Penchikala: The other area that's getting a lot of attention is the robotics space, with augmented reality, virtual reality, and mixed reality as part of it. Daniel, you mentioned that you have done some work on this. Would you like to lead the discussion on the robotics space, what's happening in that area, and what our readers should be aware of?

Daniel Dominguez: Okay. Basically, I think some of the most important things that are going to happen in the near future are going to be related to robotics and virtual reality, mainly with the metaverse. As you know, as recently as last year, the metaverse was not something we were thinking about or seeing. But right now, with all the things happening around these new trends and technologies, there are going to be a lot of things to catch up on in this area, mainly in artificial intelligence. For example, Meta, with the Meta AI lab, is doing amazing stuff regarding e-commerce, deepfake detection, and a lot of things, mainly in augmented reality and mixed reality. This year, Apple is also going to be showing the advances in their augmented reality glasses and everything happening along the way.

I think there are going to be a lot of things happening at the intersection of artificial intelligence and machine learning, mainly focused on metaverses. There's also a lot happening with blockchain technology that is related to artificial intelligence and to the metaverse, for tokenization and the other things that are going to happen in this space. So there's going to be a lot of noise and a lot of developments related to this, and I think it's definitely something our readers are going to start taking a look at among the new technologies coming in the next year.

Anthony Alford: Daniel, I was wondering about your thoughts on this. In robotics research, we see this concept of embodied AI, and a lot of researchers are essentially doing simulations, 3D-world challenges for example. What are your thoughts on that? Do you feel like that's a good way to build real-world robots?

Daniel Dominguez: Yes, definitely. Right now, for example, AWS has tools for robotics simulation. There are a lot of simulations that companies or people interested in robotics research can run with those tools; they can simulate all the environments and all the aspects of what they want to do in robotics, and this is based on virtual reality. So there is a lot of important work that can be done before the actual robot is built. Also in medicine, there are a lot of tools now. For example, I read some time ago about a first surgery performed through virtual reality, with a doctor in one place operating on a patient in another place. Everything was done through virtual reality, and that virtual reality was controlling the real robot in the real environment. That was a pretty cool thing to see, how robotics and virtual reality are interacting in the virtual world and in the real world.

Roland Meertens: Last week, I was actually at a meetup where the CEO of the company couldn't be there, but he had his virtual reality glasses with him, and the company was building a remote-presence robot. So he was virtually present, wearing his virtual reality goggles on the train and controlling the robot at a distance. These things are just amazing to see: you can be at any place at any time, as long as there is something there for you to control. And especially during the pandemic, I noticed that hanging out with people in virtual reality is actually a great alternative when you cannot meet each other in person. Did anyone else here try this?

Rags Srinivas: I prefer meeting people in person, but I'm old-fashioned, let's put it that way. I think the tools help a lot, though. And I also saw one demo that kind of blew me away: a virtual reality performance where the conductor was virtual. I think this was a pretty famous, much-talked-about demo. Essentially, parts of the orchestra were in different cities, and they were able to have a seamless concert. This was at the peak of COVID, I guess. I still think those are very powerful examples of where AI has made inroads like never before. Right?

Roland Meertens: Going back to your first comment about liking to meet people in person: for me, these virtual reality meetings are a great alternative to a Zoom call. We are recording this podcast, and I've seen Srini many times virtually, but I would love to see him in person. Going to him would take a lot of time and effort, though, so that's why I like talking to him on the phone like this, and virtual reality just adds an extra dimension to it. I can recommend playing mini-golf together with people, to have a bit of real bonding going on without everybody actually flying and driving to a mini-golf place.

Srini Penchikala: Yes. You can do a lot more than just screen sharing. Definitely, in manufacturing and industrial use cases, we hear the term digital twins. That's pretty much what this is: the virtual self of a particular entity, whether it's a car or a robot or anything else. So a lot of innovation is happening there. Anyone else have any thoughts?

Roland Meertens: Maybe going back to that, I think, and this is the second time I'm saying this in this podcast, that we need more psychologists; there needs to be more research and more reasoning around embodiment. When do you feel present somewhere? When do other people appreciate you being present? A couple of weeks ago, I was at a robotics conference with a friend, Kimberly, who joined the podcast last year, but she couldn't be there. So she was in a robot body, while I was physically present in Philadelphia. And you did see a difference in how people react to people in person versus people in a robot body. This is all something we have to figure out: how do we best represent everybody? How do people react to you? How can you really feel present at a conference if you're not physically present?

Dr. Einat Orr: Well, I'm still not on social networks. So for me, from a virtual presence perspective, I'm a generation behind.

Roland Meertens: And this is why it's so important to come to the InfoQ conferences.

Daniel Dominguez: It's funny, because in social media, for example, one thing you have to do is have a clear username, to have an established online presence, so people know who you are and your personal brand has search engine recognition. Now, with the metaverse and virtual reality, the same thing is going to happen, but with your avatar. There are going to be a lot of avatars and a lot of platforms, so you have to start thinking: how are you going to be recognized on those platforms? The same way you had a username that was probably the same across all your social networks, now your avatar, or your physical aspects in the virtual world, need to make you recognizable in those virtual worlds. What the personal brand was online is now going to exist in the virtual world. So there are going to be a lot of things happening around that space as well.

Dr. Einat Orr: There's also the concern that, as we know from social networks, when people don't really identify themselves, they allow themselves to behave far more radically. What happens when that has a physical aspect to it, even if it is a virtual physical aspect? I think there's a lot of moderating to be done here.

Srini Penchikala: Yes. And I think that's where the psychology comes into the picture again; Roland has a great point. As we bring the virtual world as close as possible to the real world, we have to treat it with the same expectations and the same behavior as in the real world. Right?

Dr. Einat Orr: Maybe we also need philosophers.

Roland Meertens: Yes. And we just have to discover what makes you feel real and what acceptable behavior is. This is something we see happening right now: virtual reality feels a lot more real when you have robot hands in virtual reality. That's at least one thing. And it's just the same when you are a robot at a physical place: do you need to have hands, and what are the acceptable behaviors? When cell phones were first introduced, we all had funny ringtones. Now everybody understands that you don't have a ringtone; your phone is silent. These things will change over time, and we have to figure them out as a society. I think that is definitely an interesting emerging trend.

AI/ML Infrastructure, Cloud Computing and Kubernetes [31:39]

Srini Penchikala: Yes, they evolve over time. Okay. I definitely want to make sure that we have time for the other part of this overall AI/ML discussion, which is the infrastructure. Infrastructure is the foundation; it's where basically everything starts. Without good infrastructure, we would not have any successful AI/ML initiatives. So let's talk a little bit about that. Rags, I know you've been focusing on this area for a while, especially on how technologies like Kubernetes can help with developing and deploying software applications, including machine learning apps. Can you discuss how you see AI/ML infrastructure shaping up since last year, and what's new in this area?

Rags Srinivas: I used to joke that the silver bullet for all your problems is Kubernetes, which is not true. We know that.

Anthony Alford: Now you have two problems.

Rags Srinivas: Now you have two problems. Right, exactly. But I think the point is that compute now has multiple dimensions. The first thing to recognize is that you really don't have the power backing your on-prem systems, so most of the computing is happening in the cloud. I don't have statistics for that, but I have a feeling that quite a bit of the data might still remain on-prem, and that is another thing you want to consider when it comes to infrastructure. There are bigger and bigger pipes being built, even between cloud providers.

Multi-cloud has become a big thing in the Kubernetes world. There is a lot of talk about multi-cloud, but really not many are actually doing it. And if you think about AI, there is definitely a case to be made for cloud-agnostic computing. You want to be cloud-agnostic and be able to, for example, use GPUs in Azure, use message passing on GKE, and use whatever my favorite CPU is on EKS. I need to be able to use those different combinations. And I don't think that has really been cracked yet, primarily because, even though Kubernetes is solving a lot of problems for the world, especially on the infrastructure side, it's still freaking complicated. It is very complicated from a user perspective. And if you expect users to be able to set it up and tune it themselves and all that, it becomes really hard.
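
One concrete piece of this cloud-agnostic picture is asking Kubernetes for accelerators explicitly, so the same pod spec can run on whichever cloud's cluster has GPUs. A hedged sketch using the official Kubernetes Python client; the container image name is a hypothetical placeholder.

```python
# Schedule a training pod that requests one GPU, wherever the cluster runs.
from kubernetes import client, config

config.load_kube_config()  # uses your current kubectl context

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="training-job"),
    spec=client.V1PodSpec(
        restart_policy="Never",
        containers=[
            client.V1Container(
                name="trainer",
                image="example.io/trainer:latest",  # hypothetical image
                resources=client.V1ResourceRequirements(
                    limits={"nvidia.com/gpu": "1"}   # ask the scheduler for one GPU
                ),
            )
        ],
    ),
)
client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```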

So unfortunately, again, I don't think there has been any major development that has made it a lot easier from an infrastructure perspective. I think Amazon recently announced a packaged offering for HPC, which is very similar to what Azure had before. The idea is: can we bundle up a few of these things into the kind of opinionated implementation I was talking about before? Something similar to that. And AI stretches most compute to the limit. For example, being able to autoscale to thousands and thousands of nodes really stretches the cloud limits, and Kubernetes as well, quite a bit. But I'm still a big fan of multi-cloud and cloud-agnostic computing. And I think we are moving there, actually quicker than I thought we would, because I know that none of the clouds really want to do it unless they are forced to, kicking and screaming, and that's where we are right now.

Closing Remarks [34:59]

Srini Penchikala: Thanks, Rags. I think that's the last topic we wanted to discuss in this podcast, so let's go ahead and wrap it up. Let's go around the room for brief closing remarks, and then we can end the podcast. Einat, we'll start with you.

Dr. Einat Orr: Thank you very much. I think it was a fascinating discussion. My take is, of course, about the importance of the data: its quality and its manageability, from the inputs through the intermediate artifacts. The model itself is also data, and of course so are the implementation and the tracking in production. So we are dealing with all kinds of data artifacts that require our attention. The better their quality, the better the quality of our work. And focusing on tooling and best practices around that would improve the work not only of ML engineers, but of any data practitioner using data for any other purpose as well.

Srini Penchikala: How about you, Rags?

Rags Srinivas: It was a great discussion; there are a lot of things I learned here being part of the panel. Essentially, I'm hoping that the infrastructure will provide for more cloud-agnostic multi-cloud, and make it easier, from a cost perspective, to solve along all these different dimensions. It's not an easy problem. I'm sure it's a multi-year effort, but I don't think it's technologically very hard; it's just politically not seen as a big thing to do right now. But hopefully, something is going to change, something is going to trigger it, and I think AI might be the trigger that makes it happen.

Srini Penchikala: How about you, Daniel?

Daniel Dominguez: I think it was a very interesting conversation. A lot of things are going to change in the community, the trends, and the technologies, and I think it's very cool to see all the things that are going to happen in this space over the next few years.

Srini Penchikala: Roland?

Roland Meertens: Maybe one plug: if people want to learn more about robotics, subscribe to the Weekly Robotics newsletter by Mat Sadowski. I think he started it a few years ago. It's really good; it gives you all the latest and greatest in robotics. So yes, I can recommend his newsletter for robotics-minded people.

Srini Penchikala: And Anthony?

Anthony Alford: I can just say what an exciting time it is to be involved with this. There are so many developments going on; it feels like the boundaries are being pushed constantly. I've lived through a couple of AI winters myself by now, and I sometimes wonder: is a little bit of the shine wearing off? But as long as we keep seeing new developments, like these image-text models and the democratization, I think we'll continue to see progress, and that's very exciting.

Srini Penchikala: Thank you all; I agree with all of you. We used to hear the phrase "software is eating the world". Now I want to say AI and ML are eating the world. So thank you all for your participation. That's a wrap for this podcast. Thank you.
