





Rosaria Silipo on Codeless Deep Learning and Visual Programming


In this podcast, Srini Penchikala spoke with Dr. Rosaria Silipo about codeless deep learning and visual programming, with a focus on low-code visual programming to help data scientists apply deep learning techniques without having to code the solution from scratch.

Rosaria Silipo is currently the head of data science evangelism at KNIME, the open-source data analytics platform. She is the author of over 50 technical publications, including books like Codeless Deep Learning with KNIME, Practicing Data Science: A Collection of Case Studies and A Guide to Intelligent Data Science.

Key Takeaways

  • Learn about innovations in deep learning such as convolutional neural networks (CNN), long short-term memory (LSTM) and generative adversarial networks (GAN).
  • Generate synthetic (fake) data for testing and model training purposes where the real data is protected by privacy or copyright laws.
  • Low-code, no-code techniques for using deep learning in your applications.
     

Transcript

Introductions [00:16]

Srini Penchikala: Hey everyone, just to let you know, our online software development conference, QCon Plus, is back this November. The event will be held from November 1st through November 12th. You can expect curated learning on the topics that are relevant and matter right now in software development. QCon Plus is a practical conference, laser-focused on learning from the successes and failures of domain experts at early adopter companies. If you enjoy the conversation we have on this podcast, you'll get a lot out of QCon Plus. To learn more about the conference, visit the website at qcon.plus.

Hi everyone. My name is Srini Penchikala. I am the lead editor for the AI, ML, and Data Engineering community at InfoQ. Thank you for checking out this podcast. In today's podcast, I will be speaking with Rosaria Silipo from the KNIME team about deep learning, specifically the new trend in deep learning called codeless deep learning and visual programming. So let me first introduce our guest to the listeners. Dr. Rosaria Silipo is currently the head of data science evangelism at KNIME, spelled K-N-I-M-E, the open-source data analytics platform. She is the author of over 50 technical publications, including books like Codeless Deep Learning with KNIME, Practicing Data Science: A Collection of Case Studies and A Guide to Intelligent Data Science.

She holds a doctorate degree in bioengineering and has spent more than 25 years working on data science projects for companies in a broad range of fields, including IoT, customer intelligence, financial services, and cybersecurity. Rosaria is no stranger to the InfoQ audience; she has already published articles on the infoq.com website on machine learning and deep learning topics. Rosaria, thank you for joining me today on this podcast. Before we get started, do you have any additional comments about the research and projects you've been leading recently that you would like to share with our listeners?

Rosaria Silipo: Thank you for inviting me. Yeah. So I think you've said everything. I mean, I don't want to go into the details of what I have done before in my research and projects, because otherwise it's going to take a long time to explain everything. I can only say that I've been in the space of data science for a really long time. When I started, it was not called data science; it was actually called artificial intelligence. I started with artificial intelligence for my thesis at the beginning of the 90s. My master's thesis was on neural networks at that time. And then after my master's thesis, I continued in the field that was called data mining; that was at the end of the 90s. Then I continued and the field was called big data, and we are already in the year 2000. And then after that, it became data science, and we are already in 2010 or so. And now, finally, I am back with artificial intelligence. So basically I completed the full circle of the data analytics names.

Deep Learning [03:16]

Srini Penchikala: Yeah, definitely, the last 25 years have seen a lot of evolution and innovation in the artificial intelligence and machine learning space. We can get started. So, first question: can you define deep learning for our listeners who are new to this technology?

Rosaria Silipo: So, deep learning is actually the evolution of neural networks. At the end of the eighties, beginning of the nineties, there was big hype around neural networks. Backpropagation was published at the end of the 80s, '87, I think. And everybody started using feed-forward neural networks, the multilayer perceptron, for everything, basically, trained with backpropagation; everything would be solved with a feed-forward neural network. At that time, even though there was huge interest in neural networks, they quickly became hard to train because the machines were not up to it; there was not enough memory. So usually we had to build the neural network and really, really spare the hidden layers, because otherwise the machine wouldn't be able to train it. So deep learning is the evolution of those neural networks from the nineties.

It started being called deep learning because you could add more than one hidden layer to the feed-forward neural architecture. As I said, at the end of the 80s, we were always sparing the number of hidden layers. But with deep learning, you could have deeper and deeper networks because you could have more and more hidden layers. Bigger neural architectures were also possible because of the better hardware. But at the same time, not only could more complex architectures be implemented, but also new, more sophisticated neural paradigms, like, for example, long short-term memories or convolutional neural networks.

Long short-term memories, for example, are not exactly new. I think they were proposed at the end of the 90s and quickly abandoned because they were a bit too complex for the hardware of the time. Now, with modern hardware, they became easier to train, and they got rescued. So the new paradigms are not completely new either. They were the more sophisticated neural networks of the time, and the current deep learning rescued these old paradigms, made them more complex and definitely easier to train. So I would say that deep learning is the evolution of the old neural networks field.
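To make the "more hidden layers" idea concrete, here is a minimal sketch, assuming a TensorFlow/Keras setup and toy data (none of this comes from the podcast): a plain feed-forward network trained with backpropagation that becomes "deep" simply by stacking several hidden layers, something the hardware of the 90s could rarely afford.

import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Toy data: 1000 samples, 20 features, binary target (purely illustrative).
X = np.random.rand(1000, 20)
y = np.random.randint(0, 2, size=1000)

model = keras.Sequential([
    layers.Input(shape=(20,)),
    layers.Dense(64, activation="relu"),    # hidden layer 1
    layers.Dense(64, activation="relu"),    # hidden layer 2
    layers.Dense(32, activation="relu"),    # hidden layer 3, already "deep" by 90s standards
    layers.Dense(1, activation="sigmoid"),  # output layer
])

# Backpropagation, via a gradient-based optimizer, trains all layers end to end.
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=5, batch_size=32, verbose=0)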

Srini Penchikala: So what are some deep learning techniques or technologies that our listeners should be familiar with if they're interested in learning about them, or at least just want to be aware of them?

Rosaria Silipo: At KNIME, in my team, so I'm the head of the evangelism team at KNIME. So at KNIME, we had a long discussion, and we had on one side the data scientists with many more years of experience, and on the other side the younger data scientists with fewer years of experience. The, let's say, more senior data scientists were saying that deep learning is nothing new, it's just more powerful neural networks, and the younger data scientists were saying that, of course, everything is different. So we had a long discussion about how deep a neural network has to be, how many hidden layers it has to have, to qualify as a deep learning neural network. A long, long discussion on that. As for the new techniques and technologies, of course, deep learning has introduced the possibility to train traditional feed-forward, backpropagation-trained neural networks with more hidden layers.

This I already said. So a lot of hidden layers already qualifies as deep learning, but then of course the biggest jump came with the convolutional neural networks. In 2012, AlexNet, I think, was the first convolutional neural network that produced fantastic performance on the ImageNet Large Scale Visual Recognition Challenge. And after that fantastic performance of AlexNet, everybody started to pay attention to convolutional neural networks. And they also started to rescue all the neural paradigms that had been abandoned in the past. So this was the start: convolutional neural networks made computer vision problems much easier to solve, for example face recognition. And definitely, they were the origin of the deep learning frenzy. On this particular topic, when we used to work with neural networks in the 90s, I remember that one of the biggest objections to neural networks was that they were not interpretable.

So for example, in hospitals or in medicine, physicians need to be able to understand what the decision process of the model is. And I remember that was a huge drawback of the neural networks. When the convolutional neural networks somehow made the face recognition problem much easier to solve, of course, I was wondering if this problem of non-interpretability would still be a problem. And then I remember that I was thinking, "Well, if you have 1 million faces to recognize, I mean, who cares if you cannot really understand the decision process," right? At some point there are problems where the quantity of data to analyze is so large that if the accuracy is good enough, you just overlook the fact that it's not interpretable. So, after the convolutional neural networks, somebody started to reinvestigate the long short-term memories. They took them off the shelves and they started implementing them.

Long short-term memories have this fantastic feature that they remember the past. So, if you have a time series, for example, with a feed-forward neural network you can still predict the next sample based on the past samples, but somehow it doesn't perform well, because the feed-forward neural network is not able to remember the past samples. The long short-term memories, on the contrary, have a structure so that the memory of the past inputs can be retained. And that's why long short-term memories started to be used, of course, for time series analysis, but also, for example, for NLP, for natural language processing, because in that case the past becomes the context of the sentence, all the words that have been written or said up to the current moment, and the context, of course, makes it easier to understand the meaning of a sentence, right?
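As a concrete illustration of that contrast, here is a minimal sketch, again assuming Keras and with invented shapes: an LSTM consumes a window of past samples and keeps an internal state, so the prediction of the next value can depend on the history rather than on a single input.

import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Toy time series: 500 windows of 30 past samples, 1 feature each,
# with the next value as the target (purely illustrative).
X = np.random.rand(500, 30, 1)
y = np.random.rand(500, 1)

model = keras.Sequential([
    layers.Input(shape=(30, 1)),
    layers.LSTM(32),   # recurrent layer: retains memory of the past samples in the window
    layers.Dense(1),   # predict the next value of the series
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=32, verbose=0)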

And so the long short-term memory became another big workhorse for deep learning, because it made a lot of natural language processing problems solvable. Another branch of the new deep learning paradigms is the generative adversarial networks, GANs for short. GANs are famous for generating all those deepfake images or texts, or other kinds of data. They are trained on real data, and even though they're trained on real data, they become able to generate consistent, realistic, yet fake data from literally nothing, from thin air, because they take noise as input, and then, based on the training they had before, they can produce a realistic image or text. There are more and more new neural paradigms popping up all the time, and more are currently under study. But I would say that these three have been the biggest innovations of deep learning: convolutional neural networks, long short-term memories and generative adversarial networks.

Srini Penchikala: Yeah. Regarding the GANs that you just mentioned, what are the business use cases for using these generative adversarial networks?

Rosaria Silipo: Yeah, that's an interesting question. So some people say, "But can GANs do anything else besides generating deepfake images or deepfake texts?" A lot of the use cases for generative adversarial networks are, of course, about generating fake data. Generating fake data is a big thing, actually. Now, here I go into something that I don't know very well, but I'll say it anyway. For example, suppose I have a data set of images that are protected by privacy laws or by copyright laws, right? I can use them, but I cannot resell them or redistribute them.

Now, if I generate fake images, of course, I train the model on a lot of different images so that there is no traceability. The original images cannot be traced back in the training set. So if I train my network on a lot of images and then generate images that have nothing to do with the original images in the training set, but are still believable and realistic, then I can use these new images instead of the previous ones that were covered by privacy or covered by copyright. Now, the copyright one I'm not completely sure about, because that's a legal issue, but in theory, those are new images generated out of thin air from a model that has somehow learned from the images in the training set what a realistic image is. Creating new data is definitely the biggest use case for generative adversarial networks.
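To make the mechanics concrete, here is a minimal, heavily simplified GAN sketch, assuming Keras; the data dimensions, network sizes and training loop are invented for illustration, and the random "real" data stands in for whatever protected data set would be used in practice. A generator turns random noise into synthetic records, a discriminator learns to tell real from generated data, and the two are trained in alternation until the generated data looks realistic.

import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

noise_dim, data_dim = 16, 8   # e.g. 8 numeric columns of a tabular data set

# The generator maps random noise to a synthetic record.
generator = keras.Sequential([
    layers.Input(shape=(noise_dim,)),
    layers.Dense(32, activation="relu"),
    layers.Dense(data_dim),
])

# The discriminator outputs the probability that its input is a real record.
discriminator = keras.Sequential([
    layers.Input(shape=(data_dim,)),
    layers.Dense(32, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])
discriminator.compile(optimizer="adam", loss="binary_crossentropy")

# Combined model: the generator is trained to fool the (frozen) discriminator.
discriminator.trainable = False
gan = keras.Sequential([generator, discriminator])
gan.compile(optimizer="adam", loss="binary_crossentropy")

real_data = np.random.rand(256, data_dim)   # stand-in for the protected data set
for _ in range(100):                        # simplified alternating training loop
    noise = np.random.normal(size=(64, noise_dim))
    fake_batch = generator.predict(noise, verbose=0)
    real_batch = real_data[np.random.choice(len(real_data), 64, replace=False)]
    # 1) update the discriminator on a mix of real (label 1) and fake (label 0) records
    discriminator.train_on_batch(np.vstack([real_batch, fake_batch]),
                                 np.concatenate([np.ones(64), np.zeros(64)]))
    # 2) update the generator so its output gets labeled "real" by the discriminator
    gan.train_on_batch(noise, np.ones(64))

# After training, new synthetic records are generated from noise alone.
synthetic = generator.predict(np.random.normal(size=(10, noise_dim)), verbose=0)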

Low-code / No-code Deep Learning [12:01]

Srini Penchikala: Like you mentioned, there have been a lot of innovations, right? Especially with the introduction of TensorFlow from Google and your own platform, machine learning and deep learning have become more accessible and kind of mainstream technologies for developers to use, right? You have recently published on the topic of codeless deep learning, which is one of the emerging trends, on how to build, train and deploy various deep neural network architectures using the KNIME Analytics Platform. Can you talk about this new trend of low-code or no-code deep learning, also called visual programming? How is it different from traditional ML and DL tools?

Rosaria Silipo: Visual programming is not new. KNIME has not invented visual programming. Visual programming is this interaction feature where, instead of writing code, you drag and drop blocks. So instead of writing a text of instructions, you just build pipelines of blocks. That's what visual programming is. And visual programming is the basis of many low-code and no-code tools, not only in data science, but also in a bunch of other branches of the software industry; for example, many reporting tools are now fully low-code based, right? And many project management tools are also fully low-code based. So low-code applications are relatively widespread, and they're all based on this particular feature of visual programming. Now, the difference between a low-code tool and a code-based tool for data science is this: in a low-code tool, you proceed with drag and drop. You take your blocks from a repository, drag and drop them into an editor, build your pipeline of blocks, and this pipeline takes your data from A to B, right?

In the code based tools, you write instructions. So you write one instruction, then another instruction, then another instruction, and then again, the sequence of instructions takes your data from A to B. So at the end, they do the same thing. Of course, it depends on the kind of solution and the kind of tool that you are adopting, how extensive the coverage of the machine learning algorithms for example is, or how flexible it is. But in principle, in one case you have a line of blocks to take your data from A to B, in the other case, you have a sequence of instructions to take your data from A to B. So they do exactly the same thing. In practice, there is no difference. Of course, they have to be good enough tools, but that's true either if they're code based or if they are visual programming based. They have to be extensive, they have to be flexible, they have to be correct, right? And they need to have all these features to be reliable.
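As a concrete illustration of that "sequence of instructions from A to B", here is a hypothetical scikit-learn pipeline, not taken from the podcast, which is the code-based counterpart of dragging three blocks into a visual workflow.

from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

# Each step plays the role of one block in a visual workflow:
# fill in missing values, normalize the features, then train a model.
pipeline = Pipeline([
    ("impute", SimpleImputer(strategy="mean")),
    ("scale", StandardScaler()),
    ("model", LogisticRegression()),
])

# pipeline.fit(X_train, y_train)   # takes the data from A (raw) to B (trained model)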

Srini Penchikala: I think similar innovations are happening in other areas, like you mentioned earlier. GitHub has something called Copilot, which is trying to help developers do the same thing. And OpenAI recently released something called Codex, which is an AI-based pair-programming type of tool. So definitely very similar developments are happening. In terms of the low-code deep learning approach, what do you see as the pros and cons? Like every new tool, these solutions have benefits as well as some limitations. So how do you see low code working in that sense? Why is it easier compared to other solutions, and what are the limitations?

Rosaria Silipo: So deep learning so far has been mainly in the labs, right? In a research lab, you had a bunch of people trying new architectures and experimenting with use cases, new encodings of the data, new architectures, new parameters, new everything, right? But now I think deep learning has reached a more mature stage, so it's going to move out of those research labs and end up being used by, I don't know, an accountant or a doctor, or a physician, or other professional figures that don't necessarily know how to code, or don't want to code. It's not that they cannot; maybe they want to be fast, and so they prefer a quicker tool in terms of assembling the solution. So the pros and cons are mainly a subjective preference in this case. If you like to code, then of course a code-based tool is the best that you can do, right?

However, if you don't like to code, or you don't know how to code, or you learned how to code in Java but now you would have to learn Python and you don't have the time, because your job is something else, then probably a visual programming based solution is an easier approach. So the pro is that it's easier to learn, of course, if you don't have the time to learn another language, or if your preference is for learning visually rather than learning sequentially with coding. Besides that, there is no difference. So the only difference is in the preference that you have in learning. Visual learning is a bit faster, especially for some people, than sequential learning in coding. Another, maybe secondary, advantage is that when you use visual programming, you have the whole solution in front of you, right? Instead of pages and pages of code.

So you can document your solution at a higher level, and you can have an overview of what is being done in each part of the solution, right? In the code, instead, maybe you have to scroll down and read the comments in each part of the code. So it's mainly, I would say, an easier approach to learning. Once I had a discussion with some people on LinkedIn and a guy said, "Well, I find it easier to learn the tool that I already know." And I think nobody can object to that. So of course, it depends a bit on your preference. If you have to learn something new and that something new is based on visual programming, then this part might be easier to learn, and of course that's better. If you prefer coding, then that's your own preference.

Srini Penchikala: Yeah, definitely. I think these tools bring some value, but they need to be used in the right context, right? So it looks like these tools are not going to replace the developers, that's for sure. People are always saying, "More and more tools are coming up, so we may not need developers anymore," but usually that's not the case. So who are the target users of these low-code and no-code tools? And does it mean that you don't need to know math anymore, or you don't need to know data science anymore?

Rosaria Silipo: Right. So, to the first question: no. The developer's job is safe, so don't worry about that. They can keep developing. The target audience for low-code tools, for these visual programming based tools, is, as I said, people who don't have the time or don't want to learn a new programming language. For them, with the time they have available, it's easier to produce a solution based on visual programming, also because in this way they can concentrate more on the math behind the solution, on the process, on the algorithm behind the solution, rather than on the programming part. And there are more and more of these people. As I said, deep learning is coming out of the lab now, and it has reached this maturity stage where many more professional categories are going to start adopting it. And you can't expect a physician to start learning Python, right? If he doesn't have time, he doesn't have time, especially in COVID times.

So of course that's not possible. Do they need to learn math? Oh God, yes. Visual programming removes the coding barrier; it makes it easier to assemble a solution without knowing how to code. But the algorithms behind it, you have to know. You need to know what gradient descent does, you need to know what the learning rate is, you need to know what regularization is. Because even in the low-code tools, there are going to be checkboxes or something that you need to enable if you want to have a regularization term added to your cost function, or if you want to have a lower learning rate, or if you want to have batch training or something.

So they become parameters in a visual way, but they're always there, and you need to set them accurately, knowing what you're doing, rather than randomly. So it's not like automated machine learning that takes care of everything for you. And, this is another opinion of mine, I think automated machine learning, for complicated problems, might be of limited utility. It is really like programming deep learning solutions, only you do it visually. And as I said, that removes the coding barrier, but it doesn't remove the algorithmic understanding part. So please, if you want to start with deep learning and you want to use an easier solution based on visual programming, still do learn your math and your statistics, please do.
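The checkboxes and fields described here map one-to-one onto code-level hyperparameters. A minimal sketch, assuming Keras and with invented values, of the same knobs a visual tool would expose:

from tensorflow import keras
from tensorflow.keras import layers, regularizers

model = keras.Sequential([
    layers.Input(shape=(20,)),
    layers.Dense(64, activation="relu",
                 kernel_regularizer=regularizers.l2(0.01)),   # the "regularization term" option
    layers.Dense(1, activation="sigmoid"),
])

model.compile(
    optimizer=keras.optimizers.Adam(learning_rate=0.001),     # the "learning rate" field
    loss="binary_crossentropy",
)
# model.fit(X, y, batch_size=32, epochs=10)                   # the "batch training" option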

Innovations in Deep Learning [20:35]

Srini Penchikala: Yeah, definitely. You need to know what's happening under the hood, right? So what are some innovations happening in the deep learning space? What do you see coming up in the future, in the next few years, or even after that?

Rosaria Silipo: So there are a few new paradigms coming out at the moment, but I have to say, as I said before, deep learning seems to have reached its maturity. There are a few successful working solutions available; they have been trained on tons of data, they seem to be working nicely in production environments, and they are constantly updated with new data and with slightly more sophisticated techniques. So I think that the key now for the progress of the deep learning space is actually in the maintenance and improvement of these already existing technologies. That means optimization, that means updating the old models, replacing the old models, monitoring the models and triggering a new retraining every time the old models somehow don't seem to work correctly, or every time new data is available.

So all these operations, which are mainly about the maintenance and improvement of what already exists, I think are going to be the biggest challenge for deep learning in the future. It is a bit as if the deep learning algorithms have moved out of the research lab, as I said before, and now they are in production environments, and now this has become more of an engineering kind of task rather than a research task, as it was before. In the end, a lot of the production deep learning solutions that I see around are masterpieces of engineering, because the data preparation, the data updates, the retraining on new data, the constant monitoring, the logging, the auditing, the storage, the dependencies, all this stuff, sometimes they're really masterpieces of engineering to work in a professional production environment.
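As a sketch of what that operational side can look like in practice, here is a minimal, hypothetical monitoring-and-retraining routine; the function names, threshold and logging are invented for illustration and are not tied to any specific MLOps product.

from sklearn.metrics import accuracy_score

ACCURACY_THRESHOLD = 0.85   # assumed acceptance level for the deployed model

def monitor_and_maybe_retrain(model, X_recent, y_recent, retrain_fn):
    """Score the deployed model on recently labeled data and trigger retraining
    when performance drops below the agreed threshold."""
    current_accuracy = accuracy_score(y_recent, model.predict(X_recent))
    print(f"monitoring: accuracy on recent data = {current_accuracy:.3f}")  # logging / auditing
    if current_accuracy < ACCURACY_THRESHOLD:
        new_model = retrain_fn(X_recent, y_recent)   # retrain on the new data
        return new_model                             # replace the old model in production
    return model                                     # keep the current model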

Srini Penchikala: Right. The same challenges that software developers went through several years ago, and now those practices have matured, right, in terms of DevOps and built-in security and everything, right?

Rosaria Silipo: We talk about MLOps now in this space; that's the word that is going around. But I think that's definitely the problem, because a model per se can be fantastic, but if you don't make it able to work in real conditions, then of course the advantages you get are limited, so it's not that useful. So I definitely think it's more of an engineering problem than a research problem now.

Srini Penchikala: Yeah, definitely. Any solution, including machine learning models, is only valuable once it's in production and being used by the end-users, right?

Rosaria Silipo: Right.

Srini Penchikala: So do you have any additional comments before we wrap up today's discussion?

Rosaria Silipo: I have a lot of experience in producing solutions, and I must say that I also teach at universities, and sometimes I get this objection: "Why do I have to look at old machine learning if everything now can be solved with deep learning," right? And that's a bit what the students especially focus on. I would like to give whoever is starting in this field a word of advice: it's better to start easy, with an easy architecture, and then maybe to complicate the architecture later. Because if you start with something complicated and it doesn't work, you don't know which one of the many parameters that you have been tuning is the one that is not performing correctly.

Whereas, on the contrary, if you start from something easy, the free parameters are not that many, and then you can at least get an idea of whether the model is really not working because it is not complex enough, or whether it is not working because you still have to figure out the perfect configuration for it. So my advice is always to start easy. Even if it's not deep learning, start with linear regression, something simple. You would be surprised how easy it is to solve classic data science problems with classic techniques. And then, if it doesn't work, move on to something more complicated, for example deep learning networks.
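A minimal sketch of that "start easy" advice, assuming scikit-learn and a standard toy data set (the split and model choice are illustrative; logistic regression is used here because the toy task is classification): fit a simple, few-parameter baseline first and only reach for a deep network if the baseline clearly falls short.

from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# A simple baseline: scale the features, fit a linear classifier.
baseline = make_pipeline(StandardScaler(), LogisticRegression())
baseline.fit(X_train, y_train)
print("baseline accuracy:", baseline.score(X_test, y_test))

# Only if this baseline falls clearly short is it worth moving to a deeper
# architecture, where many more free parameters will need tuning.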

Srini Penchikala: And also Rosaria, you have one of your books available from the publishers and you have some promotional information. Could you go ahead and share that with our listeners?

Rosaria Silipo: Right. So on the KNIME side, we have a little branch called KNIME Press. On the KNIME Press webpage, you can find a number of books. KNIME Beginner's Luck is the introductory book on how to use KNIME. You can download it for free using the code “InfoQ-podcast”; this promotion code is available until the end of the year. So remember InfoQ-podcast to get a free download of the ebook KNIME Beginner's Luck from the KNIME Press webpage. This book is about how to take your first steps in the KNIME Analytics Platform world, so into this visual programming based tool. The tool is open source and free to use, free to download. So just download it, take the book, and start playing with it. And then you can check whether visual programming is something for you, or whether you are more of a coding person.

Srini Penchikala: Thank you. Yeah, definitely, for our listeners, we will include this information, the link to the book and the promotion code, as part of the transcript of this podcast. So if anybody's interested, they can check it out. We have reached the end of our podcast. Rosaria, thank you very much for joining today. It's been great to catch up with you and discuss the deep learning topic, especially the new innovations here in terms of developer productivity and low-code deep learning approaches. To our listeners, I want to thank you again for listening to this podcast. If you would like to learn more about machine learning or deep learning topics, check out the infoq.com website. I also encourage you to listen to the recent AI/ML Trends podcast that we published last month. That podcast talks about the other innovations in this space, like GPT-3, MLOps, drones, containerization, and so on. So definitely a lot of good stuff in that podcast as well. So Rosaria, thank you for your time. We will talk to you next time. Thank you.

Rosaria Silipo: Thank you.

About the Guest

Dr. Rosaria Silipo is currently the head of data science evangelism at KNIME, the open-source data analytics platform. She is the author of 50+ technical publications, including books like “Codeless Deep Learning with KNIME,” “Practicing Data Science: A Collection of Case Studies” and “A Guide to Intelligent Data Science.” She holds a doctorate degree in bioengineering and has spent more than 25 years working on data science projects for companies in a broad range of fields, including IoT, customer intelligence, financial services and cybersecurity. She also launched “Data Science Pronto!”, a series of 1-3 minute video explanations of data science concepts.

