Grady Booch on Today’s Artificial Intelligence Reality and What it Means for Developers

Today on The InfoQ Podcast, Wes Reisz speaks with Grady Booch. Booch is well known as the co-creator of UML, as an original member of the design patterns movement, and for the work he is now doing around artificial intelligence. On the podcast today, the two discuss today’s reality for AI. Booch answers questions such as what AI means for the practice of writing software and how it is likely to affect delivering software. In addition, Booch talks about AI surges (and winters) over the years, the importance of ethics in software, and a host of other related questions.

Key Takeaways

  • There have been prior ages of AI that led to winters in which reality set in. It stands to reason that a version of an AI winter will follow today’s excitement around deep learning.
  • AIs are beginning to examine code to test edge cases in software, and to do things such as look over your shoulder and identify patterns in the code that you write.
  • AIs will remove much of the tedium for software developers; however, software development is (and will remain) a labour-intensive activity for decades to come. AI is another bag of tools in a larger systems activity.
  • Many AI developers are young white men from the United States, a fact that carries a number of inherent biases. Several organizations are focused on combating some of these biases and bringing ethics into the field. This is important for us to be aware of and to encourage.
  • The traditional systems engineering techniques we know from building non-AI systems will still apply. AIs are pieces of larger systems. The AI might be a really interesting part, but it is just one part of a larger system that requires a lot of non-AI engineering.
  • Early machine learning systems were mostly learn-and-forget systems: you taught them, you deployed them, and you walked away. Today we do continuous learning, and we need to integrate new models into the delivery pipeline.

Show Notes

Why did you call your keynote presentation “Building The Enchanted Land”?

  • 02:20 Allen Newell was one of the founders of AI and together with Herbert Simon pioneered the symbolic approaches to AI.
  • 02:25 In the 1950s Allen wrote a fantastic article called Turing Land, in which he projected the next several decades, predicting the rise and pervasiveness of computing throughout the land.
  • 02:45 It had this delightful phrase, “With the rise of computation, we have the ability to produce an enchanted land”, which is where I got it from.
  • 03:00 To update the phrase: software is remarkable - it’s the hidden language that speaks the stories of possibility to our hardware, and software developers are the storytellers.
  • 03:15 We are the ones who are building that enchanted land.

You don’t subscribe to the mindset that AI is going to replace software developers any time soon?

  • 03:40 Not any time soon - but you’ve shifted the topic slightly: there’s the fear of AGI in general, and then there’s what that might mean for the software developer.
  • 04:00 They’re related but subtly different topics.
  • 04:05 At one end, there is a fear of AGI from the public, because it’s been part of our media and mythology - Terminator, HAL, Alexa - so this notion of a pervasive AI has been with us, and there is fear.
  • 04:30 I did a TED talk on it [https://www.ted.com/talks/grady_booch_don_t_fear_superintelligence] because Elon Musk and others were fear-mongering, and I think that’s wrong.
  • 04:45 What does AI mean for the software developer? It’s a fascinating and evolving topic.
  • 04:50 Firstly, from a systems and software developer’s perspective, what does it mean to build a system with AI components?
  • 05:05 Secondly, what does AI mean for me - does it change the way I build systems; might I see AIs that help me debug and test?
  • 05:15 We’re beginning to see AI play a role in the software development process itself, and that’s exciting.

How large is the gap between AI and AGI?

  • 05:50 There’s an interesting spectrum to consider - in fact, the problem with AI in general is that we keep moving the goalposts.
  • 06:00 What we called leading AI in the 1950s and 1960s we’d now call interesting analytics: the bar keeps changing.
  • 06:15 I suspect that a decade from now, people will look back at the wonderful things happening in deep learning today and say it’s probably just some interesting gradient descent thing.
  • 06:35 I have a litmus test for AI: it represents a system that learns and reasons.
  • 06:40 Hinton pointed out in a recent lecture that the symbolic AI community of the 1960s-70s viewed reasoning as the definition of what AI does, which is reasonable.
  • 07:00 The ability to play chess, go, to manipulate blocks, to converse in the world - those are all elements that require reasoning.
  • 07:10 The deep learning community emphasises the notion of learning; that an AI system is one that learns over time, with or without supervision.
  • 07:25 If a system does not learn or does not reason then it is something else.
  • 07:40 Moving on to AGI: I had the opportunity to do a video recording with Anthony Daniels (the actor behind C-3PO).
  • 07:50 He asked me if AGI was going to happen in his lifetime.
  • 07:55 I responded no: it’s going to be a multi-generational thing.
  • 08:05 I believe that AGI is possible - I’m a pragmatist and a materialist, and the mind is computable, so it is inevitable.
  • 08:15 But we’re a long way away from it, because AGI requires degrees of common sense, inference, and deductive, inductive and abductive reasoning - the ability to build models of the world, of self, and of others, and to reason about them without interacting with the world itself.
  • 08:35 We’re a long way from doing any kind of general model building or common sense reasoning.
  • 08:45 It’s inevitable, and will happen some day, but not in my lifetime and not in my children’s lifetime.

So you think that AI will assist humans in the loop as opposed to replacing them?

  • 09:00 I think AI in general should be something that helps augment the human body and mind - that’s the role that it should play.
  • 09:15 A lot of what’s happening in contemporary AI is what I call sensory AI: using deep learning to look at streams of data, images, videos, voice, and detect patterns in them.
  • 09:25 A lot of it has to do with pattern classification.
  • 09:30 If you think about it, the symbolic approaches to computation we’ve been doing for decades and the deep learning approaches based on neuron-inspired models of the mind are computationally (Turing) equivalent.
  • 09:45 That means I can do processing in either one; it just requires different amounts of space or time.
  • 09:50 What’s great about deep learning and other approaches inspired by how evolution built intelligence is that they give us some really convenient mathematical tools for this kind of reasoning and learning.
  • 10:05 Its best successes have been in the area of pattern matching: finding cats and dogs, or melanoma in images.

What were the different stages of evolution of AI over the past fifty years?

  • 10:55 Andrew Ng has done a delightful summary of this, so I’ll summarise his summary.
  • 11:00 The first season of AI was the 1950s and 1960s, which was the height of the cold war, so a lot of things fuelled the AI research.
  • 11:20 A lot of what happens in modern computing was born from warfare: we see computing as woven on the loom of sorrow.
  • 11:30 That’s definitely what we see in the early days of AI: the fears of the cold war funded some of the early AI research.
  • 11:35 What we realised was that language is really messy and difficult, and one couldn’t easily codify it.
  • 11:50 It wasn’t until the 1960s and 1970s that we saw a renaissance, when the idea of representing knowledge became important.
  • 11:55 This was where Ed Feigenbaum and others began to play a role.
  • 12:00 Campbell Soup had one man who was responsible for the final taste testing - and they knew he was going to retire.
  • 12:10 How do you capture his knowledge?
  • 12:15 So from a series of interviews they were able to develop some rules for the taste tests.
  • 12:25 The problem with rules is that they don’t scale well, so, much like in the earliest days of AI, the funding collapsed.
  • 12:35 By that time, Silicon Valley was up and running and had spawned companies like Symbolics, Thinking Machines and the like, and they went down in that implosion as well.
  • 12:45 It wasn’t until recently that you saw the perfect storm of tremendous amounts of data (from Facebook, Google, Amazon and the like) together with the rise of GPUs giving tremendous computational power.
  • 13:00 Add the gradient descent algorithms of the 1960s-80s, and it is the perfect storm of all of these coming together that has enabled the breakthroughs in deep learning (a minimal gradient descent sketch follows this list).
  • 13:10 Will there be a winter? Well, I’m wearing a sweater now - folks like Gary Marcus have observed that there are limits with the current models of deep learning.
  • 13:30 It’s going to be different from previous winters, because AI has proven itself to be economically interesting in so many places, but I don’t think it’s going to be as big as what many had first hoped.
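
For readers who haven’t met the term, gradient descent is simply the iterative adjustment of parameters in whatever direction reduces a loss function. A minimal, single-variable sketch (purely illustrative, not from the podcast; deep learning applies the same idea to millions of parameters at once):

```python
# Minimise f(w) = (w - 3)^2 by repeatedly stepping against its derivative.
def gradient_descent(lr: float = 0.1, steps: int = 100) -> float:
    w = 0.0
    for _ in range(steps):
        grad = 2 * (w - 3)   # derivative of (w - 3)^2
        w -= lr * grad       # step downhill
    return w

print(gradient_descent())    # converges toward the minimum at w = 3.0
```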

What are some of the limits of deep learning?

  • 13:50 There’s a great commercial from Saturday Night Live a few years ago where they were advertising a delicious dessert topping and floor wax. 
  • 14:00 The problem with deep learning is that it’s not a dessert topping and a floor wax - there are some things it’s well suited for (pattern matching) and some things that it isn’t (model building).
  • 14:15 There is an existence proof that deep learning can lead to incredible amounts of learning and reasoning - you and I are talking, reasoning in symbols, although that is built upon neurons.
  • 14:35 We know it’s possible, but will we be able to build billions upon billions of neurons to build those kind of systems in the near future?
  • 14:45 We’re a long way from it, because computationally it’s very hard to do so.
  • 14:55 Most of the deep learning we will see in the coming years will mostly be predicated on pattern matching, and we probably need a breakthrough to build models of the world.
  • 15:10 When I think of models of the world, my brain is making models of people, boats, waves - and I can think about those.
  • 15:20 It’s easy to build an AI using neural techniques that is ELIZA-like - it’s reactive.
  • 15:30 We haven’t quite figured out how to build those models in neurons yet.
  • 15:35 This goes back to Rodney Brooks’ idea of “Intelligence without representation” – we see things like slugs and lower animals that can move in the world, but they don’t build those models of the world.
  • 15:45 We haven’t got there with deep learning at the moment: I think we’re a long way from it.

How do you think the rise in levels of abstraction is affecting the art of software today?

  • 16:05 You raised patterns, which I think is important - as you said, I was involved at the beginning of the patterns movement.
  • 16:15 The classic book is “Design Patterns” by Gamma, Helm, Johnson and Vlissides. [https://en.wikipedia.org/wiki/Design_Patterns]
  • 16:20 It makes the wonderful observation that the code is the truth, but it isn’t the whole truth.
  • 16:30 There are some things that transcend individual lines of code or classes; for example, the MVC pattern which pervades most UI systems.
  • 16:40 Even then, there was a recognition that there were common patterns that transcend code.
  • 16:45 If you think about it, deep learning is a lot about patterns - so could we use deep learning techniques to find those kinds of patterns?
  • 17:00 It was probably Microsoft that pioneered this some years ago when they were doing automated device driver testing for Windows.
  • 17:05 They recognised that there were some patterns of implementation and use for which they could automate tests.
  • 17:15 It wasn’t AI, but it pointed to the direction of things that could be done.
  • 17:20 Imagine now, from a developer’s perspective, having an AI look over your shoulder that would say “It looks like you are trying to use this pattern?”
  • 17:35 I hate to use the phrase, but it’s like an AI Clippy [https://en.wikipedia.org/wiki/Clippy] on steroids.
  • 17:40 That’s where I’m seeing people experimenting these days - building AI assistants that are helping the developer.

What are some of the use cases that you’re seeing for AI for software development?

  • 17:55 The first is testing, where I’ve begun to see AIs looking at code and creating recommendations for the code itself.
  • 18:10 They can spot edge cases - such as potential stack overflows - and build tests against them (a small illustration follows this list).
  • 18:20 The other place is looking at the source and recommending simplification, or discovery of patterns.
  • 18:30 You might see AI-enabled refactoring.
  • 18:40 Martin Fowler has a second edition of his book Refactoring: you should rush out and buy it now.
  • 18:50 You can imagine an AI that has been taught those refactorings and could assist you.
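
The podcast doesn’t include code, but as a rough illustration of the tool-assisted edge-case discovery described above, a property-based testing library such as Hypothesis can already generate inputs that expose boundary conditions a hand-written test might miss; an AI test assistant would push this further. The `slice_window` function below is a hypothetical example under test:

```python
# A minimal sketch of automated edge-case discovery via property-based
# testing with Hypothesis (not AI, but in the same spirit).
from hypothesis import given, strategies as st

def slice_window(items: list, start: int, size: int) -> list:
    """Return up to `size` elements of `items` beginning at `start`."""
    if start < 0 or size < 0:
        raise ValueError("start and size must be non-negative")
    return items[start:start + size]

@given(
    items=st.lists(st.integers()),
    start=st.integers(min_value=0, max_value=100),
    size=st.integers(min_value=0, max_value=100),
)
def test_window_never_exceeds_requested_size(items, start, size):
    # Hypothesis searches for edge cases (empty lists, start beyond the
    # end, zero-sized windows) that example-based tests often miss.
    assert len(slice_window(items, start, size)) <= size
```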

Does it mean that software engineering is a dying discipline, to be replaced by AI?

  • 19:05 I think software engineering is a flourishing discipline and will continue to be one.
  • 19:10 What those AIs will do is remove some of the tedium that we’ve had to endure as humans, freeing our minds to go off and deal with more complex things.
  • 19:20 Software development is complex, and that’s not going to change.
  • 19:25 People have often said that we’ll have software that writes software itself: we already have that - they’re called compilers.
  • 19:30 We haven’t been able to build systems that do meaningful design - we’re a long way from that.
  • 19:40 Software development will remain a labour intensive activity for decades to come.
  • 19:50 Back in the COBOL days, people thought that you could have non-programmers write business rules.
  • 20:00 However, writing COBOL still requires you to express things in a precise way - which is programming.

What did you mean by machine learning and AI are going to be pieces of a larger software product?

  • 20:15 If you look at every AI system, from AlphaGo to Cortana to Alexa and others visible to the public eye, AI is a piece of a much larger system.
  • 20:35 Siri has AI - natural language recognition, understanding - those are distinctly AI kinds of things.
  • 20:50 There are also some Bayesian things going on - if I ask for something, it can figure out what my intent is, and through some interesting learning give me an answer.
  • 20:55 Google does this with Translate: with billions of translation requests taking place, it learns patterns from what it sees.
  • 21:15 But AI is only part of the system, which also has to scale globally, apply to multiple languages, and store the data behind it.
  • 21:25 What we’re seeing is that economically interesting systems that have some value often have AI pieces to them, but there’s a lot of engineering that must take place around them.
  • 21:40 While people may focus on the wonderful, great things, like using AI to discover melanoma, the AI is just the piece that says “hey, this image might be melanoma” inside a larger healthcare system.
  • 22:05 There will continue to be people who need to be skilled in AI - that’s great.
  • 22:10 Remember, though, that we used to need - and still do - people who are database experts and people who are devops experts.
  • 22:20 Think about it as being another skillset in the larger systems building activity.
  • 22:25 In the last several months, people have asked me what they should do to learn AI.
  • 22:40 My answer: you need to become familiar with it, but there are tens of thousands of people globally who are becoming familiar with it - look at China - so ask what you can do differently.
  • 22:55 It’s important that you know what your passion is and follow it; but remember if you have skills in building other systems, ask what would it take to build those with AI components.

How does the rise of AI affect ethics?

  • 23:25 When you have systems that impact individuals, organisations, societies, nations, then those questions do become important.
  • 23:35 Twenty years ago I started down this path of saying that ethics was important - an interviewer asked me why any software developer would care about ethics.
  • 23:55 The phrase I use: every line of code has a moral or ethical implication.
  • 24:05 If you’re a developer at a certain car company in Germany that makes bug-like cars, you may recall that they were rightfully cited for writing code that cheated on emissions tests.
  • 24:20 So you have a non-AI case where a developer had to write a line of code that said they were going to cheat on this emissions test.
  • 24:35 When you move to AI, it becomes a little scary - because you have systems that are trying to move towards human intelligence, that are reasoning and learning.
  • 24:50 So you have to ask how you can impose your ethics on those systems?
  • 24:55 One of the things that’s very clear - and it’s wonderful that this is now known in the community - is that there are clear biases that come from data in our AI systems.
  • 25:05 A lot of this comes from the way we select our models: like it or not, most of the AI developers in the world are young white men from the US, a fact that brings a number of implicit biases.
  • 25:20 A lot of the test data sets for facial recognition were mostly of white people, so the systems failed horribly when they were tried against people of colour.
  • 25:30 Once those things happen, and you start applying them to real world cases (like for arresting people, identifying them as potential criminals) then all of a sudden that code which looked benign at first has some very personal implications.
  • 25:45 For that reason, I’m delighted to see organisations such as the Partnership on AI and OpenAI - there are about half a dozen - that are focused on how to bring ethics into our AI learning systems.
  • 26:00 The good news is that it is now at least in the public psyche, and among AI developers it’s recognised as a real issue.
  • 26:10 How do we solve those problems? There are smarter people than me who are going to have to tackle that one.

What do you recommend for people that are building systems positioned for an AI future?

  • 26:30 The good news is that the systems engineering things that we know and love in building non-AI systems are going to apply for building AI systems as well.
  • 26:40 I can give you a great example: one of the things we know from test-driven development is that having a very explicit set of tests against which you can do continuous integration is really important.
  • 27:00 That’s exactly what we did in Watson and what the AlphaGo team did.
  • 27:05 If you look at what Ferrucci and crew did for IBM’s Jeopardy system, there was a test base with the questions and answers from every Jeopardy game - so the team was able to build a set of tools (even though it was an AI system) to run integration tests overnight and see how well it did (the idea is sketched in code below).
  • 27:30 You could track whether you were getting better or not - the AlphaGo folks had a similar kind of thing.
  • 27:40 The important thing is that traditional systems engineering techniques for testing and configuration management still apply.
  • 27:50 In a way, for the software developers out there listening: you can teach your AI colleagues at the same time as you are learning from them.
  • 27:55 It’s going to come together in a renaissance of what software engineering is - we’re beginning to see how AI impacts us, how we impact it - it’s a wonderful and vibrant time for the community.
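
The overnight-regression practice described above can be sketched roughly as follows. The model interface, file names, and JSON layout are assumptions made for illustration, not the Watson or AlphaGo teams’ actual tooling:

```python
# A rough sketch: score the current model against a fixed question/answer
# base and append the result to a log, so progress (or regression) is
# visible run over run. All file formats and names are assumptions.
import json
from datetime import date
from typing import Callable

def regression_score(model: Callable[[str], str], qa_path: str) -> float:
    """Fraction of held-out questions the model answers correctly."""
    with open(qa_path) as f:
        qa_pairs = json.load(f)  # e.g. [{"question": ..., "answer": ...}]
    correct = sum(
        1 for pair in qa_pairs
        if model(pair["question"]).strip().lower()
           == pair["answer"].strip().lower()
    )
    return correct / len(qa_pairs)

def nightly_run(model: Callable[[str], str],
                qa_path: str = "qa_test_base.json",
                log_path: str = "scores.jsonl") -> float:
    score = regression_score(model, qa_path)
    with open(log_path, "a") as log:
        # One record per run, so the trend over time is easy to plot.
        log.write(json.dumps({"date": date.today().isoformat(),
                              "accuracy": score}) + "\n")
    return score
```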

What is the implication for writing test cases when you’re talking about neural networks?

  • 28:45 There’s an overall process issue: early deep learning systems were mostly learn-and-forget systems.
  • 28:55 You’d teach them, then you’d deploy them, then you’d walk away.
  • 29:00 One of the things that becomes important is that you do continuous learning and continuous testing, so they change over time.
  • 29:05 This has some interesting devops implications, because it means not only are you continuing to monitor those systems, you’re also teaching them along the way.
  • 29:20 I don’t know how we integrate that in the lifecycle, but it’s one of the directions that is certainly going to take place.
  • 29:25 In that process, one discovers edge cases and use cases that you couldn’t have known a priori.
  • 29:35 For AI systems that matter, there’s going to have to be this kind of growth of the testing and devops communities working with AI.
  • 29:45 We’re now building, in a way, living, breathing, organic pieces of software that we must keep alive (a minimal sketch of such a promotion gate follows this list).
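
As a minimal sketch of that promotion gate (every name and the threshold below are illustrative assumptions, not a real pipeline): a continuously retrained candidate only replaces the serving model if it still clears a quality bar on a fixed evaluation set.

```python
# Promote a continuously retrained model only if it passes an evaluation
# gate; otherwise keep serving the current model. Purely illustrative.
from typing import Callable, TypeVar

Model = TypeVar("Model")

def promote_if_better(
    current: Model,
    candidate: Model,
    evaluate: Callable[[Model], float],  # e.g. accuracy on a fixed test base
    min_accuracy: float = 0.92,          # assumed promotion threshold
) -> Model:
    """Return the model that should serve production traffic."""
    score = evaluate(candidate)
    if score >= min_accuracy and score >= evaluate(current):
        return candidate  # candidate clears the gate: deploy it
    return current        # keep the old model; flag the candidate for review
```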

What about other techniques for AI development?

  • 30:10 In many ways, software development is simple - it’s all in the details.
  • 30:15 You want to focus on crisp abstractions, simplicity, good separation of concerns, and a balanced distribution of responsibilities.
  • 30:20 It’s basic systems practice - you want to build systems with lots of well-defined parts that are reasonably loosely coupled - and the same is true of AI systems.
  • 30:25 We’re beginning to see AI systems that are growing more and more complex, so we’re now starting to apply the ideas of cohesion, coupling and modularity to our AI systems.
  • 30:40 Where does AI fit within the container world? What’s the best way to map an AI system into a container?
  • 30:45 We don’t have enough experience here yet, but my guess is that configuration management and the way we devise systems today are going to be applied to the AI world as well.
  • 31:00 The other thing that will probably take place (but hasn’t yet) is that configuration management of datasets becomes important (a small sketch follows this list).
  • 31:10 Notice the direction I’m taking here: it’s more about applying the systems engineering kinds of things to AI as opposed to the other way around.
  • 31:20 We’ve been engineering systems for decades - longer than we’ve been building AI systems.
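
One simple way to start on configuration management of datasets (an assumed approach, not something discussed on the podcast) is to fingerprint the training data so every trained model can be traced to the exact dataset version it saw:

```python
# Fingerprint a dataset file and record it alongside the model version,
# so training runs are reproducible and auditable. The manifest layout
# is an assumption for illustration.
import hashlib
import json

def dataset_fingerprint(path: str, chunk_size: int = 1 << 20) -> str:
    """SHA-256 of the dataset file, computed in chunks to bound memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def record_training_run(dataset_path: str, model_version: str,
                        manifest_path: str = "training_manifest.jsonl") -> None:
    entry = {"model": model_version,
             "dataset_sha256": dataset_fingerprint(dataset_path)}
    with open(manifest_path, "a") as f:
        f.write(json.dumps(entry) + "\n")
```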

Any final thoughts?

  • 31:25 It’s an exciting time to be a developer - there’s so much change that’s going on right now, it’s a very vibrant time.
  • 31:35 I remember in the 1980s and 1990s when we were seeing the change from algorithmic languages to object-oriented languages.
  • 31:40 I’m beginning to see the same kind of excitement and vibrancy now because we have these new very powerful components with which we can build systems that matter.
  • 31:50 That’s wonderful: It’s an exciting time to be a developer.

More about our podcasts

You can keep up-to-date with the podcasts via our RSS Feed, and they are available via SoundCloud, Apple Podcasts, Spotify, Overcast and YouTube. From this page you also have access to our recorded show notes. They all have clickable links that will take you directly to that part of the audio.
