Grady Booch on the Future of AI

According to Grady Booch, most current AI systems are about pattern matching of signals at the edge and inductive reasoning, not true artificial intelligence. During his day-two keynote at QCon San Francisco 2018, "Building the Enchanted Land", he explained his view that AI today is a "system engineering problem with AI components."

True AI involves decision making and abductive reasoning, which allow a system to reason and learn. Current artificial intelligence applications are far from being able to accomplish that; they are just components in larger systems.

Contemporary AI is not of recent origin, either: many of the architectures and algorithms are decades old. The difference today is the abundance of computational power and the existence of large bodies of tagged data. In fact, working with the data to understand it and get it into the proper format will most often take more work than building the model.

Contemporary AI

Pattern matching is about teaching a system, with lots of evidence, what to search for. Today, that mostly means signals such as images, video, and audio. These signals tend to be at the edge of a system, rather than at the center. The actual matching is done with inductive reasoning. Inductive reasoning is not decision making, and it is also not abductive reasoning, where you build a theory from looking at the data.
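
To make the distinction concrete, here is a minimal Python sketch of inductive pattern matching: the decision rule is generalized from labeled examples rather than deduced from first principles. The toy signal data and threshold classifier are invented for illustration, not taken from the talk.

from statistics import mean

# Labeled training "signals": amplitude readings tagged as noise (0) or speech (1).
training = [(0.1, 0), (0.2, 0), (0.15, 0), (0.8, 1), (0.9, 1), (0.85, 1)]

# Inductive step: generalize a decision boundary from the labeled examples.
noise_mean = mean(x for x, label in training if label == 0)
speech_mean = mean(x for x, label in training if label == 1)
threshold = (noise_mean + speech_mean) / 2

def classify(signal):
    # Pattern matching: label a new signal using the rule induced from the data.
    return 1 if signal >= threshold else 0

print(classify(0.7))   # -> 1, matches the "speech" pattern
print(classify(0.2))   # -> 0, matches the "noise" pattern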

Contemporary AI is also not all that contemporary; the existing algorithms have been around for decades. For example, the first artificial neuron dates from 1956, so the idea of neural-level computation has been around for a long time. Now, however, there are large amounts of tagged data, as well as an abundance of available computational power. Essentially, old algorithms have become practical, but those algorithms were not about reasoning and learning. Reasoning means that there is some human level of thinking where induction, deduction, and abduction are mixed together. Learning over time is necessary as well. Without all these elements, it is not really artificial intelligence.

A Bit of AI History

Artificial intelligence has had a very long history of algorithms, developments, setbacks, and advances. Machine learning is only one aspect of artificial intelligence, and deep learning is only a narrow part of machine learning. The current emphasis on deep learning obscures that fact.

Within machine learning, there is supervised learning, unsupervised learning, probabilistic learning, and non-probabilistic learning.
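
The distinction can be made concrete with two common scikit-learn estimators (the library choice is mine, not Booch's): supervised learning fits a mapping to labeled data, while unsupervised learning finds structure with no labels at all.

from sklearn.cluster import KMeans                   # unsupervised
from sklearn.linear_model import LogisticRegression  # supervised, probabilistic

X = [[0.0], [0.1], [0.9], [1.0]]
y = [0, 0, 1, 1]  # labels exist only in the supervised case

# Supervised: learn a mapping from inputs to known labels.
clf = LogisticRegression().fit(X, y)
print(clf.predict_proba([[0.8]]))  # probabilistic output: [P(class 0), P(class 1)]

# Unsupervised: group the same inputs with no labels at all.
km = KMeans(n_clusters=2, n_init=10).fit(X)
print(km.labels_)  # cluster assignments discovered from structure alone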

To put things in perspective, there have been many springs and winters in the development of artificial intelligence. The first winter came in the 1950s, during the height of the Cold War, when there was a great deal of interest in machine translation of Russian into other languages. According to an often-quoted story, researchers fed in sentences such as "The spirit is willing, but the flesh is weak"; translated into Russian and back, the result was "The vodka is strong, but the meat is rotten." Language turned out to be a lot harder than people first thought.

The next spring arose with the Logic Theorist of Newell and Simon and with Terry Winograd's work on manipulating blocks in a small world, which led to some progress. That was also the time when Marvin Minsky claimed there would be human-level intelligence within three years; no one makes those kinds of claims any more. Computational power and expressiveness were the limits of this approach.

Next came rule-based systems, first developed by Ed Feigenbaum and others; the MYCIN medical diagnosis system came out of this approach. Campbell Soup used these techniques to capture its secret recipes so that it was not dependent on human memory. The problem was that rule-based systems did not scale beyond a few hundred rules. Symbolics and other companies tried to build hardware around these systems. When the limits of these systems became apparent, DARPA stopped funding, and there was another AI winter.
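
A toy forward-chaining engine in the spirit of those MYCIN-era systems (an illustrative sketch, not MYCIN's actual rule language, which carried certainty factors) shows the mechanics, and hints at why interactions become unmanageable as the rule count grows into the hundreds.

# Each rule: if all antecedent facts hold, assert the consequent fact.
# The rules here are invented for illustration.
rules = [
    ({"fever", "cough"}, "flu_suspected"),
    ({"flu_suspected", "high_risk_patient"}, "recommend_test"),
]

def forward_chain(facts):
    # Keep firing rules until no new fact can be derived.
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for antecedents, consequent in rules:
            if antecedents <= derived and consequent not in derived:
                derived.add(consequent)
                changed = True
    return derived

print(forward_chain({"fever", "cough", "high_risk_patient"}))
# -> includes "flu_suspected" and "recommend_test"
# Every new fact is re-matched against every rule, which is one reason
# these systems bogged down after a few hundred interacting rules.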

AI Today - Deep Learning

So where are we now?

Advances in artificial intelligence are being built into systems as components, so they are not going to disappear as they did in the past. Nonetheless, one should keep in mind the views of skeptics and pragmatists such as Gary Marcus. Deep learning, however great its contribution, is not a universal solution - it operates primarily at the level of signals, and it is just a component of a larger system.

This, according to Booch, is where the current generation of developers comes into play. What is AI for the hardcore developer? The developer can say: "Wow, I have some really cool new toys that I can put into my systems to make them even better." It becomes a systems problem.

Just as the developer world has its own tools and environments, the AI world is developing its own. It is not yet clear, however, which tools and environments will predominate in the marketplace as they become commercialized.

Life Cycle

Building AI systems is radically different from building traditional software systems. The primary reason for this is the abundance of data. You are going to spend a lot more time on data curation than on defining, building, and training the model.

You have to ascertain whether your data is biased, and whether you have selected meaningful and socially appropriate data. This is both a technical and an ethical issue. Figuring this out requires an entirely different skill set from that of the traditional software developer. Data scientists have to be part of the system lifecycle from the start. Identifying the data and the use cases needs to begin long before the solution is defined.
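
A minimal Python sketch of one kind of curation check this implies, flagging a label imbalance across a sensitive attribute, is below; the records, the groups, and the threshold are all invented for illustration.

# Hypothetical training records: (group, label); nothing here is real data.
records = [("A", 1), ("A", 1), ("A", 0), ("B", 0), ("B", 0), ("B", 0)]

def positive_rate(group):
    labels = [label for g, label in records if g == group]
    return sum(labels) / len(labels)

rates = {g: positive_rate(g) for g in {g for g, _ in records}}
print(rates)  # -> roughly {'A': 0.67, 'B': 0.0}

# A crude curation gate: refuse to train if positive rates diverge too far.
# The 0.2 threshold is arbitrary; a real review would go far deeper.
if max(rates.values()) - min(rates.values()) > 0.2:
    print("WARNING: labels are skewed across groups; review sampling before training")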

Hardware Infrastructure

Another challenge for developers is the introduction of hardware to support the software being developed. For example, Google has its own TPU to support TensorFlow. Neuromorphic computing is also being explored: IBM has a project called TrueNorth to build chips that mimic the way neurons work in the brain.

Nonetheless, there are significantly different form factors between a brain and a computer. The former has about 100 billion neurons and uses 20 watts of power; the latter has a few billion transistors and uses several hundred watts. The computer can do some things faster than the mind ever could, since the brain runs at about 20 hertz. The neurons on chips tend to be binary in nature, with weights attached, producing a probability output between zero and one. The neurons in the brain spike, which means there is timing information associated with them.
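
The contrast can be sketched in a few lines of Python: a conventional artificial neuron maps weighted inputs to a value between zero and one in a single step, while a (greatly simplified) spiking neuron integrates input over time and emits discrete spikes. Both models below are textbook simplifications, not IBM's designs.

import math

def sigmoid_neuron(inputs, weights, bias):
    # Conventional artificial neuron: one weighted sum, output in (0, 1).
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-z))

def spiking_neuron(currents, threshold=1.0, leak=0.9):
    # Leaky integrate-and-fire sketch: potential builds over time and the
    # neuron emits a spike (1) whenever it crosses the threshold.
    potential, spikes = 0.0, []
    for current in currents:   # time matters: inputs arrive as a sequence
        potential = potential * leak + current
        if potential >= threshold:
            spikes.append(1)
            potential = 0.0    # reset after firing
        else:
            spikes.append(0)
    return spikes

print(sigmoid_neuron([0.5, 0.3], [0.8, -0.2], 0.1))  # -> about 0.61
print(spiking_neuron([0.4, 0.4, 0.4, 0.4]))          # -> [0, 0, 1, 0]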

Architectural Implications

There is a tradeoff between inferencing and learning. IBM tried to train a robotic arm, and the training required quite a few hours of cloud computation time; the resulting neural network, however, fit on a Raspberry Pi and worked in real time. Perhaps not everything has to be done in the cloud, and some of it can be done on the edge, but this is an architectural tradeoff.
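
A schematic Python sketch of that split, with an invented toy model rather than a real robotic arm: training is the expensive phase and produces nothing but numbers, while inference over the frozen weights is a single multiply-add.

def train(samples, lr=0.1, epochs=500):
    # The expensive phase: gradient descent on squared error, the kind of
    # work one would push to the cloud at realistic scale.
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in samples:
            err = (w * x + b) - y
            w -= lr * err * x
            b -= lr * err
    return w, b

def infer(w, b, x):
    # The cheap phase: a single multiply-add over frozen weights,
    # easily within reach of a Raspberry Pi-class device.
    return w * x + b

w, b = train([(0, 1), (1, 3), (2, 5)])  # learns roughly y = 2x + 1
print(infer(w, b, 3))                   # -> approximately 7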

Another architectural factor is that artificial intelligence does not require the same degree of precision as other domains. Hence, the computation on the edge can be different from that in other kinds of systems.
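
In practice, reduced precision often means quantization, for example storing weights as 8-bit integers instead of 32-bit floats. A minimal Python sketch of the idea (uniform quantization with an assumed weight range of [-1, 1]; real toolchains are more sophisticated):

weights = [0.82, -0.41, 0.05, -0.99]  # 32-bit floats during training

# Map floats assumed to lie in [-1, 1] onto 8-bit integers in [-127, 127].
scale = 127.0
quantized = [round(w * scale) for w in weights]  # what the edge device stores
dequantized = [q / scale for q in quantized]     # what inference actually sees

print(quantized)    # -> [104, -52, 6, -126]
print(dequantized)  # small rounding error, a quarter of the memory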

As John Gall argues in his book Systemantics, every successful large system grew from a smaller system that already worked. Often the components we use were built by others, and what we build will be used by others. Understanding this is the common denominator among the different philosophies of Agile, DevOps, and Lean - the continuous release of executable architectures. This is very different from the artificial intelligence worldview.

Booch, as an architect, looked at the architecture of two AI systems: Watson and AlphaGo.

However impressive Watson was on Jeopardy, it was just a pipe-and-filter architecture that had AI components. You get a statement, retrieve a number of potential search results, and form hypotheses, which expand out to hundreds or thousands of possibilities. Then you start to gather evidence for these possibilities. Effectively this is forward chaining, with a bit of backward chaining looking for evidence that supports the hypotheses. In the end you reduce it to the top three choices. The artificial intelligence happens inside the components; natural language algorithms are used against those choices to pick the response. It is the pipeline architecture that puts the AI components together. In fact, the pipeline is open source: UIMA. Doing all this in a few milliseconds also required a great deal of hardware.
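
The pipe-and-filter shape is easy to see in miniature. In the Python sketch below, each stage is a stand-in for one of the steps described (hypothesis generation, evidence scoring, ranking); the stage logic is invented, and only the shape, with the output of one stage feeding the next, reflects the architecture.

def generate_hypotheses(question):
    # Stage 1: fan a question out into many candidate answers.
    return ["candidate-%d" % i for i in range(5)]

def score_evidence(candidates):
    # Stage 2: attach an evidence score to each candidate
    # (a real system would run NLP models here).
    return [(c, 1.0 / (i + 1)) for i, c in enumerate(candidates)]

def top_k(scored, k=3):
    # Stage 3: keep only the best-supported hypotheses.
    return sorted(scored, key=lambda pair: pair[1], reverse=True)[:k]

def pipeline(data, stages):
    # Pipe and filter: the output of each stage is the input of the next.
    for stage in stages:
        data = stage(data)
    return data

print(pipeline("What is AI?", [generate_hypotheses, score_evidence, top_k]))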

AlphaGo at its core is a convolutional neural network. Outside of that network it is holonomic: the decision of what to do next is based on what is perceived immediately, without regard to previous history. Many autonomous cars use this approach. This class of architectures is reactive; you give it some state, and it makes inferences based on that state alone. There are, however, things you cannot do without history. For example, a human driver who sees children playing with a ball assumes they might jump into the street. Most autonomous vehicles cannot make that judgement.
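
The difference is essentially stateless versus stateful control. In the Python sketch below, a reactive policy maps the current perception directly to an action, while a second policy keeps a short history and can anticipate, as in the children-and-ball example. The perceptions and rules are invented for illustration and have nothing to do with any real vehicle stack.

def reactive_policy(perception):
    # Decides from the current frame alone; no history.
    return "brake" if perception["obstacle_in_road"] else "drive"

def stateful_policy(perception, history):
    # Keeps past frames: a ball near the road predicts a child following it.
    history.append(perception)
    saw_ball_recently = any(p.get("ball_near_road") for p in history[-10:])
    if perception["obstacle_in_road"] or saw_ball_recently:
        return "slow_down"
    return "drive"

frame = {"obstacle_in_road": False, "ball_near_road": True}
print(reactive_policy(frame))              # -> "drive": nothing in the road yet
print(stateful_policy(frame, history=[]))  # -> "slow_down": history-informed caution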

IBM has been exploring hybrid architectures, such as a system called Self, that combine these approaches. Gradient descent has been around for decades; agent-based systems date back to Marvin Minsky's Society of Mind, and blackboard systems to CMU's Hearsay experiments. IBM has been exploring how massive agent-based systems can use blackboards to communicate opportunistically with AI components. They have been able to build social avatars, social robots, and social spaces with this approach.
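
A blackboard is a shared store that agents watch and write to opportunistically, each contributing when it recognizes something it can handle. The Python sketch below is in that Hearsay and Society-of-Mind spirit; the agents and facts are invented, and this is not IBM's Self.

# The blackboard: shared state that every agent can read and extend.
blackboard = {"utterance": "turn on the lights", "intent": None, "action": None}

def intent_agent(bb):
    # Contributes when it sees raw input it knows how to interpret.
    if bb["utterance"] and bb["intent"] is None:
        bb["intent"] = "lights_on"
        return True
    return False

def action_agent(bb):
    # Contributes only once another agent has posted an intent.
    if bb["intent"] and bb["action"] is None:
        bb["action"] = "relay_1_close"
        return True
    return False

agents = [action_agent, intent_agent]  # order is irrelevant: control is opportunistic

progress = True
while progress:  # cycle until no agent can add anything new
    progress = any(agent(blackboard) for agent in agents)

print(blackboard)  # -> utterance, intent, and action all filled in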

Ethical Issues

With the enormous opportunities of artificial intelligence come ethical implications. All software systems are a resolution of forces, and those forces depend on your domain. With real-time systems such as aircraft, if you miss a cycle you have a disaster, which is very different from a web-based system. You have to ask what forces shape what you build.

Some of these are purely business forces such as cost, schedule or mission. For others, it is development culture or tools. Sometimes it is performance, reliability, usability, or any of the other "ilities".

Increasingly, ethical and legal issues appear with AI systems. Booch often says that "every line of code represents an ethical and moral decision". Even the decision to participate in building a particular system, or to design or write any code at all, can have large implications.

Conclusions

Developing AI systems requires the same skills as classical software development: crisp abstractions, separation of concerns, balanced distribution of responsibilities, and an attempt to keep things simple. You want to grow your system through incremental and iterative releases.

Of course there are unresolved problems. One of them is how to bring together symbolic, connectionist, and quantum models of computation. Booch strongly disagrees with the view of the DeepMind community that AI and neural networks are going to be at the center of systems.

He called attention to Moravec's paradox: far more neurons in the human brain are devoted to signal processing, in the visual and auditory cortices, than to decision making. The same kind of split appears in today's hybrid systems: artificial intelligence for the edge, symbolic systems for decision processing, and conventional software around them.

As Allen Newell observed during one of the AI winters, "Computer technology offers the possibility of incorporating intelligent behavior in all the nooks and crannies of the world. With it, we can build an enchanted land." To Booch, software is the invisible language that whispers stories of possibility to the hardware. Today, software developers are the storytellers, the ones who are going to build the enchanted land.
