
Q&A on the Book Rebooting AI

Key Takeaways

  • Any media account of an AI system that is either breathlessly excited or terrified is likely to be unrealistic, ill-informed hype, and should be read skeptically.
  • Deep learning, which is currently the leading approach to AI, is often very powerful at carrying out narrow tasks when there is an enormous amount of relevant data available, but will not lead to human-level AI.
  • There are real dangers associated with the misuse of AI, deliberate or unwitting; but the greatest danger is that its enormous potential for good will remain unrealized.
  • Human-level AI will need to have a rich understanding of the situation and the tasks at hand and have a body of common sense knowledge.
  • AI can only be made safe and trustworthy through a combination of good engineering, common sense knowledge, human values, and regulation.

The book Rebooting AI explains why an approach different from deep learning is needed to unlock the potential of AI. Authors Gary Marcus and Ernest Davis propose that AI programs will have to have a large body of knowledge about the world in general, represented symbolically. Some of the basic elements of that knowledge should be built in.

InfoQ readers can read excerpts of Rebooting AI to get an impression of the book.

InfoQ interviewed Marcus and Davis about the state of the practice of AI and main concerns, the limitations of deep learning and their suggestion for bringing "common sense" to machine learning, what's needed to make AI safe and trustworthy, and what they expect AI can bring us in the near future and what will take a longer time.

InfoQ: What made you decide to write this book?

Gary Marcus and Ernest Davis: There are two things we hope to accomplish with this book. First, we want to give readers a clear idea of the state of artificial intelligence and where it is going. In the present, what are its real accomplishments, what are showy but superficial stunts, and what is hype? In the future, what is the real promise, what are the real dangers, and what is sci-fi fantasy? Second, we want to argue that the current direction in machine learning, particularly "deep learning" from large data sets, is inherently limited in what it will be able to achieve, and that achieving anything like human-level intelligence will require a quite different approach, focusing on understanding the world in a deep sense.

InfoQ: For whom is it intended?

Marcus and Davis: For several different audiences. For the general reader, and for journalists and writers, we want to give a clear idea of what AI is and where it is headed. For decision-makers in government and industry, we want to give guidance so that we all can reap the benefits of AI and avoid its hazards. For AI researchers, we want to argue for a sea-change in the direction of research.

InfoQ: What's the state of the practice of AI? What's currently possible with AI?

Marcus and Davis: AI has achieved some successes that are impressive and important, such as speech transcription, machine translation and photo tagging, and some that are astonishing but basically frivolous, such as the programs with superhuman abilities at chess, Go, Jeopardy!, and other games.  AI is also, increasingly, a basic tool in all kinds of fairly humdrum data analysis in government, industry, and science.  What it can't do are tasks that require a real understanding of the situation and knowledge of the world. It can't read a book or watch a movie and understand what is going on.

InfoQ: What are the main concerns that you have regarding AI?

Marcus and Davis: There are many legitimate concerns about AI. People with bad intentions - criminals, terrorists, militaries carrying out war, authoritarian governments carrying out surveillance - will undoubtedly misuse it, as they do every powerful technology. People, both in the general public and in positions of authority, are apt to trust it too much. Unless it is audited very carefully, AI can perpetuate existing social biases, as we've seen in many scandals over the last decade, such as the Amazon job recruitment program that was unshakably biased against women applicants.

But our largest concern is that the great potential of AI to benefit mankind will end up unrealized: first, because people will be frightened by the dangers and, after a certain point, discouraged by the limitations and failures of existing AI; and, second, because AI research, fixated on the short-term successes of machine learning, will fail to explore other approaches that are slower to pay off but far more beneficial in the long term.

InfoQ: What are the limitations of deep learning?

Marcus and Davis: Deep learning is often very effective at a task if there exists, or you can create, an immense amount of training data for it, and if the examples that will arise in the future are fundamentally similar to those in the training data. It works poorly when there is little data you can use in training or when circumstances change. And the kinds of changes that confuse deep learning systems are, from a human perspective, surprising: a self-driving car trained in one city may do badly in another; a reading program trained on black text on a white background may fail on white text on black. Deep learning systems are also very susceptible to so-called "adversarial examples": a small change to a text or a photo that, from a human standpoint, seems trivial or even undetectable can completely confuse a deep learning system.
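Why tiny perturbations can flip a prediction is easiest to see with a linear model. The following toy numpy sketch (our own illustration, not an example from the book) uses the fast-gradient-sign idea: nudging every "pixel" by a small epsilon in the direction that lowers the score shifts the total score by epsilon times the number of pixels, enough to flip the classification even though no single pixel changed noticeably.

```python
import numpy as np

# Toy linear classifier over a 1000-"pixel" input: label = sign(w . x).
dim = 1000
rng = np.random.default_rng(0)
w = rng.choice([-1.0, 1.0], size=dim)        # fixed classifier weights

# A clean input that the classifier labels +1.
x = 0.1 * w + rng.normal(0, 0.01, size=dim)
print(np.sign(w @ x))                        # 1.0

# Fast-gradient-sign-style perturbation: move each pixel a tiny amount
# in the direction that decreases the score (for a linear model the
# gradient of the score is just w).
epsilon = 0.25
x_adv = x - epsilon * np.sign(w)

# Per pixel, the change is bounded by epsilon, visually negligible,
# yet the thousand tiny changes add up and the prediction flips.
print(np.sign(w @ x_adv))                    # -1.0
```

Deep networks are not linear, but they behave locally enough like this for the same attack to work on real image classifiers.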

InfoQ: You mentioned in the book that machine learning seems to be ignoring evidence from fields like developmental psychology and developmental neuroscience. What might be the reasons for this?

Marcus and Davis: It's partly cultural - the people developing machine learning technology are mostly trained in computer science and math and have at most a limited knowledge of the cognitive sciences. It's partly intellectual arrogance. More fundamentally, though, the structure of machine learning technology makes it very difficult to incorporate the kinds of insights that the cognitive sciences provide.

InfoQ: What's your suggestion for bringing "common sense" to artificial intelligence?

Marcus and Davis: AI programs of all kinds need to have basic common sense. A robot waiter serving drinks at a party should know, without being told, not to give a guest a broken wine glass. An AI program that reads the sentence "Paul emailed George and he answered immediately" should realize that it was George who answered, not Paul. To do this reliably, AI programs need to have the basic knowledge about drinks, emailing, answering and all the other aspects of everyday life that we all take for granted.

Getting that kind of knowledge into AI programs has turned out to be very difficult. We argue that a solution will require a number of components. First, AI programs will have to start with some basic knowledge built in: certainly the fundamental properties of time, space, and causality; probably also some knowledge about physical objects and their interactions, and about people and their interactions. Second, AI programs will need the ability to deal explicitly with concepts and to reason about the world in terms of the relations between concepts. Third, AI programs need to be structured so that they have a body of knowledge about the world in general that they can draw on when executing different tasks, rather than learning each individual task in isolation. Finally, there will have to be a learning mechanism that builds up common sense knowledge incrementally from observing the world and interacting with it.
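A loose illustration of the second and third components (explicit concepts, and a reusable body of knowledge shared across tasks) is the following toy Python sketch. It is our own hypothetical construction, not the authors' proposal: common sense facts are stored as symbolic relations, and any task, such as the robot waiter above, can query them rather than relearning them.

```python
# A shared, symbolic common-sense knowledge base: each fact is an explicit
# (relation, object) pair that any task can query.
facts = {
    ("wine_glass", "glass1"),
    ("wine_glass", "glass2"),
    ("broken", "glass1"),
}

def suitable_for_serving(obj):
    """Common-sense rule, usable by any serving task:
    a wine glass is suitable unless it is broken."""
    return ("wine_glass", obj) in facts and ("broken", obj) not in facts

print(suitable_for_serving("glass1"))  # False: broken, so don't serve it
print(suitable_for_serving("glass2"))  # True
```

The point is not this trivial lookup, but the architecture: the knowledge lives outside any single task and is stated in terms of concepts and relations that a reasoner can inspect and combine.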

InfoQ: What's needed to make AI safe and trustworthy?

Marcus and Davis: Good engineering practices: the same kind of attention to safety and reliability that is required to make sure that bridges don’t fall down and toasters don’t catch fire. A broad common sense understanding of the world: a robot needs to know what will be safe and what will be risky, both in acting and in failing to act. An understanding of human values: an AI needs to understand that it should not make money for its owner by cybertheft or selling drugs. Appropriate regulation: governments need to ensure that companies building or using AI follow ethical norms.

InfoQ: What do you expect AI to bring us in the near future? What will take a longer time?

Marcus and Davis: In the near future: self-driving cars are perhaps a decade or so away. Chatbots like Siri and Alexa will gradually improve and gain more functionality. Robots will be increasingly common. In the long term: the sky's the limit. AI programs that can read and understand the entire web, the way that web search programs can now search it. Household robots that can assist the aged and disabled, or the busy homemaker.

About the Authors

Gary Marcus is a scientist, best-selling author, and entrepreneur. He is founder and CEO of Robust.AI, and was founder and CEO of Geometric Intelligence, a machine learning company acquired by Uber in 2016. He is the author of five books, including The Algebraic Mind, Kluge, The Birth of the Mind, and The New York Times best seller Guitar Zero. He has published extensively in fields ranging from human and animal behavior to neuroscience, genetics, and artificial intelligence, often in leading journals such as Science and Nature, and is Professor Emeritus at NYU.

Ernest Davis is a professor of computer science at the Courant Institute of Mathematical Sciences, New York University. He is one of the world's leading experts on commonsense reasoning for artificial intelligence. He is the author of four books, including the textbooks Representations of Commonsense Knowledge and Linear Algebra and Probability for Computer Science Applications, and Verses for the Information Age, a collection of light verse. With his late father Philip J. Davis, he edited Mathematics, Substance and Surmise: Views on the Meaning and Ontology of Mathematics.
