Researchers Publish Survey of Explainable AI

A team of researchers from IBM Watson and Arizona State University has published a survey of work in Explainable AI Planning (XAIP). The survey covers 67 papers and charts recent trends in the field.

The team, led by Prof. Subbarao Kambhampati of ASU's Yochan Lab, focused its review on automated planning systems: those that produce sequences of actions (or plans) intended to achieve a goal state. Explainable planning systems can answer questions about why a particular action or sequence of actions was chosen. The team noted that explainable systems in this field can be categorized as algorithm-based, model-based, or plan-based, and while all three types have seen increased research in recent years, most work has been done on model-based systems.
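To make the setting concrete, here is a minimal sketch of the kind of planner these systems wrap: a breadth-first search that returns the sequence of actions transforming a start state into a goal state. The toy domain and names are invented for illustration and are not from the survey.

from collections import deque

def plan(start, goal, actions):
    # Breadth-first search over states; `actions` maps an action name to a
    # function state -> successor state, returning None if inapplicable.
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        state, steps = frontier.popleft()
        if state == goal:
            return steps            # the plan: an ordered list of action names
        for name, step in actions.items():
            nxt = step(state)
            if nxt is not None and nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, steps + [name]))
    return None                     # no plan reaches the goal

# Toy domain: move a robot along a line of cells from cell 0 to cell 3.
actions = {
    "move-right": lambda s: s + 1 if s < 3 else None,
    "move-left":  lambda s: s - 1 if s > 0 else None,
}
print(plan(0, 3, actions))          # ['move-right', 'move-right', 'move-right']

An explainable planner would additionally answer questions such as "why move-right rather than move-left?", for example by pointing out that move-left never decreases the distance to the goal.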

Explainable AI (XAI) has been an active research topic in recent years, spurred by DARPA's 2016 initiative. The widespread adoption of machine learning for "perception" problems such as computer vision and natural language processing has led to the development of explainability techniques for classifiers, including LIME and AllenNLP Interpret. But while perception is important for determining the current state of the environment, an autonomous system such as a robot, a self-driving car, or a game-playing AI must also decide what to do. These AI systems often employ planning, which generates a series of actions for the AI to take in order to achieve its goal.
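As one concrete example of a classifier-explanation technique, LIME explains an individual prediction by fitting a simple local surrogate model around it. A minimal sketch, assuming the lime and scikit-learn packages are installed; the iris dataset and random forest here are stand-ins for any tabular data and classifier:

from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

data = load_iris()
model = RandomForestClassifier().fit(data.data, data.target)

explainer = LimeTabularExplainer(
    data.data,
    feature_names=data.feature_names,
    class_names=data.target_names,
)

# Explain one prediction: which features pushed it toward class 0?
exp = explainer.explain_instance(
    data.data[0], model.predict_proba, labels=(0,), num_features=2
)
print(exp.as_list(label=0))   # e.g. [('petal width (cm) <= 0.30', 0.5), ...]

Note that this explains a single classification, not a sequence of decisions, which is exactly the gap XAIP aims to fill.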

Explainable AI Planning (XAIP) systems can answer questions about their plans; for example, why a particular action was or was not included in a plan. The team categorized these systems as algorithm-based, model-based, or plan-based. Algorithm-based explanations are usually most helpful to the system designer debugging the algorithm, rather than to an end user. Plan-based explanations use summarization or abstraction to help users understand plans that operate "over long time horizons and over large state spaces." Most research has been done on model-based explanations, which account for the fact that users have "considerably less computational ability" than the AI and often hold a mental model that differs from "ground truth." For these systems, explanation requires reconciling the user's mental model with the system's model.
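That reconciliation step can be pictured as computing the difference between the two models and reporting only what the user is missing. The toy sketch below represents each model as a map from actions to precondition sets; this representation and the domain are simplifications invented for illustration, not the survey's formalism:

# Toy model reconciliation: each model maps an action to its preconditions.
system_model = {
    "unload-cargo": {"at-airport", "cargo-loaded", "crane-available"},
    "fly":          {"at-airport", "fueled"},
}
user_model = {
    "unload-cargo": {"at-airport", "cargo-loaded"},   # user doesn't know about the crane
    "fly":          {"at-airport", "fueled"},
}

def explain(action):
    # The explanation is the set of model differences that reconcile the
    # user's incomplete mental model with the system's ground truth.
    missing = system_model[action] - user_model.get(action, set())
    extra = user_model.get(action, set()) - system_model[action]
    return missing, extra

missing, extra = explain("unload-cargo")
print("unload-cargo also requires:", missing)   # {'crane-available'}

Presenting only the minimal difference keeps the explanation short, which matters given the user's limited computational ability.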

DARPA's XAI program notes that one motivation for explainable systems is to increase users' trust in the results produced by AI. However, Kambhampati's research team points out that the explanation process can also be "hijacked" to produce explanations that "no longer are true but rather are whatever users find to be satisfying." Other researchers suggest that such deception may even be necessary if AI and robots are to be effective in society. Deep-learning pioneer Geoffrey Hinton downplayed the need for explainability, tweeting:

Suppose you have cancer and you have to choose between a black box AI surgeon that cannot explain how it works but has a 90% cure rate and a human surgeon with an 80% cure rate. Do you want the AI surgeon to be illegal?

Kambhampati characterized this as a "false dichotomy," arguing that in the long term we should expect both accuracy and explainability.
 
