
Researchers at Stanford Use Brain Signals to Control Intelligent Robots


In a paper presented at the 7th Annual Conference on Robot Learning last November, a team of Stanford University researchers introduced an intelligent human brain-robot interface that enables controlling a robot through brain signals. Dubbed NOIR, short for Neural Signal Operated Intelligent Robots, the system uses electroencephalography (EEG) to communicate human intentions to the robots.

According to the researchers, their brain-robot interface proved capable of carrying out 20 distinct everyday tasks, including cooking, cleaning, personal care, and more. Beyond executing tasks, NOIR also aims to learn about and better adapt to users' intentions using AI:

The effectiveness of the system is improved by its synergistic integration of robot learning algorithms, allowing for NOIR to adapt to individual users and predict their intentions. Our work enhances the way humans interact with robots, replacing traditional channels of interaction with direct, neural communication.

NOIR consists of two components: one that decodes user goals by processing brain signals, and another that provides a library of primitive robot skills.

The first component is implemented as a modular pipeline that decodes user intentions from EEG signals. To determine which object to manipulate, the first pipeline module relies on steady-state visually evoked potentials (SSVEPs), i.e., rhythmic brain responses to flickering visual stimuli that arise in the occipital lobe, home to the visual cortex. Each selectable object is made to flicker at a distinct frequency; Canonical Correlation Analysis (CCA) then matches the EEG signal against canonical reference signals built for each frequency, and the best match identifies the object the user is attending to.
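This description maps onto a standard CCA-based SSVEP decoder. The sketch below is a minimal illustration of that general technique, assuming scikit-learn and NumPy; the sampling rate, window length, channel count, and candidate frequencies are hypothetical choices, not details from the NOIR codebase.

```python
# Illustrative sketch of CCA-based SSVEP decoding (not NOIR's actual code).
# For each candidate flicker frequency, build sine/cosine reference signals
# (plus harmonics), compute the canonical correlation with the EEG window,
# and pick the frequency with the highest correlation.
import numpy as np
from sklearn.cross_decomposition import CCA

FS = 250          # assumed EEG sampling rate in Hz
WINDOW_S = 2.0    # assumed analysis window length in seconds
HARMONICS = 2     # number of harmonics per reference signal

def reference_signals(freq, n_samples, fs=FS, harmonics=HARMONICS):
    """Sine/cosine reference bank for one flicker frequency."""
    t = np.arange(n_samples) / fs
    refs = []
    for h in range(1, harmonics + 1):
        refs.append(np.sin(2 * np.pi * h * freq * t))
        refs.append(np.cos(2 * np.pi * h * freq * t))
    return np.column_stack(refs)          # shape: (n_samples, 2*harmonics)

def classify_ssvep(eeg_window, candidate_freqs):
    """Return the flicker frequency whose references best match the EEG.

    eeg_window: array of shape (n_samples, n_channels), e.g. occipital channels.
    """
    n_samples = eeg_window.shape[0]
    scores = []
    for freq in candidate_freqs:
        refs = reference_signals(freq, n_samples)
        cca = CCA(n_components=1)
        x_c, y_c = cca.fit_transform(eeg_window, refs)
        # Canonical correlation of the first component pair.
        scores.append(np.corrcoef(x_c[:, 0], y_c[:, 0])[0, 1])
    return candidate_freqs[int(np.argmax(scores))]

# Usage: each selectable object flickers at its own frequency; the winning
# frequency identifies the object the user is looking at.
eeg = np.random.randn(int(FS * WINDOW_S), 8)   # stand-in for real EEG
print(classify_ssvep(eeg, candidate_freqs=[8.0, 10.0, 12.0, 15.0]))
```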

The next pipeline stage decodes the action to carry out on the target object. This is accomplished using motor imagery (MI) signals, which are translated into one of four classes, namely Left Hand, Right Hand, Legs, and Rest, to identify the body part the user imagines using to perform the task.
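The article does not detail NOIR's MI decoder, but a common baseline for this kind of four-class motor imagery problem is log band-power features in the mu (8-12 Hz) and beta (13-30 Hz) bands fed to a linear classifier. The sketch below illustrates that generic approach with hypothetical parameters and stand-in training data; it is not the paper's implementation.

```python
# Hypothetical four-class motor-imagery decoder sketch (not NOIR's actual code).
# Extracts log band-power in the mu (8-12 Hz) and beta (13-30 Hz) bands per
# channel and classifies with linear discriminant analysis.
import numpy as np
from scipy.signal import welch
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

CLASSES = ["Left Hand", "Right Hand", "Legs", "Rest"]
FS = 250  # assumed sampling rate in Hz

def band_power_features(trials, fs=FS):
    """trials: (n_trials, n_samples, n_channels) -> (n_trials, n_channels*2)."""
    feats = []
    for trial in trials:
        freqs, psd = welch(trial, fs=fs, axis=0, nperseg=fs)
        mu = psd[(freqs >= 8) & (freqs <= 12)].mean(axis=0)
        beta = psd[(freqs >= 13) & (freqs <= 30)].mean(axis=0)
        feats.append(np.log(np.concatenate([mu, beta])))
    return np.array(feats)

# Train on labeled calibration trials, then predict the imagined body part.
rng = np.random.default_rng(0)
X_train = band_power_features(rng.standard_normal((40, FS * 2, 8)))
y_train = rng.integers(0, 4, size=40)          # stand-in labels
clf = LinearDiscriminantAnalysis().fit(X_train, y_train)

X_new = band_power_features(rng.standard_normal((1, FS * 2, 8)))
print(CLASSES[int(clf.predict(X_new)[0])])
```

In practice such decoders are calibrated per user on labeled imagery trials, which is also how a system could adapt to individual users over time.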

Finally, EEG signals are used again to guide a cursor on a screen and select the location where the skill should be executed. A safety mechanism at this stage lets the user confirm or interrupt the operation: the system collects electrical signals generated by facial muscles, so that, for example, a frown or a clenched jaw is interpreted as a negative response.
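One way to picture this veto mechanism: facial-muscle (EMG) activity is far larger in amplitude than EEG, so a burst of activity on frontal channels can be treated as a "stop" signal before a skill runs. The threshold, channel choice, and function names below are illustrative assumptions, not the paper's design.

```python
# Illustrative safety-veto sketch (assumed logic, not NOIR's actual code).
# Facial EMG (e.g., from a frown or jaw clench) is far larger in amplitude
# than EEG, so a simple RMS threshold on frontal channels can flag a veto.
import numpy as np

VETO_RMS_UV = 50.0   # assumed threshold in microvolts

def user_vetoed(frontal_window_uv):
    """frontal_window_uv: (n_samples, n_frontal_channels) in microvolts."""
    rms = np.sqrt(np.mean(frontal_window_uv ** 2, axis=0))
    return bool(np.any(rms > VETO_RMS_UV))

def execute_with_confirmation(skill, params, read_frontal_window):
    """Pause before executing; abort if facial-muscle activity signals 'no'."""
    if user_vetoed(read_frontal_window()):
        print("Veto detected: aborting", skill)
        return False
    print("Executing", skill, "with", params)
    return True
```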

The second component implements a parametrized library of robot skills, including picking, pushing, and placing. These fundamental skills can be combined to create more complex tasks.
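A minimal sketch of what such a skill library might look like, assuming hypothetical primitive names (pick, place, push) and a pose parameter decoded upstream; this is not the paper's actual API.

```python
# Hypothetical parameterized skill library sketch (not NOIR's actual API).
# Each primitive takes parameters (e.g., a target pose decoded from EEG);
# long-horizon tasks are just ordered lists of (skill, params) pairs.
from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple

@dataclass
class Pose:
    x: float
    y: float
    z: float

def pick(pose: Pose) -> None:
    print(f"pick at ({pose.x}, {pose.y}, {pose.z})")

def place(pose: Pose) -> None:
    print(f"place at ({pose.x}, {pose.y}, {pose.z})")

def push(pose: Pose) -> None:
    print(f"push toward ({pose.x}, {pose.y}, {pose.z})")

SKILLS: Dict[str, Callable[[Pose], None]] = {
    "pick": pick, "place": place, "push": push,
}

def run_task(steps: List[Tuple[str, Pose]]) -> None:
    """Execute a long-horizon task as a sequence of primitive skills."""
    for name, pose in steps:
        SKILLS[name](pose)

# A toy two-skill task: move an object from one location to another.
run_task([("pick", Pose(0.4, 0.1, 0.2)), ("place", Pose(0.6, -0.2, 0.2))])
```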

According to the researchers, NOIR accomplished 20 long-horizon tasks composed of 4 to 15 skills each, including meal preparation, cleaning, personal care, and entertainment. On average, tasks required between 1 and 3.33 attempts to succeed, and the mean task completion time was about 20 minutes.

While these results show that a system like NOIR is still less efficient than a human performing the same tasks, especially on first attempts, the researchers are confident that robot learning methods have the potential to reduce those inefficiencies over time.
