A team from the Massachusetts Institute of Technology (MIT) Computer Science and Artificial Intelligence Laboratory (CSAIL) and Boston University is working to enable human operators to correct a robot’s choice in real time using only brain signals. This work may help develop effective tools for brain-controlled robots and prostheses.
Using data from an EEG monitor that records brain activity, the system can detect whether a person notices an error as a robot performs an object-sorting task. The team’s novel machine-learning algorithms enable the system to classify brain waves within 10 to 30 milliseconds. While the system currently handles relatively simple binary-choice activities, the paper’s senior author says the work suggests we could one day control robots in much more intuitive ways.
Past work in EEG-controlled robotics has required training humans to “think” in a prescribed way that computers can recognize. For example, an operator might have to look at one of two bright light displays, each of which corresponds to a different task for the robot to execute. The downside to this method is that the training process and the act of modulating one’s thoughts can be taxing, particularly for people who supervise tasks in navigation or construction that require intense concentration.
To make the experience more natural, the research team is focusing on brain signals called “error-related potentials” (ErrPs), which are generated whenever our brains notice a mistake. As the robot indicates which choice it plans to make, the system uses ErrPs to determine if the human agrees with the decision.
“As you watch the robot, all you have to do is mentally agree or disagree with what it is doing,” said CSAIL Director Daniela Rus, PhD. “You don’t have to train yourself to think in a certain way; the machine adapts to you, and not the other way around.”
ErrP signals are extremely faint, which means that the system must be fine-tuned to classify the signal and incorporate it into the feedback loop for the human operator. In addition to monitoring the initial ErrPs, the team also sought to detect “secondary errors” that occur when the system doesn’t notice the human’s original correction.
“If the robot’s not sure about its decision, it can trigger a human response to get a more accurate answer,” said Stephanie Gil, PhD, a CSAIL research scientist. “These signals can dramatically improve accuracy, creating a continuous dialogue between human and robot in communicating their choices.”
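The correction loop described above — the robot announces a binary choice, an ErrP classifier checks for disagreement, and a second check catches missed corrections — can be sketched in simplified form. This is a minimal illustration, not the team’s implementation: the classifier here is a stand-in amplitude threshold, and the function names (`classify_errp`, `closed_loop_sort`, `simulate_observer`) are hypothetical.

```python
def classify_errp(eeg_window, threshold=0.5):
    """Hypothetical ErrP detector: returns True if the EEG window
    suggests the observer perceived an error. A trained classifier
    is replaced here by a simple amplitude threshold."""
    return max(eeg_window) > threshold

def closed_loop_sort(robot_choice, correct_choice, observe):
    """One trial of a human-in-the-loop correction protocol for a
    binary sorting task (choices 0 or 1). If an ErrP is detected,
    the robot switches to the other option; a second pass checks
    for a 'secondary error' when the first correction was missed."""
    if classify_errp(observe(robot_choice, correct_choice)):
        robot_choice = 1 - robot_choice  # primary correction
    if classify_errp(observe(robot_choice, correct_choice)):
        robot_choice = 1 - robot_choice  # secondary-error correction
    return robot_choice

def simulate_observer(choice, correct_choice):
    """Toy EEG source: emits a high-amplitude window (an 'ErrP')
    only when the robot's announced choice is wrong."""
    return [0.9] if choice != correct_choice else [0.1]
```

For example, `closed_loop_sort(0, 1, simulate_observer)` detects the simulated ErrP and returns the corrected choice `1`, while a correct initial choice passes through unchanged.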
“This work brings us closer to developing effective tools for brain-controlled robots and prostheses,” says Wolfram Burgard, a professor of computer science at the University of Freiburg, Baden-Württemberg, Germany, who was not involved in the research.
Editor’s note: This story was adapted from materials provided by MIT.