When will we see the first prosthetic hand that can play the piano? It’s impossible to know, but thanks to the work of a transatlantic research team, it may be coming sooner than you’d think. According to principal investigator Zoubin Ghahramani, PhD, a professor of information engineering at the University of Cambridge, England, designers of neuroprostheses currently don’t fully understand how movement translates into brain signals, and current brain-signal decoding methods can’t produce data that could direct a neuroprosthesis with the speed and accuracy of an intact human arm.
“Neurons are noisy information channels,” he was quoted as saying in the June 21 issue of The Engineer. “So you get activity from many, many neurons spiking, and it is a challenge to infer the desired action and direction of movement…. There have been advances in the field over the last decade or so, but the methods people have used have generally been fairly simple linear filtering methods for decoding neural activities… The main thing we’re hoping to contribute is much more advanced machine-learning methods.”
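To give a sense of what the “fairly simple linear filtering methods” Ghahramani mentions look like in practice, the sketch below fits a fixed linear map from neural firing rates to two-dimensional hand velocity. It is an illustrative example only; the synthetic data, variable names, and use of NumPy are assumptions for demonstration, not the team’s code or data.

```python
import numpy as np

# Illustrative linear decoder: map neural firing rates to 2-D hand velocity.
# Synthetic data stand in for recorded spike counts; nothing here reflects
# the Cambridge/Stanford team's actual recordings or methods.
rng = np.random.default_rng(0)

n_neurons, n_samples = 50, 1000
true_weights = rng.normal(size=(n_neurons, 2))          # hidden tuning of each neuron
firing_rates = rng.poisson(lam=5.0, size=(n_samples, n_neurons)).astype(float)
velocity = firing_rates @ true_weights + rng.normal(scale=2.0, size=(n_samples, 2))

# Fit decoder weights by least squares (ridge-regularized for numerical stability).
ridge = 1e-3 * np.eye(n_neurons)
W = np.linalg.solve(firing_rates.T @ firing_rates + ridge, firing_rates.T @ velocity)

# Decode: predicted hand velocity for a new window of neural activity.
new_rates = rng.poisson(lam=5.0, size=(1, n_neurons)).astype(float)
predicted_velocity = new_rates @ W
print(predicted_velocity)
```

Because the weights in a decoder like this are fixed once trained, its performance degrades as the recorded signals change, which is the limitation the team’s machine-learning approach is meant to address.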
Ghahramani’s team, which includes researchers from both Cambridge and Stanford University, Palo Alto, California, is working to develop algorithms for decoding neural activity into physical commands for robotic devices. Working under a £410,000 Engineering and Physical Sciences Research Council (EPSRC) grant, the team hopes to develop decoding algorithms that are more adaptive than existing ones. The new algorithms will be designed to deal with the changeability of the brain environment, where “electrodes might drift or the neural wiring of the brain changes gradually over time,” Ghahramani explained.
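One way to picture the kind of adaptivity Ghahramani describes is a decoder whose weights are updated continually as new observations arrive, so it can track gradual changes such as electrode drift. The recursive-least-squares sketch below is a hypothetical illustration of that general idea, under assumed parameter names and a simple forgetting-factor scheme; it is not the EPSRC project’s algorithm.

```python
import numpy as np

class AdaptiveLinearDecoder:
    """Toy recursive-least-squares decoder that keeps adapting its weights.

    Purely illustrative: the forgetting-factor scheme and parameter names are
    assumptions, not the Cambridge/Stanford team's actual method.
    """

    def __init__(self, n_neurons, n_outputs, forgetting=0.99):
        self.W = np.zeros((n_neurons, n_outputs))   # decoder weights
        self.P = np.eye(n_neurons) * 1e3            # inverse correlation estimate
        self.lam = forgetting                       # <1 discounts older data

    def update(self, rates, target):
        """Incorporate one (firing-rate vector, observed movement) pair."""
        rates = rates.reshape(-1, 1)                # column vector
        gain = self.P @ rates / (self.lam + rates.T @ self.P @ rates)
        error = target - (rates.T @ self.W).ravel()
        self.W += gain @ error.reshape(1, -1)
        self.P = (self.P - gain @ rates.T @ self.P) / self.lam

    def decode(self, rates):
        """Predict movement from a firing-rate vector."""
        return rates @ self.W
```

The forgetting factor controls how quickly old data are discounted, so the decoder gradually re-weights neurons as their signals drift rather than relying on a single, fixed calibration session.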