Researchers in the Utah NeuroRobotics Lab at the University of Utah introduced artificial intelligence (AI) to reduce the cognitive burden experienced by people using upper-limb prostheses. By integrating proximity and pressure sensors into a commercial bionic hand and then training an artificial neural network on grasping postures, the researchers developed an autonomous grasping approach that gives prosthesis users a more natural, intuitive way to grip objects. They said that using AI to fine-tune a robotic prosthesis improved manual dexterity by striking the right balance between human and machine control.

Four participants with transradial amputations performed tasks such as picking up small objects and raising a cup, using different gripping styles without extensive training or practice. When working in tandem with the AI technology, they demonstrated greater grip security, greater grip precision, and less mental effort.
“As lifelike as bionic arms are becoming, controlling them is still not easy or intuitive,” said Marshall Trout, PhD, who led the research with Jacob A. George, PhD. “Nearly half of all users will abandon their prosthesis, often citing poor control and high cognitive burden.”
Two obstacles stand in the way. Most commercial bionic arms and hands can’t replicate the sense of touch. And human dexterity also depends on subconscious models in the brain that simulate and anticipate hand-object interactions; a prosthesis would need to learn these automatic responses over time.
The Utah researchers addressed the first problem by outfitting a Taska prosthetic hand with custom fingertips. In addition to detecting pressure, the fingertips were equipped with optical proximity sensors that provide an exceptionally fine sense of touch. The fingers could detect an effectively weightless cotton ball being dropped on them, for example.
For the second problem, they trained an artificial neural network model on the proximity data so that the fingers would automatically move to the exact distance needed to grasp an object. Because each finger has its own sensor and can “see” what is in front of it, the digits worked in parallel to form a stable grasp around objects of any shape.
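The per-finger behavior described above can be sketched as a simple control loop: each digit independently closes toward a target standoff distance based on its own proximity reading. This is an illustrative sketch only; the sensor ranges, gains, and the use of a proportional rule (rather than the study's trained neural network) are assumptions for clarity.

```python
# Hypothetical sketch: each digit pre-shapes independently using its own
# proximity reading (mm to object), closing until it reaches a target
# standoff distance. All values below are illustrative, not from the study.

TARGET_STANDOFF_MM = 2.0   # desired fingertip-object gap before contact
GAIN = 0.1                 # proportional closing gain (illustrative)

def finger_step(position, proximity_mm):
    """Advance one digit toward its target standoff; larger gap -> larger step."""
    error = proximity_mm - TARGET_STANDOFF_MM
    return position + GAIN * error

def preshape(positions, proximities):
    """Update all digits in parallel, each from its own sensor reading."""
    return [finger_step(p, d) for p, d in zip(positions, proximities)]

positions = [0.0] * 5                          # fully open hand
proximities = [30.0, 12.0, 8.0, 25.0, 5.0]     # per-digit readings, in mm
positions = preshape(positions, proximities)   # digits farther away close faster
```

Because every digit runs the same rule on its own sensor, the hand conforms to an object's shape without any central plan, which is the parallelism the researchers describe.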
But one problem still remained. What if the user didn’t intend to grasp the object in that exact manner? What if, for example, they wanted to open their hand to drop the object? To address this, the researchers created a bioinspired approach that involved sharing control between the user and the AI agent. The success of the approach relied on finding the right balance between human and machine control.
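The shared-control idea can be sketched as a weighted blend of the user's command and the AI agent's suggestion. The blending weight and clamping below are assumptions for illustration; the study's actual arbitration scheme may differ.

```python
# Hypothetical sketch of shared control: the final grip command blends the
# user's intent (e.g., decoded from muscle signals) with the AI grasp
# controller's suggestion. Commands are normalized to [0, 1] (open..closed);
# the 50/50 weighting is illustrative, not the study's tuned balance.

def shared_command(user_cmd, ai_cmd, ai_weight=0.5):
    """Blend user and machine grip commands; clamp to the valid range."""
    blended = (1 - ai_weight) * user_cmd + ai_weight * ai_cmd
    return max(0.0, min(1.0, blended))

# If the user drives the hand open (0.0) while the AI suggests closing (0.8),
# the blended command still moves toward open, so the user retains authority.
```

The key design point is that the machine's contribution nudges the grasp toward stability without overriding a clear user command, which is what "finding the right balance" means in practice.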
“What we don’t want is the user fighting the machine for control. In contrast, here the machine improved the precision of the user while also making the tasks easier,” Trout said. “In essence, the machine augmented their natural control so that they could complete tasks without having to think about them.”
“By adding some artificial intelligence, we were able to offload this aspect of grasping to the prosthesis itself,” George said. “The end result is more intuitive and more dexterous control, which allows simple tasks to be simple again.”
Editor’s note: This story was adapted from materials provided by the University of Utah.
The open-access study, “Shared human-machine control of an intelligent bionic hand improves grasping and decreases cognitive burden for transradial amputees,” was published in Nature Communications.
To see a video of the technique, visit the University of Utah website.
