An open-access study published in Nature Scientific Data presents a multimodal dataset for investigating whether adding eye tracking and first-person video can make intent recognition for prosthetic control more stable. The dataset contains surface electromyography (sEMG) and accelerometry of the forearm, together with gaze, first-person video, and inertial measurements of the head, recorded from 15 people with transradial amputations and 30 people without amputations performing grasping tasks. According to the study's authors, computer vision (the automatic recognition of objects in the field of view) can be combined with eye tracking to partially automate prosthetic hands.
The research was funded by the Swiss National Science Foundation, and the dataset has been made available to the scientific community.
“Our eyes move constantly in quick movements called saccades,” Henning Müller, PhD, a professor of business informatics at the University of Applied Sciences Western Switzerland and the University of Geneva, Switzerland, and a study author, told Medical Xpress. “But when you go to grasp an object, your eyes fixate on it for a few hundred milliseconds. Consequently, eye tracking provides valuable information for detecting both the object a person intends to grasp and the gesture likely required.”
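The fixation behavior Müller describes is what makes gaze useful as a control signal: saccades are fast and scattered, while a pre-grasp fixation dwells in one spot for a few hundred milliseconds. A standard way to separate the two is a dispersion-threshold detector. The sketch below is illustrative only and is not taken from the study; the gaze-sample format (time in seconds, normalized x/y) and both thresholds are assumptions.

```python
# Minimal dispersion-threshold (I-DT-style) fixation detector.
# Gaze samples are (t_seconds, x, y) tuples in normalized screen
# coordinates; the thresholds below are illustrative assumptions,
# not values from the published dataset.

def detect_fixations(samples, max_dispersion=0.05, min_duration=0.2):
    """Return (t_start, t_end, center_x, center_y) per fixation."""
    fixations = []
    i, n = 0, len(samples)
    while i < n:
        j = i
        # Grow the window while the gaze points stay tightly clustered.
        while j + 1 < n:
            xs = [s[1] for s in samples[i:j + 2]]
            ys = [s[2] for s in samples[i:j + 2]]
            if (max(xs) - min(xs)) + (max(ys) - min(ys)) > max_dispersion:
                break
            j += 1
        duration = samples[j][0] - samples[i][0]
        if duration >= min_duration:  # "a few hundred milliseconds"
            xs = [s[1] for s in samples[i:j + 1]]
            ys = [s[2] for s in samples[i:j + 1]]
            fixations.append((samples[i][0], samples[j][0],
                              sum(xs) / len(xs), sum(ys) / len(ys)))
            i = j + 1
        else:
            i += 1  # too short: treat as saccade samples
    return fixations
```

Given a stream mixing a 380 ms dwell on one point with two widely separated saccade samples, the detector reports a single fixation at the dwell location; its center could then be matched against objects recognized in the first-person video.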
To link common hand gestures with signals from the muscles of the residual limb and the new data sources, Müller and his colleagues studied 15 people with upper-limb amputations and 30 people without amputations in an identical experimental setting. Each participant had 12 surface electrodes affixed to the forearm and acceleration sensors on the arm and head, while eye-tracking glasses recorded gaze as the participants performed 10 common movements for grasping and manipulating objects, such as picking up a pencil or a fork, or playing with a ball. Computer modeling of the gestures enabled the team to build a new multimodal dataset of hand movements. Alongside the electrode recordings, it contains forearm acceleration, eye tracking, first-person video (the input for computer-vision analysis) and measurements of head movements.
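One practical consequence of recording so many modalities is that each stream arrives at its own sampling rate, so sEMG, accelerometry, gaze and head-IMU data must be cut into time-aligned windows before modeling. The sketch below is a hypothetical container for such a window; the field names follow the article, but the shapes, sampling rates and the `slice_window` helper are assumptions, not the dataset's actual schema.

```python
# Hypothetical time-aligned window over the multimodal streams.
# Modality names follow the article; channel counts and rates are
# illustrative assumptions, not the dataset's real layout.

from dataclasses import dataclass
import numpy as np


@dataclass
class MultimodalWindow:
    emg: np.ndarray          # (samples, 12) forearm sEMG channels
    forearm_acc: np.ndarray  # (samples, 3) forearm accelerometry
    gaze: np.ndarray         # (samples, 2) normalized gaze coordinates
    head_imu: np.ndarray     # (samples, 6) head inertial measurements
    label: int               # one of the 10 grasp/manipulation movements


def slice_window(stream: np.ndarray, rate_hz: float,
                 t0: float, t1: float) -> np.ndarray:
    """Cut the [t0, t1) time window out of a stream sampled at rate_hz."""
    return stream[int(t0 * rate_hz):int(t1 * rate_hz)]
```

Because each modality is sliced by time rather than by sample index, a half-second window yields 500 rows from a 1 kHz sEMG stream but only 50 rows from 100 Hz gaze data, and the two stay aligned on the same wall-clock interval.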