For people living with upper-limb amputations, powered prosthetic hands capable of executing various grasp patterns are a highly sought-after solution.
A crucial requirement for such prosthetic hands, however, is accurate identification of the intended grasp pattern and corresponding activation of the prosthetic digits. Vision-based grasp classification techniques can improve coordination between users with upper-limb amputations and their prosthetic hands without requiring physical contact. While deep learning methods, particularly convolutional neural networks (CNNs), are used to process visual information for classification, a key challenge lies in developing a model that generalizes effectively across diverse object shapes and accurately classifies grasp classes.
To address this, a paper published in Biomedical Physics & Engineering Express proposed a compact CNN model, GraspCNet, designed specifically for grasp classification in prosthetic hands. The use of separable convolutions reduces the computational burden, making the model potentially suitable for real-time applications on embedded systems. GraspCNet is designed to learn and generalize from object shapes, allowing it to classify unseen objects beyond those included in the training dataset.
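To see why separable convolutions lighten the computational load, it helps to compare parameter counts. The sketch below (with hypothetical layer sizes, not taken from the GraspCNet paper) contrasts a standard convolution with a depthwise separable one, which splits the operation into a per-channel spatial filter followed by a 1x1 pointwise convolution:

```python
def conv_params(c_in, c_out, k):
    # Standard convolution: one k x k filter per (input, output) channel pair.
    return c_in * c_out * k * k

def separable_conv_params(c_in, c_out, k):
    # Depthwise step: one k x k filter per input channel,
    # followed by a pointwise 1x1 convolution that mixes channels.
    return c_in * k * k + c_in * c_out

# Hypothetical layer sizes for illustration only.
c_in, c_out, k = 64, 128, 3
standard = conv_params(c_in, c_out, k)            # 73728 parameters
separable = separable_conv_params(c_in, c_out, k) # 8768 parameters
print(f"reduction: {standard / separable:.1f}x")  # roughly 8x fewer parameters
```

For typical layer widths the separable form needs roughly an order of magnitude fewer parameters and multiply-accumulates, which is what makes such models attractive for embedded prosthetic controllers.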
The proposed model was trained and tested on several standard object data sets. A cross-validation strategy was adopted to evaluate performance in both seen and unseen object class scenarios. The average accuracy achieved was 82.22 percent for seen object classes and 75.48 percent for unseen ones. In computer-based real-time experiments, the GraspCNet model achieved an accuracy of nearly 70 percent.
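Evaluating on "unseen" object classes typically means holding out entire classes during training, rather than just individual samples. The sketch below illustrates one way such splits could be built; the fold construction and object names are illustrative assumptions, not details from the paper:

```python
def unseen_class_folds(object_classes, n_folds):
    """Partition object classes into folds; each fold's classes are
    withheld entirely from training, so its objects are 'unseen' at test time."""
    classes = sorted(set(object_classes))
    folds = [classes[i::n_folds] for i in range(n_folds)]
    splits = []
    for held_out in folds:
        train = [c for c in classes if c not in held_out]
        splits.append((train, held_out))
    return splits

# Hypothetical grasp-relevant object classes, not from the paper's data sets.
classes = ["ball", "bottle", "box", "cup", "pen", "plate"]
for train, test in unseen_class_folds(classes, 3):
    print(train, "->", test)
```

Because every test class is absent from the corresponding training set, accuracy on these folds measures generalization to novel object shapes rather than memorization.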
A comparative analysis with state-of-the-art techniques revealed that the proposed GraspCNet model outperformed most benchmark techniques and demonstrated performance comparable to the DcnnGrasp method. The compact nature of the GraspCNet model suggests its potential for integration with other sensing modalities in prosthetic hands.
The paper, "Vision-aided grasp classification: design and evaluation of compact CNN for prosthetic hands," was published in the journal Biomedical Physics & Engineering Express.

