For lower-arm amputees, prosthetic hands promise to restore most physical interaction capabilities. This requires accurately predicting hand gestures capable of grasping varying objects and executing them in a timely manner, as intended by the user. Current approaches often rely on physiological signal inputs, such as Electromyography (EMG) signals from residual limb muscles, to infer the intended motion. However, limited signal quality, user diversity, and high variability adversely affect system robustness. Instead of relying solely on EMG signals, our work augments EMG-based intent inference with physical-state probabilities obtained through machine learning and computer vision methods. To this end, we: (1) study state-of-the-art deep neural network architectures to select a performant source of knowledge transfer for the prosthetic hand, and (2) use a dataset containing object images and probability distributions over grasp types as a new form of labeling: instead of the absolute values of zero and one used as conventional classification labels, our labels are sets of probabilities that sum to 1. The proposed method generates probabilistic predictions from the visual information of a prosthetic hand's palm camera, which can be fused with EMG-derived probabilities over grasps. Our results demonstrate that InceptionV3 achieves the highest accuracy with 0.95 angular similarity, followed by MobileNetV2 (1.4) with 0.93 at roughly 20% of the operations.
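The angular-similarity metric and the probabilistic fusion described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes angular similarity is defined as 1 − arccos(cosine similarity)/π, and uses a simple elementwise-product fusion of the vision and EMG distributions as a stand-in for whatever fusion rule the full system applies.

```python
import numpy as np

def angular_similarity(p, q):
    # angular similarity = 1 - arccos(cosine similarity) / pi,
    # ranging from 0 (opposite) to 1 (identical direction)
    cos = np.dot(p, q) / (np.linalg.norm(p) * np.linalg.norm(q))
    cos = np.clip(cos, -1.0, 1.0)  # guard against floating-point drift
    return 1.0 - np.arccos(cos) / np.pi

def fuse(p_vision, p_emg):
    # naive product fusion of the two probability sources, renormalized
    # so the fused grasp distribution again sums to 1
    fused = p_vision * p_emg
    return fused / fused.sum()

# soft label: a probability distribution over grasp types summing to 1,
# rather than a one-hot vector (values are illustrative)
label      = np.array([0.6, 0.3, 0.1])
prediction = np.array([0.5, 0.4, 0.1])
print(angular_similarity(label, prediction))
print(fuse(label, prediction))
```

Training against such soft labels (e.g. with a cross-entropy or KL-divergence loss) lets the vision model express ambiguity between plausible grasps for the same object, which is exactly what makes its output fusable with the EMG probabilities.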