Electromyogram (EMG) pattern recognition can be used to classify hand gestures and movements for human-machine interface and prosthetics applications, but it often suffers reliability issues caused by changes in limb position. One method to address this is dual-stage classification, in which the limb position is first determined using additional sensors in order to select among multiple position-specific gesture classifiers. While this improves performance, it also increases model complexity and memory footprint, making a dual-stage classifier difficult to implement on a wearable device with limited resources. In this paper, we present sensor fusion of accelerometer and EMG signals using a hyperdimensional computing model to emulate dual-stage classification in a memory-efficient way. We demonstrate two methods of encoding accelerometer features to act as keys for retrieving position-specific parameters from multiple models stored in superposition. Through validation on a dataset of 13 gestures in 8 limb positions, we obtain a classification accuracy of up to 93.34%, an improvement of 17.79% over a model trained solely on EMG. We achieve this with only a marginal increase in memory footprint over a single-limb-position model, requiring $8\times$ less memory than a traditional dual-stage classification architecture.
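The key-value retrieval described above follows the standard hyperdimensional computing pattern of binding and bundling. The sketch below is an illustrative NumPy toy, not the paper's implementation: it does not use the paper's dataset, accelerometer feature encodings, or model sizes, and all names and dimensions are assumptions. It shows the general mechanism only: each limb position gets a random bipolar key hypervector, position-specific gesture prototypes are bound to their keys and summed into one superposition memory, and unbinding with the current position's key recovers a noisy copy of that position's prototypes.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10000  # hypervector dimensionality (assumed; typical HDC value)

def random_hv():
    """Random bipolar (+1/-1) hypervector."""
    return rng.choice([-1, 1], size=D)

# Hypothetical setup: one key hypervector per limb position and one
# prototype hypervector per (position, gesture) pair, standing in for the
# parameters of a position-specific gesture model.
positions = ["P1", "P2", "P3"]
gestures = ["fist", "open", "pinch"]
pos_keys = {p: random_hv() for p in positions}
prototypes = {(p, g): random_hv() for p in positions for g in gestures}

# Store all position-specific prototypes in superposition: bind each
# prototype to its position key (element-wise multiplication), then
# bundle everything by summing into a single memory vector.
memory = np.zeros(D)
for (p, g), proto in prototypes.items():
    memory += pos_keys[p] * proto

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Retrieval: unbinding with the key of the current limb position (here P2)
# yields a noisy copy of the P2 prototypes; prototypes bound to other
# position keys contribute only quasi-orthogonal noise.
retrieved = memory * pos_keys["P2"]   # multiplication is its own inverse

for p in positions:
    for g in gestures:
        print(p, g, round(cosine(retrieved, prototypes[(p, g)]), 3))
```

In this toy, the retrieved vector is noticeably more similar to the prototypes of the queried position than to those of the other positions, which is the property that lets one superposed memory stand in for multiple position-specific models at a fraction of the storage cost.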