When humans socially interact with another agent (e.g., human, pet, or robot) through touch, they apply forces that vary in magnitude, direction, location, contact area, and duration. While previous work on touch gesture recognition has focused on the spatio-temporal distribution of normal forces, we hypothesize that the addition of shear forces will permit more reliable classification. We present a soft, flexible skin with an array of tri-axial tactile sensors for the arm of a person or robot. We use it to collect data on 13 touch gesture classes through user studies and train a Convolutional Neural Network (CNN) to learn spatio-temporal features from the recorded data. The network achieved a recognition accuracy of 74% with normal and shear data, compared to 66% using only normal force data. Adding distributed shear data improved classification accuracy for 11 out of 13 touch gesture classes.
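To make the classification setup concrete, the sketch below shows one plausible way to map a window of tri-axial tactile frames to the 13 gesture classes with a spatio-temporal CNN. It is not the authors' architecture: the array geometry (8x8 taxels), window length (32 frames), layer sizes, and the `TouchGestureCNN` name are illustrative assumptions; only the three force channels (normal plus two shear) and the 13-class output follow from the abstract.

```python
# Minimal sketch (assumed architecture, not the paper's model): a 3D CNN
# over tri-axial tactile frames, predicting one of 13 touch gesture classes.
# Assumed input shape: (batch, 3 force axes, T time steps, H, W taxels).
import torch
import torch.nn as nn

class TouchGestureCNN(nn.Module):
    def __init__(self, num_classes: int = 13):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(3, 16, kernel_size=3, padding=1),  # joint spatio-temporal convolution over x/y/z force channels
            nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),                     # pool over time and the taxel grid
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 3, T, H, W) window of normal + shear force frames
        z = self.features(x).flatten(1)
        return self.classifier(z)

if __name__ == "__main__":
    model = TouchGestureCNN()
    frames = torch.randn(4, 3, 32, 8, 8)  # dummy batch: 4 windows of 32 frames from an 8x8 tri-axial array
    logits = model(frames)                # -> (4, 13) class scores
    print(logits.shape)
```

Dropping the two shear channels (using a single-channel input of normal force only) gives the ablation the abstract compares against, where accuracy fell from 74% to 66%.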