Teeth gestures can serve as an alternative input modality for a range of situations and accessibility needs. In this paper, we present TeethTap, a novel eyes-free and hands-free input technique that recognizes up to 13 discrete teeth-tapping gestures. TeethTap adopts a wearable 3D-printed earpiece with an IMU sensor and a contact microphone behind each ear, which work in tandem to capture jaw motion and sound data, respectively. TeethTap uses a support vector machine to separate gestures from noise by fusing the acoustic and motion data, and classifies gestures with K-Nearest-Neighbor (KNN) using a Dynamic Time Warping (DTW) distance measure over the motion data. A user study with 11 participants demonstrated that TeethTap could recognize 13 gestures with a real-time classification accuracy of 90.9% in a laboratory environment. We further examined how recognition accuracy differs across teeth gestures when sensors are worn on one side versus both sides. Moreover, we explored the activation gesture in real-world conditions, including eating, speaking, walking, and jumping. Based on our findings, we further discuss potential applications and practical challenges of integrating TeethTap into future devices.
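To make the classification stage concrete, the sketch below shows one way a KNN classifier with a DTW distance over IMU motion segments could be implemented. It is a minimal, hypothetical example, not the authors' implementation: the function names (`dtw_distance`, `knn_classify`), the toy templates, and the window lengths are our assumptions, and the SVM stage that separates gestures from noise is omitted.

```python
"""Minimal sketch (assumed, not the authors' code) of KNN gesture
classification with a Dynamic Time Warping distance over IMU segments."""
import numpy as np


def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Classic O(len(a)*len(b)) DTW between two multivariate time series.

    `a` and `b` have shape (T, D), e.g. (samples, IMU axes).
    """
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])   # frame-wise distance
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return float(cost[n, m])


def knn_classify(query: np.ndarray,
                 templates: list[tuple[np.ndarray, str]],
                 k: int = 3) -> str:
    """Label a query motion segment by majority vote of its k DTW-nearest templates."""
    dists = sorted((dtw_distance(query, tmpl), label) for tmpl, label in templates)
    top_labels = [label for _, label in dists[:k]]
    return max(set(top_labels), key=top_labels.count)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy templates: two synthetic "gesture" shapes over 6 IMU channels.
    templates = []
    for label, freq in [("tap_left", 1.0), ("tap_right", 2.0)]:
        for _ in range(3):
            t = np.linspace(0, 1, 50)
            sig = np.sin(2 * np.pi * freq * t)[:, None] + 0.05 * rng.standard_normal((50, 6))
            templates.append((sig, label))
    # A noisy query resembling the 2 Hz template should be labeled "tap_right".
    t = np.linspace(0, 1, 60)
    query = np.sin(2 * np.pi * 2.0 * t)[:, None] + 0.05 * rng.standard_normal((60, 6))
    print(knn_classify(query, templates, k=3))
```

DTW is a natural fit here because teeth-tap motion segments vary in duration between repetitions and users; the elastic alignment lets template matching tolerate those timing differences without explicit resampling.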