Efficient and quick remote communication in search and rescue operations can be life-saving for first responders. However, when operating in the field, means of communication based on text, image, and audio are unsuitable for several disaster scenarios. In this paper, we present a smartwatch-based application that utilizes a Deep Learning (DL) model to recognize a set of predefined arm gestures and maps them into Morse code via vibrations, enabling remote communication among first responders. The model's performance was evaluated via cross-validation on 4,200 gestures performed by 7 subjects wearing a smartwatch on their dominant arm. Our DL model relies on convolutional pooling and surpasses the performance of existing DL approaches and common machine learning classifiers, achieving gesture recognition accuracy above 95%. We conclude by discussing the results and providing future directions.
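To make the gesture-to-Morse mapping concrete, the following is a minimal illustrative sketch, not the paper's implementation: it converts a recognized gesture label into a vibration timing pattern (alternating on/off durations in milliseconds) of the kind a smartwatch vibrator API typically consumes. The gesture names, the gesture-to-letter assignments, and the timing constants are hypothetical assumptions for illustration only.

```python
# Sketch only: gesture labels, letter assignments, and timings are assumed,
# not taken from the paper.
DOT_MS, DASH_MS, GAP_MS = 100, 300, 100  # assumed vibration timing units

MORSE = {"S": "...", "O": "---"}  # small subset of International Morse code

# Hypothetical mapping from predefined arm gestures to Morse letters.
GESTURE_TO_LETTER = {"raise_arm": "S", "circle_arm": "O"}

def vibration_pattern(gesture: str) -> list[int]:
    """Return [on, off, on, off, ...] durations (ms) for the watch vibrator."""
    code = MORSE[GESTURE_TO_LETTER[gesture]]
    pattern: list[int] = []
    for mark in code:
        pattern.append(DASH_MS if mark == "-" else DOT_MS)  # vibration on
        pattern.append(GAP_MS)                              # pause between marks
    return pattern

if __name__ == "__main__":
    # "raise_arm" -> "S" -> "..." -> three short pulses with gaps
    print(vibration_pattern("raise_arm"))  # [100, 100, 100, 100, 100, 100]
```

A pattern of this shape could then be handed to the platform's vibration facility (e.g., a waveform-style vibration call on a wearable OS) to emit the Morse sequence on the receiving responder's watch.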