Sign Language Recognition (SLR) is an essential yet challenging task, since sign language is performed with fast and complex movements of hand gestures, body posture, and even facial expressions. In this work, we investigate two questions: how fine-tuning on datasets from other sign languages helps improve sign recognition quality, and whether sign recognition is possible in real time without using a GPU. Datasets of three different sign languages (American Sign Language, WLASL; Turkish, AUTSL; Russian, RSL) have been used to validate the models. The average speed of the system reaches 3 predictions per second, which meets the requirements of the real-time scenario. This prototype will help speech- or hearing-impaired people communicate with others over the internet. The results show that additional training of the model on data from another sign language almost always improves the quality of gesture recognition. We also provide code for reproducing the model training experiments, converting models to the ONNX format, and running inference for real-time gesture recognition.
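To make the ONNX-conversion and CPU-only inference step concrete, the sketch below shows one possible pipeline. It is illustrative only: the placeholder network, the assumed input of 32 frames of 126 keypoint features, and the file name `slr.onnx` are assumptions, not the paper's actual model or code.

```python
# Minimal sketch: export a trained PyTorch recognizer to ONNX and run
# GPU-free inference with onnxruntime. All names/shapes are assumptions.
import numpy as np
import onnxruntime as ort
import torch

# Placeholder standing in for the trained sign-recognition network;
# input is assumed to be 32 frames x 126 pose/hand keypoint features.
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(32 * 126, 1000))
model.eval()

dummy = torch.randn(1, 32, 126)
torch.onnx.export(model, dummy, "slr.onnx",
                  input_names=["keypoints"], output_names=["logits"])

# CPUExecutionProvider keeps inference GPU-free, matching the
# real-time, CPU-only setting described in the abstract.
sess = ort.InferenceSession("slr.onnx", providers=["CPUExecutionProvider"])
logits = sess.run(["logits"], {"keypoints": dummy.numpy()})[0]
print("predicted class id:", int(np.argmax(logits)))
```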