Automatic Cued Speech Recognition (ACSR) provides an intelligent human-machine interface for visual communications, where the Cued Speech (CS) system utilizes lip movements and hand gestures to code spoken language for hearing-impaired people. Previous ACSR approaches often utilize direct feature concatenation as the main fusion paradigm. However, the asynchronous modalities (i.e., lip, hand shape, and hand position) in CS may cause interference for feature concatenation. To address this challenge, we propose a transformer-based cross-modal mutual learning framework to promote multi-modal interaction. Compared with vanilla self-attention, our model forces modality-specific information of different modalities to pass through a modality-invariant codebook, collating linguistic representations for tokens of each modality. The shared linguistic knowledge is then used to re-synchronize the multi-modal sequences. Moreover, we establish a novel large-scale multi-speaker CS dataset for Mandarin Chinese. To our knowledge, this is the first work on ACSR for Mandarin Chinese. Extensive experiments are conducted for different languages (i.e., Chinese, French, and British English). Results demonstrate that our model outperforms the state-of-the-art recognition performance by a large margin.
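To illustrate the codebook-mediated interaction described above, the following is a minimal PyTorch sketch, not the authors' implementation: tokens from each modality attend over a shared, learnable modality-invariant codebook rather than directly over the other modalities, and the retrieved entries serve as shared linguistic context for re-synchronizing the asynchronous lip, hand-shape, and hand-position streams. All names (CodebookAttention, num_codes, d_model) are illustrative assumptions.

```python
import torch
import torch.nn as nn

class CodebookAttention(nn.Module):
    """Hypothetical sketch: cross-attention from modality tokens to a shared codebook."""
    def __init__(self, d_model: int = 256, num_codes: int = 64, num_heads: int = 4):
        super().__init__()
        # Learnable modality-invariant codebook shared by all modalities.
        self.codebook = nn.Parameter(torch.randn(num_codes, d_model))
        # Cross-attention: modality tokens are queries; codebook entries are keys/values.
        self.attn = nn.MultiheadAttention(d_model, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (batch, seq_len, d_model) from one modality (lip, hand shape, or hand position)
        codes = self.codebook.unsqueeze(0).expand(tokens.size(0), -1, -1)
        shared, _ = self.attn(query=tokens, key=codes, value=codes)
        # Residual + norm: each token is enriched with the shared linguistic context.
        return self.norm(tokens + shared)

if __name__ == "__main__":
    layer = CodebookAttention()
    lip = torch.randn(2, 100, 256)    # e.g., 100 lip frames
    hand = torch.randn(2, 80, 256)    # e.g., 80 hand frames (asynchronous length)
    lip_sync, hand_sync = layer(lip), layer(hand)
    print(lip_sync.shape, hand_sync.shape)
```

Because every modality queries the same codebook, the resulting features live in a common linguistic space, which is what allows sequences of different lengths and timings to be aligned downstream.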