Sign language translation, as a technology with profound social significance, has attracted growing research interest in recent years. However, existing sign language translation methods need to read the entire video before starting translation, which leads to high inference latency and limits their application in real-life scenarios. To solve this problem, we propose SimulSLT, the first end-to-end simultaneous sign language translation model, which translates sign language videos into target text concurrently with the incoming video. SimulSLT is composed of a text decoder, a boundary predictor, and a masked encoder. We 1) use the wait-k strategy for simultaneous translation; 2) design a novel boundary predictor based on the integrate-and-fire module to output gloss boundaries, which are used to model the correspondence between the sign language video and the glosses; and 3) propose an innovative re-encode method that helps the model obtain richer contextual information by allowing the existing video features to interact fully. Experimental results on the RWTH-PHOENIX-Weather 2014T dataset show that SimulSLT achieves BLEU scores exceeding those of the latest end-to-end non-simultaneous sign language translation model while maintaining low latency, which demonstrates the effectiveness of our method.
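To make the wait-k policy and the re-encode step concrete, below is a minimal sketch of wait-k simultaneous decoding over segment-level units (such as those produced by a boundary predictor). All function and parameter names (`wait_k_decode`, `encode`, `decode_step`, `source_stream`) are hypothetical placeholders, not the authors' released code; the sketch only illustrates the read/write schedule and the idea of re-encoding everything read so far before each output token.

```python
from typing import Callable, Iterator, List


def wait_k_decode(
    source_stream: Iterator[object],          # incoming video segments (e.g. gloss-level chunks)
    k: int,                                   # segments to wait before emitting the first token
    encode: Callable[[List[object]], object], # re-encodes all segments read so far
    decode_step: Callable[[object, List[str]], str],  # predicts the next target token
    eos: str = "<eos>",
    max_len: int = 100,
) -> List[str]:
    """Hypothetical wait-k schedule: token t may be emitted once k + t - 1 segments are read."""
    read: List[object] = []
    output: List[str] = []
    stream = iter(source_stream)
    finished_reading = False

    while len(output) < max_len:
        # READ action: the next token (index len(output) + 1) needs k + len(output) segments.
        while not finished_reading and len(read) < k + len(output):
            try:
                read.append(next(stream))
            except StopIteration:
                finished_reading = True

        # WRITE action: re-encode the prefix read so far, then predict one token.
        memory = encode(read)
        token = decode_step(memory, output)
        if token == eos:
            break
        output.append(token)

    return output
```

In this sketch, re-encoding the full prefix at every write step is what lets earlier video features interact with newly arrived ones, which is the intuition behind the re-encode method described above.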