Automating sign language translation (SLT) is a challenging real-world application. Despite its societal importance, however, research progress in the field remains limited. Crucially, existing methods that yield viable performance require laborious-to-obtain gloss sequence ground truth. In this paper, we attenuate this need by introducing an end-to-end SLT model that does not entail explicit use of glosses; the model only needs text ground truth. This is in stark contrast to existing end-to-end models, which use gloss sequence ground truth either as a modality recognized at an intermediate model stage or as a parallel output process trained jointly with the SLT model. Our approach constitutes a Transformer network with a novel type of layer that combines: (i) local winner-takes-all (LWTA) layers with stochastic winner sampling, in place of conventional ReLU layers, (ii) stochastic weights whose posterior distributions are estimated via variational inference, and (iii) a weight compression technique at inference time that exploits the estimated posterior variance to perform massive, almost lossless compression. We demonstrate that our approach reaches the currently best reported BLEU-4 score on the PHOENIX 2014T benchmark, without using glosses for model training and with a memory footprint reduced by more than 70%.
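To make mechanism (i) concrete, below is a minimal sketch of a stochastic LWTA layer in PyTorch: hidden units are partitioned into competing blocks, and within each block a single "winner" is sampled so that only its linear activation passes through while the rest are zeroed. The block size, temperature, and the use of the Gumbel-softmax relaxation for differentiable winner sampling are illustrative assumptions here, not a verbatim reproduction of the paper's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class StochasticLWTA(nn.Module):
    """Sketch of a stochastic LWTA layer: units are grouped into
    `num_blocks` blocks of `units_per_block` competitors; one winner
    per block is sampled and the losers are suppressed to zero."""

    def __init__(self, in_features: int, num_blocks: int,
                 units_per_block: int, tau: float = 0.67):
        super().__init__()
        self.num_blocks = num_blocks
        self.U = units_per_block
        self.tau = tau  # relaxation temperature (assumed value)
        self.linear = nn.Linear(in_features, num_blocks * units_per_block)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.linear(x)                                   # (..., K*U)
        h = h.view(*h.shape[:-1], self.num_blocks, self.U)   # (..., K, U)
        if self.training:
            # Sample a (relaxed, straight-through) one-hot winner per block,
            # with competition driven by the linear activations themselves.
            mask = F.gumbel_softmax(h, tau=self.tau, hard=True, dim=-1)
        else:
            # Deterministic winner at inference: hard argmax per block.
            mask = F.one_hot(h.argmax(dim=-1), self.U).to(h.dtype)
        return (h * mask).flatten(-2)                        # losers -> 0
```

Used in place of a `Linear` + `ReLU` pair inside a Transformer feed-forward sub-layer, this keeps the output dimensionality unchanged while making the activation pattern stochastic and sparse.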
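Mechanism (iii) can likewise be illustrated with a short sketch. Given per-weight variational posteriors with mean `mu` and standard deviation `sigma`, one standard variance-aware strategy is to prune weights whose posterior signal-to-noise ratio is low, since high variance signals that the posterior assigns little importance to the weight. The SNR criterion and threshold below are assumptions for illustration; the paper's exact compression rule may differ.

```python
import torch

def compress_by_posterior_variance(mu: torch.Tensor,
                                   sigma: torch.Tensor,
                                   snr_threshold: float = 1.0):
    """Sketch of posterior-variance-driven compression: weights with
    |mu| / sigma below `snr_threshold` (an illustrative cutoff) are
    deemed uninformative and zeroed, yielding a sparse point estimate."""
    snr = mu.abs() / sigma.clamp_min(1e-8)       # per-weight signal-to-noise
    keep = snr >= snr_threshold
    compressed = torch.where(keep, mu, torch.zeros_like(mu))
    pruned_fraction = 1.0 - keep.float().mean().item()
    return compressed, pruned_fraction
```

Storing only the surviving means (e.g., in a sparse format) is what yields the large memory savings at inference time, with little loss in translation quality when the pruned weights were genuinely uncertain.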