A true interpreting agent not only understands sign language and translates it to text, but also understands text and translates it to signs. Much of the AI work in sign language translation to date has focused on translating from signs to text. Toward the latter goal, we propose a text-to-sign translation model, SignNet, which exploits the notion of similarity (and dissimilarity) of visual signs during translation. The module presented here is one part of a dual-learning, two-task process involving text-to-sign (T2S) as well as sign-to-text (S2T). We currently implement SignNet as a single-channel architecture so that the output of the T2S task can be fed into S2T in a continuous dual-learning framework. By single channel, we refer to a single modality, the body pose joints. SignNet performs the T2S task using a novel metric embedding learning process that preserves the distances between sign embeddings in proportion to their dissimilarity. We also describe how to choose positive and negative examples of signs for similarity testing. In our analysis, we observe that metric embedding learning-based models perform significantly better than models trained with traditional losses when evaluated using BLEU scores. In the gloss-to-pose task, SignNet performed as well as its state-of-the-art (SoTA) counterparts, and it outperformed them in the text-to-pose task, showing noteworthy improvements in BLEU-1 through BLEU-4 scores (BLEU-1: 31->39, a ~26% improvement; BLEU-4: 10.43->11.84, a ~14% improvement) when tested on the popular RWTH PHOENIX-Weather-2014T benchmark dataset.
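The abstract's core idea, learning sign embeddings whose pairwise distances reflect sign dissimilarity using positive and negative examples, can be illustrated with a standard triplet margin loss. This is a minimal sketch of that general technique, not the paper's actual implementation; the function name, embedding dimensionality, and margin value are all illustrative assumptions.

```python
# Minimal sketch of metric embedding learning with a triplet margin loss:
# embeddings of similar signs are pulled together, dissimilar ones pushed
# apart by at least `margin`. All names and values are illustrative.
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Hinge loss on d(anchor, positive) - d(anchor, negative) + margin."""
    d_pos = np.linalg.norm(anchor - positive)  # distance to a similar sign
    d_neg = np.linalg.norm(anchor - negative)  # distance to a dissimilar sign
    return max(0.0, d_pos - d_neg + margin)

# Toy pose-joint embeddings (illustrative 4-D vectors).
anchor   = np.array([0.10, 0.20, 0.00, 0.50])
positive = np.array([0.12, 0.18, 0.02, 0.48])  # near-duplicate of the anchor sign
negative = np.array([0.90, -0.40, 0.70, -0.10])  # unrelated sign

# Loss is zero here because the positive is already much closer than the
# negative by more than the margin; swapping them yields a positive loss.
print(triplet_loss(anchor, positive, negative))
```

Minimizing this loss over many (anchor, positive, negative) triplets is one common way to make embedding distances track semantic dissimilarity, which is why the choice of positive and negative sign examples (discussed in the paper) matters so much.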