This article presents an original method for Text-to-Sign Translation. It compensates for data scarcity using a domain-specific parallel corpus of alignments between text and hierarchical formal descriptions of Sign Language videos in AZee. Based on the detection of similarities in the source text, the proposed algorithm recursively exploits matches and substitutions of aligned segments to build multiple candidate translations for a novel statement. This generative approach helps preserve Sign Language structures as much as possible, rather than falling back on literal translations too quickly. The resulting translations take the form of AZee expressions, designed to be used as input to avatar synthesis systems. We present a test set tailored to showcase the method's potential for expressiveness and generation of idiomatic target language, and we report its observed limitations. Finally, this work opens prospects on how to evaluate translation and linguistic aspects such as accuracy and grammatical fluency.
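To make the recursive match-and-substitute idea concrete, here is a minimal sketch, assuming a toy corpus format: each alignment pairs a text segment (optionally with a "<X>" slot) with an AZee expression carrying the same slot. The corpus entries, the slot notation, and the matching strategy are all illustrative assumptions, not the paper's actual data structures or API.

```python
from dataclasses import dataclass

@dataclass
class Alignment:
    text: str   # source-language segment, possibly containing a "<X>" slot
    azee: str   # aligned AZee expression, with the same "<X>" slot

# Hypothetical aligned corpus: exact entries plus one templated entry.
CORPUS = [
    Alignment("hello", ":hello"),
    Alignment("the train is late", ":info-late(:train)"),
    Alignment("the <X> is late", ":info-late(<X>)"),
    Alignment("bus", ":bus"),
]

def translate(text, depth=0, max_depth=3):
    """Return candidate AZee expressions for `text`.

    Exact matches are collected first; then templated alignments are
    tried, recursively translating whatever fills the slot.  Preferring
    large aligned structures before decomposing into smaller segments
    approximates the strategy of preserving Sign Language structures
    instead of falling back on literal translation too quickly.
    """
    candidates = [a.azee for a in CORPUS if a.text == text]
    if depth >= max_depth:
        return candidates
    for a in CORPUS:
        if "<X>" not in a.text:
            continue
        prefix, suffix = a.text.split("<X>")
        if (text.startswith(prefix) and text.endswith(suffix)
                and len(text) > len(prefix) + len(suffix)):
            gap = text[len(prefix):len(text) - len(suffix)]
            # Substitute each translation of the gap into the template.
            for sub in translate(gap, depth + 1, max_depth):
                candidates.append(a.azee.replace("<X>", sub))
    return candidates

print(translate("the bus is late"))  # [':info-late(:bus)']
```

In this sketch, "the bus is late" has no exact alignment, so the templated segment "the <X> is late" is matched and the slot filler "bus" is translated recursively, yielding the candidate `:info-late(:bus)`; a real system would rank multiple such candidates rather than return them unordered.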