Context-aware machine translation models are designed to leverage contextual information, but often fail to do so. As a result, they inaccurately disambiguate pronouns and polysemous words that require context for resolution. In this paper, we ask several questions: What contexts do human translators use to resolve ambiguous words? Are models paying large amounts of attention to the same context? What if we explicitly train them to do so? To answer these questions, we introduce SCAT (Supporting Context for Ambiguous Translations), a new English-French dataset comprising supporting context words for 14K translations that professional translators found useful for pronoun disambiguation. Using SCAT, we perform an in-depth analysis of the context used to disambiguate, examining positional and lexical characteristics of the supporting words. Furthermore, we measure the degree of alignment between the model's attention scores and the supporting context from SCAT, and apply a guided attention strategy to encourage agreement between the two.
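To make the guided attention idea concrete, below is a minimal sketch of one way such supervision could be implemented: a KL-style penalty that pushes the model's attention distribution toward the human-marked supporting tokens. The function name `guided_attention_loss`, the tensor shapes, the `support_mask` encoding, and the weighting scalar `lambda_attn` are illustrative assumptions, not the paper's exact formulation.

```python
import torch

def guided_attention_loss(attn_weights, support_mask, eps=1e-8):
    """Assumed sketch of attention supervision against SCAT-style annotations.

    attn_weights: (batch, tgt_len, src_len) softmax-normalized attention scores.
    support_mask: (batch, tgt_len, src_len) binary mask, 1 where a source/context
                  token was marked as supporting context by a translator.
    """
    # Turn the binary mask into a reference distribution that places
    # uniform probability mass on the supporting tokens.
    ref = support_mask / (support_mask.sum(dim=-1, keepdim=True) + eps)
    # KL(ref || attn): penalize attention mass placed away from supporting words.
    kl = (ref * (torch.log(ref + eps) - torch.log(attn_weights + eps))).sum(dim=-1)
    # Average only over target positions that actually have supporting context.
    has_support = support_mask.sum(dim=-1) > 0
    return kl[has_support].mean() if has_support.any() else attn_weights.new_zeros(())

# Hypothetical combined training objective:
# loss = nll_loss + lambda_attn * guided_attention_loss(attn_weights, support_mask)
```

In this sketch, the regularizer is simply added to the standard translation loss with a tunable weight, so positions without any annotated supporting context contribute nothing to the penalty.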