Current sequence-to-sequence models are trained to minimize cross-entropy and use softmax to compute the locally normalized probabilities over target sequences. While this setup has led to strong results in a variety of tasks, one unsatisfying aspect is its length bias: models give high scores to short, inadequate hypotheses and often make the empty string the argmax -- the so-called "cat got your tongue" problem. Recently proposed entmax-based sparse sequence-to-sequence models present a possible solution, since they can shrink the search space by assigning zero probability to bad hypotheses, but their ability to handle word-level tasks with transformers has never been tested. In this work, we show that entmax-based models effectively solve the "cat got your tongue" problem, removing a major source of model error for neural machine translation. In addition, we generalize label smoothing, a critical regularization technique, to the broader family of Fenchel-Young losses, which includes both cross-entropy and the entmax losses. Our resulting label-smoothed entmax loss models set a new state of the art on multilingual grapheme-to-phoneme conversion and deliver improvements and better calibration properties on cross-lingual morphological inflection and machine translation for 6 language pairs.
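To make the sparsity property concrete, the following minimal Python sketch (illustrative, not code from the paper) contrasts softmax with the entmax transformations. It assumes PyTorch and the open-source `entmax` package (`pip install entmax`); the logits are made-up values chosen only to show the effect.

```python
# Minimal sketch (illustrative, not code from the paper): softmax vs. entmax.
# Assumes PyTorch and the open-source `entmax` package (`pip install entmax`),
# which implements entmax15 (alpha = 1.5) and sparsemax (alpha = 2).
import torch
from entmax import entmax15, sparsemax

# Made-up logits for four candidate tokens; the last one is a bad hypothesis.
logits = torch.tensor([2.0, 1.0, 0.2, -1.5])

p_softmax = torch.softmax(logits, dim=-1)  # dense: every token keeps nonzero mass
p_entmax = entmax15(logits, dim=-1)        # sparse: the low-scoring token gets exactly 0
p_sparsemax = sparsemax(logits, dim=-1)    # sparser still (alpha = 2)

print(p_softmax)    # all entries strictly positive
print(p_entmax)     # last entry is exactly 0.0
print(p_sparsemax)
```

At decoding time, a hypothesis whose probability hits exactly zero can be pruned outright; this is the sense in which entmax-based models shrink the search space and can rule out degenerate outputs such as the empty string.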