Embedding matrices are key components in neural natural language processing (NLP) models, responsible for providing numerical representations of input tokens.\footnote{In this paper, words and subwords are referred to as \textit{tokens}, and the term \textit{embedding} refers only to embeddings of inputs.} In this paper, we analyze the impact and utility of such matrices in the context of neural machine translation (NMT). We show that stripping syntactic and semantic information from word embeddings and running NMT systems with random embeddings is not as damaging as it might initially sound. We also show how incorporating only a limited amount of task-specific knowledge from fully trained embeddings can boost the performance of NMT systems. Our findings demonstrate that, in exchange for a negligible deterioration in performance, any NMT model can be run with partially random embeddings. Working with such structures means a minimal memory requirement, as there is no longer a need to store large embedding tables, which is a significant gain in industrial and on-device settings. We evaluated our embeddings on translating {English} into {German} and {French} and achieved a $5.3$x compression rate. Despite having considerably smaller architectures, our models are in some cases even able to outperform state-of-the-art baselines.