Two popular types of machine translation (MT) systems are phrase-based and neural MT systems. Both types of systems are composed of multiple complex models or layers, each of which learns different linguistic aspects of the source language. However, for some of these models and layers, it is not clear which linguistic phenomena are learned or how this information is learned. For phrase-based MT systems, it is often clear what information is learned by each model, and the question is rather how this information is learned, especially for the phrase reordering model. For neural MT systems, the situation is even more complex, since in many cases it is not exactly clear what information is learned or how it is learned. To shed light on which linguistic phenomena are captured by MT systems, we analyze the behavior of important models in both phrase-based and neural MT systems. We consider phrase reordering models from phrase-based MT systems to investigate which words within a phrase have the greatest impact on its reordering behavior. Additionally, to contribute to the interpretability of neural MT systems, we study the behavior of the attention model, which is a key component of neural MT systems and the closest model in functionality to phrase reordering models in phrase-based systems. Together, the attention model and the encoder hidden state representations are the main components that encode source-side linguistic information in neural MT. Accordingly, we also analyze the information captured in the encoder hidden state representations of a neural MT system, and investigate the extent to which syntactic and lexical-semantic information from the source side is captured by the hidden state representations of different neural MT architectures.
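To make the role of the attention model concrete, the following is a minimal sketch of how an attention mechanism weights encoder hidden states, assuming simple dot-product scoring; the shapes and random inputs are hypothetical, and real NMT systems learn these representations jointly with the rest of the network.

```python
import numpy as np

def attention(decoder_state, encoder_states):
    """Return attention weights over source positions and the context vector.

    decoder_state:  shape (d,)   current decoder hidden state
    encoder_states: shape (n, d) one hidden state per source word
    """
    scores = encoder_states @ decoder_state          # (n,) alignment scores
    scores = scores - scores.max()                   # subtract max for numerical stability
    weights = np.exp(scores) / np.exp(scores).sum()  # softmax: a distribution over source words
    context = weights @ encoder_states               # (d,) weighted sum of encoder states
    return weights, context

# Hypothetical example: 5 source words, hidden size 8.
rng = np.random.default_rng(0)
enc = rng.normal(size=(5, 8))
dec = rng.normal(size=8)
w, c = attention(dec, enc)
```

The attention weights form a soft alignment between the current target position and the source words, which is why the attention model is the closest analogue in neural MT to the phrase reordering models of phrase-based systems.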