Attention is an increasingly popular mechanism used in a wide range of neural architectures. The mechanism itself has been realized in a variety of formats. However, because of the fast-paced advances in this domain, a systematic overview of attention is still missing. In this article, we define a unified model for attention architectures in natural language processing, with a focus on those designed to work with vector representations of textual data. We propose a taxonomy of attention models according to four dimensions: the representation of the input, the compatibility function, the distribution function, and the multiplicity of the input and/or output. We present examples of how prior information can be exploited in attention models and discuss ongoing research efforts and open challenges in the area, providing the first extensive categorization of the vast body of literature in this exciting domain.
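To make the taxonomy's core dimensions concrete, below is a minimal sketch of a generic attention step over vector representations, assuming a dot-product compatibility function and a softmax distribution function; the function and variable names are illustrative only and are not taken from the article.

```python
import numpy as np

def attention(query, keys, values):
    """Generic attention: compatibility scores -> distribution -> weighted sum.

    query:  (d,)   vector representation of the query
    keys:   (n, d) vector representations of the input elements
    values: (n, m) values associated with each input element
    """
    # Compatibility function (here assumed: dot product of query with each key)
    scores = keys @ query                      # shape (n,)
    # Distribution function (here assumed: softmax over the scores)
    weights = np.exp(scores - scores.max())
    weights = weights / weights.sum()          # shape (n,)
    # Weighted combination of the values: the attention output (context vector)
    return weights @ values                    # shape (m,)

# Illustrative usage with random vectors
rng = np.random.default_rng(0)
q = rng.standard_normal(4)
K = rng.standard_normal((3, 4))
V = rng.standard_normal((3, 8))
context = attention(q, K, V)
print(context.shape)  # (8,)
```

Other attention variants in the taxonomy swap in different compatibility functions (e.g., additive or multiplicative scoring) or distribution functions (e.g., sparse alternatives to softmax) within this same overall structure.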