Attention mechanisms are a central property of cognitive systems, allowing them to deploy cognitive resources selectively and flexibly. Attention has long been studied in the neurosciences, and there are numerous phenomenological models that try to capture its core properties. Recently, attentional mechanisms have become a dominant architectural choice in machine learning and are the central innovation of Transformers. The prevailing intuition and formalism underlying their development has drawn on the ideas of keys and queries in database management systems. In this work, we propose an alternative Bayesian foundation for attentional mechanisms and show how it unifies different attentional architectures in machine learning. This formulation allows us to identify commonalities across different machine learning attention architectures, as well as to suggest a bridge to those developed in neuroscience. We hope this work will guide more sophisticated intuitions about the key properties of attention architectures and suggest new ones.
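As a minimal sketch for orientation (our illustration, not the paper's own derivation), the key-query formalism referred to above, and one possible Bayesian reading of it, can be written as follows. We use the standard Transformer notation: a query $q$, key-value pairs $(k_i, v_i)$ for $i = 1, \dots, N$, and key dimension $d$; the generative model below is an assumption chosen for illustration.
\[
\operatorname{Attn}\!\left(q, \{k_i, v_i\}\right) \;=\; \sum_{i=1}^{N} \alpha_i \, v_i,
\qquad
\alpha_i \;=\; \frac{\exp\!\left(q^\top k_i / \sqrt{d}\right)}{\sum_{j=1}^{N} \exp\!\left(q^\top k_j / \sqrt{d}\right)}.
\]
Under the hedged Bayesian reading, one posits a latent index $z$ with uniform prior $p(z = i) = 1/N$ and likelihood $p(q \mid z = i) \propto \exp\!\left(q^\top k_i / \sqrt{d}\right)$; Bayes' rule then gives $\alpha_i = p(z = i \mid q)$, so the attention output is the posterior expectation $\mathbb{E}[v_z \mid q]$. On this view, the softmax weights are posterior probabilities over which stored item generated the query, rather than lookup scores in a database sense.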