Understanding the fundamental mechanism behind the success of transformer networks is still an open problem in the deep learning literature. Although their remarkable performance is mostly attributed to the self-attention mechanism, the literature still lacks a solid analysis of these networks and an interpretation of the functions they learn. To this end, we study the training problem of attention/transformer networks and introduce a novel convex analytic approach to improve the understanding and optimization of these networks. In particular, we first introduce a convex alternative to the self-attention mechanism and reformulate the regularized training problem of transformer networks with this convex attention. We then cast the reformulation as a convex optimization problem that is interpretable and easier to optimize. Moreover, as a byproduct of our convex analysis, we reveal an implicit regularization mechanism that promotes sparsity across tokens. Consequently, we not only improve the optimization of attention/transformer networks but also provide a solid theoretical understanding of the functions they learn. We further demonstrate the effectiveness of our theory through several numerical experiments.
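For reference, the self-attention mechanism mentioned above is the standard (non-convex) formulation from the transformer literature; the convex alternative is developed in the body of the paper. A minimal statement of the standard mechanism, with the usual notation assumed here (token embeddings $X$ and trainable weights $W_Q, W_K, W_V$), is
\[
\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{Q K^\top}{\sqrt{d_k}}\right) V,
\qquad Q = X W_Q,\quad K = X W_K,\quad V = X W_V ,
\]
where $d_k$ is the key dimension. The softmax applied to the bilinear interaction $Q K^\top$ is what renders the resulting training objective non-convex in the weights, which is the difficulty the convex reformulation in this work is designed to address.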