Deep learning models such as the Transformer are often constructed by heuristics and experience. To provide a complementary foundation, in this work we study the following problem: Is it possible to find an energy function underlying the Transformer model, such that descent steps along this energy correspond to the Transformer forward pass? By finding such a function, we can reinterpret Transformers as the unfolding of an interpretable optimization process across iterations. This unfolding perspective has frequently been adopted in the past to elucidate simpler deep models such as MLPs and CNNs; however, obtaining a similar equivalence for more complex models with self-attention mechanisms, like the Transformer, has thus far remained elusive. To this end, we first outline several major obstacles and then provide companion techniques to at least partially address them, demonstrating for the first time a close association between energy function minimization and deep layers with self-attention. This interpretation contributes to our intuition about and understanding of Transformers, while potentially laying the groundwork for new model designs.
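To make the unfolding perspective concrete, the following is a minimal illustrative sketch (not the paper's Transformer construction) of the classical case referenced above for simpler models: iterative shrinkage-thresholding (ISTA) for a sparse-coding energy, where each descent iteration has the form of a recurrent network layer (a linear map followed by a soft-threshold nonlinearity), and the energy is guaranteed to be non-increasing across "layers". All variable names and problem sizes here are arbitrary choices for the demonstration.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1; acts like a shifted ReLU nonlinearity."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

rng = np.random.default_rng(0)
A = rng.normal(size=(20, 50)) / np.sqrt(20)          # dictionary (toy)
z_true = rng.normal(size=50) * (rng.random(50) < 0.1) # sparse ground truth
x = A @ z_true                                        # observed signal

lam = 0.1
eta = 1.0 / np.linalg.norm(A, 2) ** 2  # step size 1/L, L = Lipschitz const of grad

def energy(z):
    """The energy being minimized: data fit + sparsity penalty."""
    return 0.5 * np.linalg.norm(A @ z - x) ** 2 + lam * np.sum(np.abs(z))

# Unfolded iterations: each loop body is one "layer" of the network.
z = np.zeros(50)
energies = []
for _ in range(100):
    z = soft_threshold(z - eta * A.T @ (A @ z - x), eta * lam)
    energies.append(energy(z))

# Descent steps along the energy correspond to forward-pass layers:
# the energy never increases from one layer to the next.
assert all(e2 <= e1 + 1e-9 for e1, e2 in zip(energies, energies[1:]))
```

The question posed in this work is whether an analogous energy exists whose descent steps reproduce self-attention layers, which is substantially harder than the MLP/CNN case sketched here.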