In this paper we provide, to the best of our knowledge, the first comprehensive approach for incorporating various masking mechanisms into Transformer architectures in a scalable way. We show that recent results on linear causal attention (Choromanski et al., 2021) and log-linear RPE-attention (Luo et al., 2021) are special cases of this general mechanism. However, by casting the problem as a topological (graph-based) modulation of unmasked attention, we obtain several previously unknown results, including efficient d-dimensional RPE-masking and graph-kernel masking. We leverage many mathematical techniques, ranging from spectral analysis through dynamic programming and random walks to new algorithms for solving Markov processes on graphs. We provide a corresponding empirical evaluation.
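To make the central notion concrete, the sketch below shows masked attention in its brute-force form: the unmasked attention matrix is modulated elementwise by a mask, here a Toeplitz mask encoding relative positions (RPE). This is only a minimal illustrative definition under assumed notation (the function name rpe_masked_attention and the decay function f are hypothetical); it deliberately materializes the full quadratic attention matrix, which is exactly what the scalable methods discussed in the paper avoid.

```python
import numpy as np

def rpe_masked_attention(Q, K, V, f):
    """Brute-force masked softmax attention.

    The mask M modulates the unmasked attention matrix elementwise,
    with M[i, j] = f(i - j): a Toeplitz mask encoding relative position.
    This O(L^2) reference implementation only illustrates the definition;
    scalable variants compute the same quantity without forming A or M.
    """
    L, d = Q.shape
    A = np.exp(Q @ K.T / np.sqrt(d))        # unmasked (unnormalized) attention scores
    idx = np.arange(L)
    M = f(idx[:, None] - idx[None, :])      # RPE mask: entry depends only on i - j
    A = A * M                               # elementwise modulation of attention
    return (A / A.sum(axis=1, keepdims=True)) @ V

# Usage example with an exponentially decaying relative-position mask.
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((8, 4)) for _ in range(3))
out = rpe_masked_attention(Q, K, V, lambda r: np.exp(-0.1 * np.abs(r)))
```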