We propose FMMformers, a class of efficient and flexible transformers inspired by the fast multipole method (FMM), a celebrated algorithm for accelerating simulations of interacting particles. FMM decomposes particle-particle interaction into near-field and far-field components and then performs direct and coarse-grained computation, respectively. Similarly, FMMformers decompose the attention into near-field and far-field attention, modeling the near-field attention with a banded matrix and the far-field attention with a low-rank matrix. Computing the attention in FMMformers takes time and memory that scale linearly with the sequence length, whereas standard transformers suffer from quadratic complexity. We analyze and validate the advantage of FMMformers over the standard transformer on the Long Range Arena and language modeling benchmarks. FMMformers can even outperform the standard transformer in terms of accuracy by a significant margin. For instance, FMMformers achieve an average classification accuracy of $60.74\%$ over the five Long Range Arena tasks, which is significantly better than the standard transformer's average accuracy of $58.70\%$.
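To make the near-field/far-field decomposition concrete, the following NumPy sketch illustrates the idea under simplifying assumptions; it is not the authors' implementation. The near-field term is exact softmax attention restricted to a band of half-width `w` around the diagonal, the far-field term is a kernelized low-rank approximation with an arbitrary nonnegative feature map, and the two are combined with a fixed mixing weight `mix` (the feature map and the fixed weight are illustrative choices, not taken from the paper). The point is that neither term ever materializes the full $n \times n$ attention matrix.

```python
import numpy as np

def banded_softmax_attention(Q, K, V, w):
    """Near-field attention: exact softmax attention restricted to a band
    of half-width w around the diagonal. Cost is O(n * w * d)."""
    n, d = Q.shape
    out = np.zeros_like(V)
    for i in range(n):
        lo, hi = max(0, i - w), min(n, i + w + 1)
        scores = Q[i] @ K[lo:hi].T / np.sqrt(d)   # scores within the band only
        weights = np.exp(scores - scores.max())
        weights /= weights.sum()
        out[i] = weights @ V[lo:hi]
    return out

def lowrank_linear_attention(Q, K, V):
    """Far-field attention approximated by a low-rank (kernelized) form:
    softmax(QK^T)V is replaced by phi(Q) (phi(K)^T V) / normalizer,
    so the n x n attention matrix is never formed. Cost is O(n * d^2)."""
    phi = lambda X: np.maximum(X, 0.0) + 1e-6     # simple nonnegative feature map (assumption)
    Qf, Kf = phi(Q), phi(K)
    KV = Kf.T @ V                                 # (d, d), computed once
    normalizer = Qf @ Kf.sum(axis=0) + 1e-6       # (n,)
    return (Qf @ KV) / normalizer[:, None]

def fmm_attention(Q, K, V, w=16, mix=0.5):
    """Sketch of the decomposition: banded (direct) near-field attention
    plus a low-rank (coarse-grained) far-field term."""
    near = banded_softmax_attention(Q, K, V, w)
    far = lowrank_linear_attention(Q, K, V)
    return mix * near + (1.0 - mix) * far

# Toy usage: sequence length n = 256, head dimension d = 32.
n, d = 256, 32
rng = np.random.default_rng(0)
Q, K, V = rng.standard_normal((3, n, d))
out = fmm_attention(Q, K, V, w=16)
print(out.shape)  # (256, 32)
```

In this sketch the banded term costs $O(nwd)$ and the low-rank term $O(nd^2)$, so for fixed band width and head dimension the total cost grows linearly in the sequence length, matching the complexity claim in the abstract.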