We propose Waveformer, which learns the attention mechanism in the wavelet coefficient space, requires only linear time complexity, and enjoys universal approximation power. Specifically, we first apply a forward wavelet transform to project the input sequences onto multi-resolution orthogonal wavelet bases, then conduct nonlinear transformations (in this case, a random feature kernel) in the wavelet coefficient space, and finally reconstruct the representation in the input space via a backward wavelet transform. We note that other nonlinear transformations may be used; hence, we name this learning paradigm Wavelet transformatIon for Sequence lEarning (WISE). We emphasize the importance of the backward reconstruction in WISE -- without it, one would mix information from the input space and the coefficient space through skip connections, which is not mathematically sound. Compared with the Fourier transform used in recent works, the wavelet transform is more efficient in time complexity and better captures local and positional information; we further support this through ablation studies. Extensive experiments on seven long-range understanding datasets from the Long Range Arena benchmark and code understanding tasks demonstrate that (1) Waveformer achieves accuracy competitive with, and often better than, a number of state-of-the-art Transformer variants, and (2) WISE can boost the accuracy of various attention approximation methods without increasing the time complexity. Together, these results showcase the superiority of learning attention in the wavelet coefficient space over the input space.
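To make the three-step WISE pipeline concrete, the following is a minimal, illustrative sketch in Python using PyWavelets: a forward discrete wavelet transform, a toy elementwise random-feature map standing in for the random feature kernel in the coefficient space, and an inverse transform back to the input space. The wavelet choice ('db2'), decomposition level, feature count, and the specific random-feature map are assumptions for illustration, not the paper's exact configuration.

```python
import numpy as np
import pywt


def wise_block(x, wavelet="db2", level=3, num_features=64, seed=0):
    """Sketch of one WISE block on a 1-D sequence x:
    forward DWT -> nonlinear map in coefficient space -> inverse DWT."""
    rng = np.random.default_rng(seed)

    # 1) Forward wavelet transform: project x onto multi-resolution
    #    orthogonal wavelet bases (one coefficient array per level).
    coeffs = pywt.wavedec(x, wavelet, level=level)

    # 2) Nonlinear transformation in the wavelet coefficient space.
    #    Here: a toy random-feature map phi(c) = mean_k cos(c * w_k + b_k),
    #    a stand-in for the random feature kernel used by Waveformer.
    def random_feature_map(c):
        w = rng.normal(size=(num_features,))
        b = rng.uniform(0.0, 2.0 * np.pi, size=(num_features,))
        phi = np.cos(np.outer(c, w) + b)        # (len(c), num_features)
        return np.sqrt(2.0) * phi.mean(axis=1)  # back to shape (len(c),)

    coeffs = [random_feature_map(c) for c in coeffs]

    # 3) Backward (inverse) wavelet transform: reconstruct the
    #    representation in the input space, so downstream layers and
    #    skip connections operate in a single, consistent space.
    #    Truncate to len(x) since DWT padding can add a trailing sample.
    return pywt.waverec(coeffs, wavelet)[: len(x)]


if __name__ == "__main__":
    x = np.random.randn(128)
    y = wise_block(x)
    print(x.shape, y.shape)
```

Step 3 is the backward reconstruction the abstract emphasizes: because the block's output lives in the input space, residual/skip connections never mix input-space and coefficient-space representations.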