Ever since their inception, Transformers have supplanted traditional sequence models in many tasks, such as NLP, image classification, and video/audio processing, owing to their fast training and superior performance. Much of this merit results from positional encoding and multi-head attention. However, Transformers fall short in learning long-range dependencies, mainly due to the quadratic complexity, in both time and space, with respect to context length. Consequently, over the past five years, a myriad of methods has been proposed to make Transformers more efficient. In this work, we first take a step back to study and compare existing solutions to long-sequence modeling in terms of their pure mathematical formulation. Specifically, we summarize them using a unified template, given their shared nature as token-mixing mechanisms. Through benchmarks, we then demonstrate that long context lengths do yield better performance, albeit application-dependent, and that traditional Transformer models fall short in taking advantage of long-range dependencies. Next, inspired by emerging sparse models of huge capacity, we propose a machine learning system for handling million-scale dependencies. As a proof of concept, we evaluate the performance of one essential component of this system, namely the distributed multi-head attention. We show that our algorithm can scale up attention computation by almost $40\times$ using four GeForce RTX 4090 GPUs, compared to the vanilla multi-head attention mechanism. We believe this study is an instrumental step towards modeling million-scale dependencies.
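To make the distributed multi-head attention idea concrete, the following is a minimal single-process PyTorch sketch, not the paper's actual implementation: the query sequence is split into shards, each shard (standing in for one GPU worker) attends over the full keys and values, and the shard outputs are concatenated, which corresponds to an all-gather in a real multi-device system. The function name `sharded_attention` and the parameter `num_shards` are hypothetical names introduced for illustration.

```python
import torch

def sharded_attention(q, k, v, num_shards):
    """Toy sequence-sharded attention (hypothetical sketch).

    Each query shard plays the role of one worker/GPU that attends
    over the full keys and values; per-shard memory for the score
    matrix shrinks by a factor of num_shards.
    """
    d = q.shape[-1]
    outputs = []
    for q_shard in q.chunk(num_shards, dim=-2):  # one chunk per simulated worker
        scores = q_shard @ k.transpose(-2, -1) / d ** 0.5  # scaled dot-product
        outputs.append(torch.softmax(scores, dim=-1) @ v)
    return torch.cat(outputs, dim=-2)  # stands in for an all-gather across devices

# Usage: 8 heads, sequence length 1024, head dim 64, 4 simulated workers.
q = torch.randn(8, 1024, 64)
k = torch.randn(8, 1024, 64)
v = torch.randn(8, 1024, 64)
out = sharded_attention(q, k, v, num_shards=4)
assert out.shape == (8, 1024, 64)
```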