Ever since their inception, Transformers have overtaken traditional sequence models in many tasks, such as NLP, image classification, and video/audio processing, owing to their fast training and superior performance. Much of this merit is attributable to positional encoding and multi-head attention. However, Transformers fall short in learning long-range dependencies, mainly due to their quadratic time and space complexity with respect to context length. Consequently, over the past five years, a myriad of methods has been proposed to make Transformers more efficient. In this work, we first take a step back to study and compare existing solutions to long-sequence modeling in terms of their pure mathematical formulation. Specifically, we summarize them using a unified template, given their shared nature of token mixing. Through benchmarks, we then demonstrate that long context length does yield better performance, albeit application-dependent, and that traditional Transformer models fall short in taking advantage of long-range dependencies. Next, inspired by emerging sparse models of huge capacity, we propose a machine learning system for handling million-scale dependencies. As a proof of concept, we evaluate the performance of one essential component of this system, namely the distributed multi-head attention. We show that our algorithm can scale up attention computation by almost $40\times$ using four GeForce RTX 4090 GPUs, compared to the vanilla multi-head attention mechanism. We believe this study is an instrumental step towards modeling million-scale dependencies.
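For reference, the quadratic bottleneck mentioned above arises from standard scaled dot-product attention (standard notation, not specific to this work): for a context of $n$ tokens with head dimension $d_k$,
$$
\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{Q K^\top}{\sqrt{d_k}}\right) V, \qquad Q, K, V \in \mathbb{R}^{n \times d_k},
$$
where materializing the $n \times n$ score matrix $Q K^\top$ costs $O(n^2 d_k)$ time and $O(n^2)$ memory, which is precisely the cost that efficient long-sequence methods seek to reduce.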