Recurrent neural networks are effective models for processing sequences. However, their inherently sequential nature makes it difficult for them to learn long-term dependencies. As a solution, Vaswani et al. introduced the Transformer, a model based solely on the attention mechanism, which can relate any two positions of the input sequence and thus model arbitrarily long dependencies. The Transformer has improved the state of the art across numerous sequence-modelling tasks. However, its effectiveness comes at the cost of quadratic computational and memory complexity with respect to the sequence length, hindering its adoption. Fortunately, the deep learning community has long been interested in improving model efficiency, leading to a plethora of solutions such as parameter sharing, pruning, mixed precision, and knowledge distillation. Recently, researchers have addressed the Transformer's limitation directly by designing lower-complexity alternatives such as the Longformer, Reformer, Linformer, and Performer. However, given the wide range of solutions, it has become challenging for practitioners to determine which methods to apply to meet the desired trade-off between capacity, computation, and memory. This survey addresses this issue by investigating popular approaches for making the Transformer faster and lighter, and by providing a comprehensive explanation of each method's strengths, limitations, and underlying assumptions.
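The quadratic cost mentioned above stems from the attention score matrix, which has one entry for every pair of input positions. A minimal NumPy sketch of scaled dot-product attention (function name and dimensions chosen for illustration, not taken from any particular implementation) makes the n × n bottleneck explicit:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Q, K, V: (n, d) arrays. The score matrix S is (n, n):
    this pairwise matrix is the source of the Transformer's
    quadratic time and memory cost in the sequence length n."""
    d = Q.shape[-1]
    S = Q @ K.T / np.sqrt(d)                      # (n, n) pairwise scores
    P = np.exp(S - S.max(axis=-1, keepdims=True)) # numerically stable softmax
    P = P / P.sum(axis=-1, keepdims=True)         # rows sum to 1
    return P @ V                                  # (n, d) output

n, d = 512, 64
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((n, d)) for _ in range(3))
out = scaled_dot_product_attention(Q, K, V)
```

Doubling the sequence length quadruples the size of `S`; the lower-complexity models surveyed here (Longformer, Reformer, Linformer, Performer) each approximate or sparsify this matrix to avoid materialising it in full.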