Transformer model architectures have garnered immense interest lately due to their effectiveness across a range of domains such as language, vision, and reinforcement learning. In the field of natural language processing, for example, Transformers have become an indispensable staple of the modern deep learning stack. Recently, a dizzying number of "X-former" models have been proposed - Reformer, Linformer, Performer, Longformer, to name a few - which improve upon the original Transformer architecture, many of them targeting computational and memory efficiency. With the aim of helping the avid researcher navigate this flurry, this paper characterizes a large and thoughtful selection of recent efficiency-flavored "X-former" models, providing an organized and comprehensive overview of existing work across multiple domains.
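To make the efficiency motivation concrete, the following is a minimal NumPy sketch, not taken from the survey itself, contrasting standard scaled dot-product attention, whose score matrix grows quadratically with sequence length n, with a Linformer-style low-rank variant that projects keys and values down to a fixed length k. All shapes, function names, and the random projection matrices E and F are illustrative assumptions.

```python
# Minimal sketch (illustrative assumptions throughout): quadratic full
# attention vs. a Linformer-style low-rank approximation of attention.
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def full_attention(Q, K, V):
    # Standard scaled dot-product attention: the (n, n) score matrix is
    # what makes memory and compute quadratic in sequence length n.
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)            # (n, n)
    return softmax(scores) @ V               # (n, d)

def linformer_attention(Q, K, V, E, F):
    # Linformer-style idea: project keys and values along the sequence
    # axis down to k << n, so the score matrix is only (n, k).
    d = Q.shape[-1]
    K_proj = E @ K                           # (k, d)
    V_proj = F @ V                           # (k, d)
    scores = Q @ K_proj.T / np.sqrt(d)       # (n, k)
    return softmax(scores) @ V_proj          # (n, d)

rng = np.random.default_rng(0)
n, d, k = 512, 64, 32                        # sequence length, model dim, projected length
Q, K, V = (rng.standard_normal((n, d)) for _ in range(3))
E, F = (rng.standard_normal((k, n)) / np.sqrt(n) for _ in range(2))

out_full = full_attention(Q, K, V)           # materializes an n x n = 262,144-entry matrix
out_lin = linformer_attention(Q, K, V, E, F) # materializes only an n x k = 16,384-entry matrix
print(out_full.shape, out_lin.shape)         # (512, 64) (512, 64)
```

Both functions return an output of the same shape; the low-rank variant trades an exact attention distribution for score matrices whose size is linear rather than quadratic in n, which is the kind of trade-off many of the surveyed "X-former" models make.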