Recent advances in Transformers have come with a huge demand for computing resources, highlighting the importance of developing efficient training techniques that make Transformer training faster, cheaper, and more accurate through the efficient use of computation and memory resources. This survey provides the first systematic overview of the efficient training of Transformers, covering recent progress in acceleration arithmetic and hardware, with a focus on the former. We analyze and compare methods that save computation and memory costs for intermediate tensors during training, together with techniques for hardware/algorithm co-design. Finally, we discuss challenges and promising areas for future research.