Transformers have become central to recent advances in computer vision. However, training a vision Transformer (ViT) model from scratch can be resource-intensive and time-consuming. In this paper, we explore approaches to reduce the training cost of ViT models. We introduce algorithmic improvements that enable training a ViT model from scratch under limited hardware (1 GPU) and time (24 hours) budgets. First, we propose an efficient approach to add locality to the ViT architecture. Second, we develop a new image size curriculum learning strategy, which reduces the number of patches extracted from each image at the beginning of training. Finally, we propose a new variant of the popular ImageNet1k benchmark that adds hardware and time constraints. We evaluate our contributions on this benchmark and show that they significantly improve performance under the proposed training budget. We will share the code at https://github.com/BorealisAI/efficient-vit-training.
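To make the image size curriculum idea concrete, the sketch below shows one plausible way to schedule the training resolution, and hence the ViT patch count, over epochs. This is a minimal illustration, not the paper's implementation: the linear schedule shape, the resolutions (96 to 224), the patch size of 16, and the helper names (`image_size_at`, `train_transform`) are all assumptions for exposition.

```python
import torch
from torchvision import transforms


def image_size_at(epoch: int, total_epochs: int,
                  min_size: int = 96, max_size: int = 224,
                  patch: int = 16) -> int:
    """Linearly grow the training resolution over epochs, rounded to
    a multiple of the patch size so the ViT patch grid stays exact.
    (Illustrative schedule; the paper's actual curriculum may differ.)"""
    frac = min(epoch / max(total_epochs - 1, 1), 1.0)
    size = min_size + frac * (max_size - min_size)
    return int(round(size / patch)) * patch


def train_transform(size: int) -> transforms.Compose:
    """Standard augmentation pipeline at the current curriculum size."""
    return transforms.Compose([
        transforms.RandomResizedCrop(size),
        transforms.RandomHorizontalFlip(),
        transforms.ToTensor(),
    ])


# A 224x224 image with 16x16 patches yields 196 tokens, while 96x96
# early in training yields only 36, shrinking the quadratic
# self-attention cost during the cheap early epochs.
for epoch in [0, 10, 20, 29]:
    s = image_size_at(epoch, total_epochs=30)
    print(f"epoch {epoch}: size {s}, patches {(s // 16) ** 2}")
```

Because ViT self-attention scales quadratically with the number of tokens, training at reduced resolution early on saves compute exactly where the model least needs fine detail, which is the intuition behind scheduling image size rather than, say, batch size.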