Transformer-based models have delivered impressive results on many tasks, particularly vision and language tasks. When training such models, conventional configurations are typically adopted. For example, the base model is often configured with a hidden dimension (i.e., model width) of 768 and 12 transformer layers (i.e., model depth). In this paper, we revisit these conventional configurations. Through theoretical analysis and experimental evaluation, we show that the masked autoencoder is effective in alleviating the over-smoothing issue in deep transformer training. Based on this finding, we propose Bamboo, the idea of using deeper and narrower transformer configurations for masked autoencoder training. On ImageNet, with such a simple change in configuration, the re-designed model achieves 87.1% top-1 accuracy and outperforms SoTA models such as MAE and BEiT. On language tasks, the re-designed model outperforms BERT with the default configuration by 1.1 points on average on the GLUE benchmark.
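As a rough illustration of what "deeper and narrower" means in terms of parameter budget, the sketch below compares the conventional base configuration (width 768, depth 12) against a hypothetical deeper-and-narrower variant with a matched encoder parameter count. The variant's width and depth are illustrative assumptions only, not the configuration reported in the paper, and the `TransformerConfig` and `approx_encoder_params` names are ours.

```python
from dataclasses import dataclass


@dataclass
class TransformerConfig:
    """Minimal encoder configuration: width = hidden size, depth = number of layers."""
    width: int   # hidden dimension (model width)
    depth: int   # number of transformer layers (model depth)

    def approx_encoder_params(self) -> int:
        """Rough parameter count for a stack of standard transformer blocks:
        ~4*width^2 for the attention projections plus ~8*width^2 for a
        4x-expansion FFN per layer, ignoring embeddings, biases, and layer norms."""
        return self.depth * 12 * self.width ** 2


# Conventional base configuration (e.g., BERT-base / ViT-B style).
conventional = TransformerConfig(width=768, depth=12)

# Hypothetical "deeper and narrower" configuration with a matched parameter
# budget (illustrative values only, not the actual Bamboo configuration).
deeper_narrower = TransformerConfig(width=384, depth=48)

print(f"conventional:    ~{conventional.approx_encoder_params() / 1e6:.1f}M params")
print(f"deeper/narrower: ~{deeper_narrower.approx_encoder_params() / 1e6:.1f}M params")
```

Under this crude estimate, halving the width while quadrupling the depth keeps the encoder parameter count roughly constant, which is the kind of trade-off the abstract refers to as a "simple change in configuration."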