The transformer has recently achieved impressive results on various tasks. To further improve its effectiveness and efficiency, existing works follow two trains of thought: (1) going wider by scaling up to more trainable parameters; (2) going shallower by parameter sharing or model compression along the depth. However, larger models usually do not scale well when fewer tokens are available for training, and advanced parallelism is required when the model is extremely large. Smaller models usually achieve inferior performance compared to the original transformer due to the loss of representation power. In this paper, to achieve better performance with fewer trainable parameters, we propose a framework to deploy trainable parameters efficiently by going wider instead of deeper. Specifically, we scale along the model width by replacing the feed-forward network (FFN) with a mixture-of-experts (MoE) layer. We then share the MoE layers across transformer blocks while using individual layer normalizations. Such deployment plays the role of transforming various semantic representations, which makes the model more parameter-efficient and effective. To evaluate our framework, we design WideNet and evaluate it on ImageNet-1K. Our best model outperforms Vision Transformer (ViT) by $1.46\%$ with $0.72\times$ trainable parameters. Using $0.46\times$ and $0.13\times$ parameters, our WideNet can still surpass ViT and ViT-MoE by $0.83\%$ and $2.08\%$, respectively.
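To make the parameter-sharing scheme described above concrete, the following is a minimal PyTorch sketch: one MoE feed-forward layer (and one attention layer) is reused across all transformer blocks, while each block keeps its own layer normalizations. All names here (`SimpleMoE`, `WideNetBlockStack`, `num_experts`, the top-1 routing) are illustrative assumptions for this sketch, not the paper's reference implementation.

```python
import torch
import torch.nn as nn


class SimpleMoE(nn.Module):
    """Top-1 routed mixture-of-experts feed-forward layer (illustrative)."""

    def __init__(self, dim: int, hidden: int, num_experts: int = 4):
        super().__init__()
        self.gate = nn.Linear(dim, num_experts)
        self.experts = nn.ModuleList(
            [nn.Sequential(nn.Linear(dim, hidden), nn.GELU(), nn.Linear(hidden, dim))
             for _ in range(num_experts)]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, tokens, dim); send every token to its best-scoring expert.
        scores = self.gate(x).softmax(dim=-1)          # (B, T, E)
        idx = scores.argmax(dim=-1)                    # (B, T)
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            mask = idx == e
            if mask.any():
                out[mask] = expert(x[mask])
        # Scale by the gate probability so the routing stays differentiable.
        return out * scores.gather(-1, idx.unsqueeze(-1))


class WideNetBlockStack(nn.Module):
    """Stack of blocks sharing one attention and one MoE layer,
    with individual (non-shared) layer normalizations per block."""

    def __init__(self, dim: int = 256, depth: int = 6, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)  # shared
        self.moe = SimpleMoE(dim, hidden=4 * dim)                        # shared
        self.norms1 = nn.ModuleList([nn.LayerNorm(dim) for _ in range(depth)])
        self.norms2 = nn.ModuleList([nn.LayerNorm(dim) for _ in range(depth)])

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        for ln1, ln2 in zip(self.norms1, self.norms2):
            h = ln1(x)
            x = x + self.attn(h, h, h, need_weights=False)[0]
            x = x + self.moe(ln2(x))
        return x


if __name__ == "__main__":
    tokens = torch.randn(2, 16, 256)            # (batch, tokens, dim)
    print(WideNetBlockStack()(tokens).shape)    # torch.Size([2, 16, 256])
```

In this sketch the trainable parameter count is independent of depth (only the per-block layer norms grow with it), which is the intuition behind going wider instead of deeper.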