The success of multi-head self-attentions (MSAs) for computer vision is now indisputable. However, little is known about how MSAs work. We present fundamental explanations to help better understand the nature of MSAs. In particular, we demonstrate the following properties of MSAs and Vision Transformers (ViTs): (1) MSAs improve not only accuracy but also generalization by flattening the loss landscapes. Such improvement is primarily attributable to their data specificity, not long-range dependency. On the other hand, ViTs suffer from non-convex losses. Large datasets and loss landscape smoothing methods alleviate this problem; (2) MSAs and Convs exhibit opposite behaviors. For example, MSAs are low-pass filters, but Convs are high-pass filters. Therefore, MSAs and Convs are complementary; (3) Multi-stage neural networks behave like a series connection of small individual models. In addition, MSAs at the end of a stage play a key role in prediction. Based on these insights, we propose AlterNet, a model in which Conv blocks at the end of a stage are replaced with MSA blocks. AlterNet outperforms CNNs not only in large data regimes but also in small data regimes. The code is available at https://github.com/xxxnell/how-do-vits-work.
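Below is a minimal sketch, not the authors' implementation (see the repository above for that), of the AlterNet idea described in the abstract: within a stage, the final Conv block is swapped for an MSA block. The names `ConvBlock`, `MSABlock`, and `alter_stage` are illustrative assumptions, not identifiers from the paper or its code.

```python
# Hypothetical sketch of an AlterNet-style stage: Conv blocks followed by one MSA block.
import torch
import torch.nn as nn


class ConvBlock(nn.Module):
    """A plain residual 3x3 conv block (local, high-pass-like operator)."""
    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.norm = nn.BatchNorm2d(channels)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return x + self.act(self.norm(self.conv(x)))


class MSABlock(nn.Module):
    """Multi-head self-attention over spatial tokens (global, low-pass-like operator)."""
    def __init__(self, channels, heads=4):
        super().__init__()
        self.norm = nn.LayerNorm(channels)
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)

    def forward(self, x):
        b, c, h, w = x.shape
        tokens = x.flatten(2).transpose(1, 2)            # (B, H*W, C)
        q = self.norm(tokens)
        y, _ = self.attn(q, q, q)                        # self-attention with residual
        y = (tokens + y).transpose(1, 2).reshape(b, c, h, w)
        return y


def alter_stage(channels, depth):
    """Build one stage in which the *last* Conv block is replaced by an MSA block."""
    blocks = [ConvBlock(channels) for _ in range(depth - 1)]
    blocks.append(MSABlock(channels))
    return nn.Sequential(*blocks)


if __name__ == "__main__":
    stage = alter_stage(channels=64, depth=3)
    out = stage(torch.randn(2, 64, 16, 16))
    print(out.shape)  # torch.Size([2, 64, 16, 16])
```

The design choice illustrated here follows property (3) in the abstract: since MSAs at the end of a stage play a key role in prediction, only the last block of each stage is alternated to self-attention while earlier blocks remain convolutional.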