To perform well on unseen and potentially out-of-distribution samples, it is desirable for machine learning models to have a predictable response with respect to transformations affecting the factors of variation of the input. Here, we study the relative importance of several types of inductive biases towards such predictable behavior: the choice of data, their augmentations, and model architectures. Invariance is commonly achieved through hand-engineered data augmentation, but do standard data augmentations address transformations that explain variations in real data? While prior work has focused on synthetic data, we attempt here to characterize the factors of variation in a real dataset, ImageNet, and study the invariance of both standard residual networks and the recently proposed vision transformer with respect to changes in these factors. We show that standard augmentation relies on a precise combination of translation and scale, with translation recapturing most of the performance improvement, despite the (approximate) translation invariance built into convolutional architectures such as residual networks. In fact, we find that scale and translation invariance is similar across residual network and vision transformer models despite their markedly different architectural inductive biases. We show that the training data itself is the main source of invariance, and that data augmentation only further increases the learned invariances. Notably, the invariances learned during training align with the ImageNet factors of variation we found. Finally, we find that the main factors of variation in ImageNet mostly relate to appearance and are specific to each class.
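To make the notion of invariance concrete, below is a minimal sketch of how one might probe whether a pretrained model's predicted class survives translations and rescalings of the input. This is an illustration under stated assumptions, not the paper's measurement protocol: the choice of torchvision's `resnet50`, the shift and scale grids, and the random placeholder batch are all assumptions introduced here.

```python
# Minimal sketch: probe prediction invariance to translation and scale.
# Assumes torchvision >= 0.13; the model choice, shift/scale grids, and
# placeholder inputs are illustrative assumptions, not the paper's setup.
import torch
import torchvision.transforms.functional as TF
from torchvision import models

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2).eval()

def predict(x):
    """Predicted class indices for a batch of images."""
    with torch.no_grad():
        return model(x).argmax(dim=1)

def translation_invariance(x, max_shift=16, step=8):
    """Fraction of horizontal pixel shifts that leave the predicted class unchanged."""
    base = predict(x)
    agree = []
    for dx in range(-max_shift, max_shift + 1, step):
        shifted = TF.affine(x, angle=0.0, translate=[dx, 0], scale=1.0, shear=[0.0])
        agree.append((predict(shifted) == base).float().mean().item())
    return sum(agree) / len(agree)

def scale_invariance(x, scales=(0.8, 0.9, 1.1, 1.25)):
    """Same probe, but rescaling the image content instead of shifting it."""
    base = predict(x)
    agree = []
    for s in scales:
        scaled = TF.affine(x, angle=0.0, translate=[0, 0], scale=s, shear=[0.0])
        agree.append((predict(scaled) == base).float().mean().item())
    return sum(agree) / len(agree)

# Placeholder batch of shape (N, 3, 224, 224); real normalized ImageNet
# images would be used in practice.
x = torch.randn(4, 3, 224, 224)
print("translation invariance:", translation_invariance(x))
print("scale invariance:", scale_invariance(x))
```

The same probe can be run unchanged on a vision transformer (e.g., torchvision's `vit_b_16`), which is one way to compare invariance across the two architecture families discussed above.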