Invariances to translations have imbued convolutional neural networks with powerful generalization properties. However, we often do not know a priori what invariances are present in the data, or to what extent a model should be invariant to a given symmetry group. We show how to \emph{learn} invariances and equivariances by parameterizing a distribution over augmentations and optimizing the training loss simultaneously with respect to the network parameters and the augmentation parameters. With this simple procedure, we can recover the correct set and extent of invariances, from a large space of augmentations, on image classification, regression, segmentation, and molecular property prediction, using training data alone.
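To make the procedure concrete, below is a minimal sketch of the idea in PyTorch, specialized to learning the extent of rotation invariance. The augmentation distribution is a uniform distribution over rotation angles whose half-width is a learnable parameter; angles are sampled via the reparameterization trick so the training loss is differentiable with respect to both the network weights and the augmentation parameter. The class and parameter names (`LearnedRotationAugment`, `log_width`, `reg`) are illustrative, not from the paper, and the regularizer encouraging a broader augmentation distribution is one plausible choice rather than a prescribed one.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LearnedRotationAugment(nn.Module):
    """Uniform distribution over rotations with a learnable half-width.

    Angles are sampled by reparameterization (angle = u * width with
    u ~ Uniform(-1, 1)), so gradients flow from the task loss into
    `log_width`, the parameter controlling the extent of invariance.
    """
    def __init__(self):
        super().__init__()
        # Start near zero width, i.e. close to no augmentation.
        self.log_width = nn.Parameter(torch.tensor(-2.0))

    def forward(self, x):
        width = F.softplus(self.log_width)               # half-range in radians
        u = torch.rand(x.size(0), device=x.device) * 2 - 1
        theta = u * width                                # angle ~ Uniform(-width, width)
        cos, sin = torch.cos(theta), torch.sin(theta)
        zeros = torch.zeros_like(theta)
        # Batched 2x3 affine matrices for rotation about the image center.
        mats = torch.stack([
            torch.stack([cos, -sin, zeros], dim=1),
            torch.stack([sin,  cos, zeros], dim=1),
        ], dim=1)
        grid = F.affine_grid(mats, x.shape, align_corners=False)
        return F.grid_sample(x, grid, align_corners=False)

def train_step(model, augment, optimizer, x, y, reg=0.01):
    """One joint update of network weights and augmentation parameters."""
    optimizer.zero_grad()
    logits = model(augment(x))
    # Task loss, minus an (illustrative) regularizer that rewards a
    # broader augmentation distribution when the data tolerates it.
    loss = F.cross_entropy(logits, y) - reg * F.softplus(augment.log_width)
    loss.backward()
    optimizer.step()
    return loss.item()
```

A single optimizer over both parameter groups realizes the "simultaneous" optimization described above, e.g. `torch.optim.Adam(list(model.parameters()) + list(augment.parameters()), lr=1e-3)`: if the task is rotation-invariant the learned width grows toward the full rotation range, and if it is not, the width stays near zero.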