Overfitting is a fundamental problem in machine learning in general, and in deep learning in particular. To reduce overfitting and improve generalization in image classification, some methods enforce invariance to a group of transformations, such as rotations and reflections. However, since not all objects necessarily exhibit the same invariances, it seems desirable to let the network learn the useful level of invariance from the data. To this end, motivated by self-supervision, we introduce an architecture enhancement for existing neural network models based on input transformations, termed 'TransNet', together with a training algorithm suitable for it. Our model can be employed during training only and then pruned for prediction, resulting in an architecture equivalent to the base model. We show that the pruned model improves performance on various datasets while exhibiting improved generalization, which is achieved by enforcing soft invariance on the convolutional kernels of the last layer of the base model. Theoretical analysis is provided to support the proposed method.
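To make the architectural idea concrete, the following is a minimal illustrative sketch, not the authors' implementation: it assumes image rotations as the transformation group, gives a shared backbone one classification head per transformation, routes each transformed copy of the input to its matching head during training, and discards all non-identity heads at prediction time so the pruned network matches the base model. All names here (TransNetSketch, transnet_loss, ROTATIONS) are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

ROTATIONS = 4  # rotations by 0, 90, 180, 270 degrees (assumed transformation group)

class TransNetSketch(nn.Module):
    def __init__(self, backbone: nn.Module, feat_dim: int, num_classes: int):
        super().__init__()
        self.backbone = backbone  # shared feature extractor, returns (N, feat_dim)
        # one linear classification head per input transformation
        self.heads = nn.ModuleList(
            [nn.Linear(feat_dim, num_classes) for _ in range(ROTATIONS)]
        )

    def forward(self, x: torch.Tensor, t: int = 0) -> torch.Tensor:
        feats = self.backbone(x)
        return self.heads[t](feats)  # route to the head of transformation t

    def prune(self) -> nn.Module:
        # Keep only the identity-transformation head for prediction,
        # recovering an architecture equivalent to the base model.
        return nn.Sequential(self.backbone, self.heads[0])

def transnet_loss(model: TransNetSketch, x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    # Supervised loss averaged over all rotated copies of the batch,
    # each copy classified by its own head.
    loss = x.new_zeros(())
    for t in range(ROTATIONS):
        x_t = torch.rot90(x, k=t, dims=(2, 3))  # rotate images by t * 90 degrees
        loss = loss + F.cross_entropy(model(x_t, t), y)
    return loss / ROTATIONS
```

In this sketch, soft invariance could additionally be encouraged by penalizing the distance between the heads' weights (or, for convolutional last layers, between transformed kernels), which is the role the abstract attributes to the last-layer constraint.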