Thanks to the use of convolution and pooling layers, convolutional neural networks were long thought to be shift-invariant. However, recent works have shown that the output of a CNN can change significantly with small shifts in the input: a problem caused by the presence of downsampling (stride) layers. The existing solutions rely either on data augmentation or on anti-aliasing, both of which have limitations and neither of which enables perfect shift invariance. Additionally, the gains obtained from these methods do not extend to image patterns not seen during training. To address these challenges, we propose adaptive polyphase sampling (APS), a simple sub-sampling scheme that allows convolutional neural networks to achieve 100% consistency in classification performance under shifts, without any loss in accuracy. With APS, the networks exhibit perfect consistency to shifts even before training, making it the first approach that renders convolutional neural networks truly shift-invariant.
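The abstract does not spell out the mechanism, but the core idea behind adaptive polyphase sampling can be sketched as follows: a stride-2 downsampling layer splits a feature map into four polyphase components (the four stride-2 sub-grids), and instead of always keeping a fixed one, APS keeps the component with the largest norm. Since a shift of the input merely permutes which sub-grid carries which samples, selecting by norm recovers the same samples regardless of the shift. The function name and the choice of the l2 norm below are illustrative, not the paper's exact implementation.

```python
import numpy as np

def aps_downsample(x, p=2):
    """Downsample a 2D feature map by stride 2, keeping the polyphase
    component (sub-grid) with the largest l_p norm.

    A circular shift of x only permutes the four components, so the
    max-norm selection returns the same set of samples either way.
    """
    # The four stride-2 polyphase components of x.
    components = [x[i::2, j::2] for i in (0, 1) for j in (0, 1)]
    norms = [np.linalg.norm(c.ravel(), ord=p) for c in components]
    return components[int(np.argmax(norms))]
```

For example, downsampling an input and a circularly shifted copy of it yields the same sample values (up to an internal re-ordering), which is the consistency property the abstract refers to.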