Convolutional neural networks lack shift equivariance due to the presence of downsampling layers. In image classification, adaptive polyphase downsampling (APS-D) was recently proposed to make CNNs perfectly shift invariant. However, in networks used for image reconstruction tasks, it cannot by itself restore shift equivariance. We address this problem by proposing adaptive polyphase upsampling (APS-U), a non-linear extension of conventional upsampling, which allows CNNs with symmetric encoder-decoder architectures (for example, U-Net) to exhibit perfect shift equivariance. With MRI and CT reconstruction experiments, we show that networks containing APS-D/U layers exhibit state-of-the-art equivariance performance without sacrificing image reconstruction quality. In addition, unlike prior methods such as data augmentation and anti-aliasing, the gains in equivariance obtained from APS-D/U also extend to images outside the training distribution.
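The core idea can be illustrated with a minimal NumPy sketch. This is not the authors' implementation; it assumes stride-2 sampling on a 2D array and circular shifts, and the function names (`aps_downsample`, `aps_upsample`) are illustrative. APS-D keeps the polyphase component with the largest l_p norm (so the choice moves with a shifted input), and APS-U places the samples back on the grid positions selected by the matching APS-D layer:

```python
import numpy as np

def aps_downsample(x, p=2):
    """Adaptive polyphase downsampling (APS-D), minimal 2D sketch.

    Splits x into its four stride-2 polyphase components and keeps the
    one with the largest l_p norm; the selected index k is returned so
    the matching upsampling layer can restore the same grid.
    """
    comps = [x[i::2, j::2] for i in range(2) for j in range(2)]
    norms = [np.linalg.norm(c.ravel(), ord=p) for c in comps]
    k = int(np.argmax(norms))  # index of the selected polyphase component
    return comps[k], k

def aps_upsample(y, k, out_shape):
    """Adaptive polyphase upsampling (APS-U), minimal 2D sketch.

    Places y back on the polyphase grid chosen by APS-D (index k) and
    fills the remaining positions with zeros.
    """
    i, j = divmod(k, 2)
    x = np.zeros(out_shape, dtype=y.dtype)
    x[i::2, j::2] = y
    return x
```

Because the component selection tracks the input, the down/up pair commutes with (circular) shifts: shifting the input and then applying APS-D followed by APS-U gives the same result as shifting the output, which is the shift equivariance property the paper targets.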