Few ideas have enjoyed as large an impact on deep learning as convolution. For any problem involving pixels or spatial representations, common intuition holds that convolutional neural networks may be appropriate. In this paper we show a striking counterexample to this intuition via the seemingly trivial coordinate transform problem, which simply requires learning a mapping between coordinates in (x,y) Cartesian space and coordinates in one-hot pixel space. Although convolutional networks would seem appropriate for this task, we show that they fail spectacularly. We demonstrate and carefully analyze the failure first on a toy problem, at which point a simple fix becomes obvious. We call this solution CoordConv, which works by giving convolution access to its own input coordinates through the use of extra coordinate channels. Without sacrificing the computational and parametric efficiency of ordinary convolution, CoordConv allows networks to learn either perfect translation invariance or varying degrees of translation dependence, as required by the task. CoordConv solves the coordinate transform problem with perfect generalization while being 150 times faster and using 10--100 times fewer parameters than convolution. This stark contrast raises the question: to what extent has this inability of convolution persisted insidiously inside other tasks, subtly hampering performance from within? A complete answer to this question will require further investigation, but we show preliminary evidence that swapping convolution for CoordConv can improve models on a diverse set of tasks. Using CoordConv in a GAN produced less mode collapse, as the transform between high-level spatial latents and pixels became easier to learn. A Faster R-CNN detection model trained on MNIST detection showed 24% better IoU when using CoordConv, and in the RL domain agents playing Atari games benefit significantly from the use of CoordConv layers.
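The core mechanism described above, appending coordinate channels to the input of an ordinary convolution, can be sketched as follows. This is a minimal illustration, not the paper's reference implementation: the function name and the list-of-lists tensor representation are our own, and the two channels hold row and column indices linearly scaled to [-1, 1].

```python
def add_coord_channels(x):
    """Append two coordinate channels to a feature map.

    x is a nested list of shape (C, H, W); the result has shape
    (C + 2, H, W), where the extra channels contain the row index i
    and column index j, each rescaled to the range [-1, 1]. A
    CoordConv layer would pass this augmented tensor to a standard
    convolution, giving the filters access to their own location.
    """
    h, w = len(x[0]), len(x[0][0])
    # Row (i) channel: constant along each row, -1 at top, +1 at bottom.
    i_channel = [[(2 * r / (h - 1) - 1) if h > 1 else 0.0 for _ in range(w)]
                 for r in range(h)]
    # Column (j) channel: constant along each column, -1 at left, +1 at right.
    j_channel = [[(2 * c / (w - 1) - 1) if w > 1 else 0.0 for c in range(w)]
                 for _ in range(h)]
    return list(x) + [i_channel, j_channel]


# Example: a single-channel 4x4 feature map gains two coordinate channels.
feature = [[[0.0] * 4 for _ in range(4)]]
augmented = add_coord_channels(feature)  # shape (3, 4, 4)
```

Because the added channels are deterministic functions of position, the convolution that follows can either ignore them (recovering ordinary translation invariance) or weight them to learn position-dependent behavior, which is the flexibility the abstract refers to.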