In contrast to fully connected networks, Convolutional Neural Networks (CNNs) achieve efficiency by learning weights associated with local filters of finite spatial extent. An implication of this is that a filter may know what it is looking at, but not where it is positioned in the image. In this paper, we first test this hypothesis and reveal that a surprising degree of absolute position information is encoded in commonly used CNNs. We show that zero padding drives CNNs to encode position information in their internal representations, while a lack of padding precludes position encoding. This gives rise to deeper questions about the role of position information in CNNs: (i) What boundary heuristics enable optimal position encoding for downstream tasks? (ii) Does position encoding affect the learning of semantic representations? (iii) Does position encoding always improve performance? To provide answers, we perform the largest case study to date on the role that padding and border heuristics play in CNNs. We design novel tasks that allow us to quantify boundary effects as a function of the distance to the border. Experiments with numerous semantic objectives reveal the effect of the border on semantic representations. Finally, we demonstrate the implications of these findings on multiple real-world tasks, showing that position information can both help and hurt performance.
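To make the padding claim concrete, here is a minimal sketch (our illustration, not code from the paper), assuming PyTorch: on a constant input, a zero-padded convolution produces border-dependent activations, so absolute position leaks into the features, while the same filter without padding responds uniformly.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)  # any random 3x3 filter suffices for this demonstration

x = torch.ones(1, 1, 8, 8)  # constant image: content carries no positional cue

conv_pad = nn.Conv2d(1, 1, kernel_size=3, padding=1, bias=False)
conv_nopad = nn.Conv2d(1, 1, kernel_size=3, padding=0, bias=False)
conv_nopad.weight.data.copy_(conv_pad.weight.data)  # identical filter in both

with torch.no_grad():
    y_pad = conv_pad(x)      # 8x8 output: border windows see zeros from padding
    y_nopad = conv_nopad(x)  # 6x6 output: every window sees the same all-ones patch

# With zero padding, activations near the border differ from the interior, so a
# downstream layer can read off absolute position; without padding the response
# is uniform and position is unrecoverable from a constant input.
print("zero-padded, distinct activation values:", y_pad.unique().numel())    # > 1
print("unpadded,    distinct activation values:", y_nopad.unique().numel())  # 1
```

This toy setup mirrors the mechanism the abstract describes: zero padding injects a border signal that internal representations can exploit to encode absolute position.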