The effective receptive field of a fully convolutional neural network is an important consideration when designing an architecture, as it determines the region of the input that can influence each output activation. We propose a neural network module, extending traditional skip connections, called the translated skip connection. Translated skip connections geometrically increase the receptive field of an architecture with negligible impact on both the size of the parameter space and the computational complexity. By embedding translated skip connections into a benchmark architecture, we demonstrate that our module matches or outperforms four other approaches to expanding the effective receptive fields of fully convolutional neural networks. We confirm this result across five contemporary image segmentation datasets from disparate domains, including the detection of COVID-19 infection, segmentation of aerial imagery, common object segmentation, and segmentation for self-driving cars.