Dilated convolution is essentially a convolution with a wider kernel, created by regularly inserting spaces between the kernel elements. In this article, we present a new version of the dilated convolution in which the spacings are made learnable via backpropagation through an interpolation technique. We call this method "Dilated Convolution with Learnable Spacings" (DCLS) and we generalize its approach to the n-dimensional convolution case. However, our main focus here will be the 2D case, for which we developed two implementations: a naive one that constructs the dilated kernel, suitable for small dilation rates, and a more time- and memory-efficient one that uses a modified version of the "im2col" algorithm. We then illustrate how this technique improves the accuracy of existing architectures on the semantic segmentation task on the Pascal VOC 2012 dataset, via a simple drop-in replacement of the classical dilated convolutional layers by DCLS ones. Furthermore, we show that DCLS reduces the number of learnable parameters of the depthwise convolutions used in the recent ConvMixer architecture by a factor of 3, with no or very low reduction in accuracy, by replacing large dense kernels with sparse DCLS ones. The code of the method is based on PyTorch and available at: https://github.com/K-H-Ismail/Dilated-Convolution-with-Learnable-Spacings-PyTorch.
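To make the interpolation idea concrete, here is a minimal sketch of how a dilated kernel with learnable, fractional element positions can be built in PyTorch. This is not the authors' implementation (their efficient version modifies im2col); the function name, shapes, and bilinear scattering scheme below are our own illustrative assumptions. Each kernel element's weight is spread over the four nearest integer grid positions with bilinear coefficients, so gradients flow to both the weights and the positions:

```python
import torch

def build_dcls_kernel(weights: torch.Tensor, pos: torch.Tensor, kernel_size: int) -> torch.Tensor:
    """Construct a (kernel_size x kernel_size) dilated kernel from sparse elements.

    weights: (n,) learnable element values.
    pos:     (n, 2) learnable fractional (row, col) positions inside the kernel.
    Each weight is scattered onto the 4 nearest integer positions with
    bilinear coefficients, making the positions differentiable.
    """
    K = torch.zeros(kernel_size, kernel_size, dtype=weights.dtype)
    p = pos.clamp(0.0, kernel_size - 1 - 1e-6)   # keep positions inside the grid
    p0 = p.floor().long()                         # lower-left integer corner
    frac = p - p0.to(p.dtype)                     # fractional offsets in [0, 1)
    for dy in (0, 1):
        for dx in (0, 1):
            wy = frac[:, 0] if dy else 1 - frac[:, 0]
            wx = frac[:, 1] if dx else 1 - frac[:, 1]
            idx_y = (p0[:, 0] + dy).clamp(max=kernel_size - 1)
            idx_x = (p0[:, 1] + dx).clamp(max=kernel_size - 1)
            K.index_put_((idx_y, idx_x), weights * wy * wx, accumulate=True)
    return K

# Example: one element halfway between rows 1 and 2, one pinned at the corner.
w = torch.tensor([1.0, 2.0])
pos = torch.tensor([[1.5, 2.0], [0.0, 0.0]])
K = build_dcls_kernel(w, pos, kernel_size=5)
```

The resulting `K` can then be passed to `torch.nn.functional.conv2d` like any dense kernel; the sparse parameterization is what yields the parameter savings mentioned above.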