Continuous convolution has recently gained prominence due to its ability to handle irregularly sampled data and to model long-term dependencies. The promising experimental results obtained with large convolutional kernels have further catalyzed its development, since continuous parameterizations can construct large kernels very efficiently. Leveraging neural networks, more specifically multilayer perceptrons (MLPs), is by far the most prevalent approach to implementing continuous convolution. However, this approach has drawbacks, such as high computational cost, complex hyperparameter tuning, and limited descriptive power of the filters. This paper proposes an alternative way to build continuous convolution without neural networks, yielding better computational efficiency and improved performance. We present self-moving point representations, in which weight parameters move freely and interpolation schemes are used to implement continuous functions. When applied to construct convolutional kernels as a drop-in replacement in existing frameworks, they show improved performance in our experiments. Owing to its lightweight structure, we are the first to demonstrate the effectiveness of continuous convolution in a large-scale setting, e.g., ImageNet, with improvements over the prior art. Our code is available at https://github.com/sangnekim/SMPConv
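To make the core idea concrete, below is a minimal sketch of a self-moving point kernel in PyTorch: a small set of points with learnable coordinates and weights is turned into a dense kernel of arbitrary resolution via an interpolation scheme. This is an illustrative reconstruction, not the authors' exact implementation; the hat-function interpolation, the `radius` hyperparameter, and all names are assumptions made for the example.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfMovingPointKernel(nn.Module):
    """Illustrative sketch of a continuous 2D kernel parameterized by a
    small set of points whose coordinates are themselves learnable
    ("self-moving"). An interpolation scheme turns the sparse points into
    a dense kernel of any requested size. Hyperparameters are hypothetical."""

    def __init__(self, num_points: int = 16, radius: float = 0.3):
        super().__init__()
        # Learnable point coordinates in the normalized kernel domain [-1, 1]^2.
        self.positions = nn.Parameter(torch.rand(num_points, 2) * 2 - 1)
        # One scalar weight per point (per-channel weights omitted for brevity).
        self.weights = nn.Parameter(torch.randn(num_points) * 0.1)
        self.radius = radius

    def forward(self, kernel_size: int) -> torch.Tensor:
        # Dense sampling grid over the kernel domain; the resolution is free,
        # which is what makes the representation continuous.
        coords = torch.linspace(-1.0, 1.0, kernel_size, device=self.weights.device)
        gy, gx = torch.meshgrid(coords, coords, indexing="ij")
        grid = torch.stack([gx, gy], dim=-1)                # (K, K, 2)
        # Distance from every grid location to every point.
        diff = grid.unsqueeze(2) - self.positions           # (K, K, P, 2)
        dist = diff.norm(dim=-1)                            # (K, K, P)
        # Linear (hat-function) interpolation: each point influences a disk
        # of the given radius; gradients flow into both weights and positions,
        # so the points "move" during training.
        influence = F.relu(1.0 - dist / self.radius)        # (K, K, P)
        return (influence * self.weights).sum(dim=-1)       # (K, K)

# Hypothetical usage: render the same parameters at two kernel sizes and
# apply one as a depthwise-style convolution on a single-channel input.
smp = SelfMovingPointKernel()
small, large = smp(kernel_size=7), smp(kernel_size=31)
x = torch.randn(1, 1, 64, 64)
y = F.conv2d(x, small.view(1, 1, 7, 7), padding=3)
```

Because the same point parameters can be rendered at any resolution, this construction sidesteps the per-coordinate MLP evaluations of neural-field-based continuous convolutions, which is consistent with the abstract's claim of a lightweight structure.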