Transformers have quickly shone in the computer vision world since the emergence of Vision Transformers (ViTs). The dominant role of convolutional neural networks (CNNs) appears to be challenged by increasingly effective transformer-based models. Very recently, a couple of advanced convolutional models have struck back with large kernels motivated by the local-window attention mechanism, showing appealing performance and efficiency. While one of them, RepLKNet, impressively manages to scale the kernel size to 31x31 with improved performance, the performance starts to saturate as the kernel size continues to grow, compared to the scaling trend of advanced ViTs such as Swin Transformer. In this paper, we explore the possibility of training convolutions with kernels larger than 31x31 and test whether the performance gap can be eliminated by strategically enlarging convolutions. This study culminates in a recipe for applying extremely large kernels from the perspective of sparsity, which can smoothly scale kernels up to 61x61 with improved performance. Building on this recipe, we propose Sparse Large Kernel Network (SLaK), a pure CNN architecture equipped with sparse factorized 51x51 kernels that performs on par with or better than state-of-the-art hierarchical Transformers and modern ConvNet architectures such as ConvNeXt and RepLKNet on ImageNet classification, as well as on a wide range of downstream tasks including semantic segmentation on ADE20K, object detection on PASCAL VOC 2007, and object detection/segmentation on MS COCO.
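To make the "sparse factorized kernels" idea concrete, below is a minimal PyTorch sketch of one plausible reading of the factorization: the square 51x51 kernel is approximated by two parallel rectangular depthwise convolutions (51x5 and 5x51) whose outputs are summed. The specific band width, the additive combination, and the `FactorizedLargeKernelConv` name are illustrative assumptions rather than the paper's exact specification, and the dynamic weight sparsity implied by "sparse" is omitted for brevity.

```python
import torch
import torch.nn as nn

class FactorizedLargeKernelConv(nn.Module):
    """Sketch of a factorized large-kernel block: a 51x51 receptive
    field approximated by two parallel rectangular depthwise
    convolutions (51x5 and 5x51). Kernel shapes and the parallel-sum
    combination are illustrative assumptions; the sparse-training
    component of the recipe is not modeled here."""

    def __init__(self, dim, kernel_size=51, band_width=5):
        super().__init__()
        pad_long = kernel_size // 2
        pad_short = band_width // 2
        # Tall-and-narrow branch: kernel_size x band_width depthwise conv
        self.conv_kxb = nn.Conv2d(
            dim, dim, (kernel_size, band_width),
            padding=(pad_long, pad_short), groups=dim)
        # Short-and-wide branch: band_width x kernel_size depthwise conv
        self.conv_bxk = nn.Conv2d(
            dim, dim, (band_width, kernel_size),
            padding=(pad_short, pad_long), groups=dim)

    def forward(self, x):
        # Summing the two rectangular branches covers a cross-shaped
        # subset of the full kernel_size x kernel_size window at a
        # fraction of the parameter count of a dense square kernel.
        return self.conv_kxb(x) + self.conv_bxk(x)

x = torch.randn(1, 64, 56, 56)
block = FactorizedLargeKernelConv(64)
print(block(x).shape)  # torch.Size([1, 64, 56, 56])
```

One motivation for this kind of decomposition is cost: two 51x5 depthwise bands use far fewer parameters than a dense 51x51 kernel, which is what makes scaling beyond 31x31 tractable in the first place.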