We present a multigrid-in-channels (MGIC) approach that tackles the quadratic growth in the number of parameters with respect to the number of channels in standard convolutional neural networks (CNNs). Our approach thereby addresses the redundancy in CNNs that is also exposed by the recent success of lightweight CNNs. Lightweight CNNs can achieve comparable accuracy to standard CNNs with fewer parameters; however, the number of weights still scales quadratically with the CNN's width. To address this, our MGIC architectures replace each CNN block with an MGIC counterpart that utilizes a hierarchy of nested grouped convolutions of small group size. Hence, our proposed architectures scale linearly with respect to the network's width while retaining the full coupling of channels found in standard CNNs. Our extensive experiments on image classification, segmentation, and point cloud classification show that applying this strategy to architectures such as ResNet and MobileNetV3 reduces the number of parameters while obtaining similar or better accuracy.
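The parameter-count argument behind this claim can be sketched with simple arithmetic (a rough illustration only, not the authors' implementation; the kernel size and widths below are illustrative assumptions): a standard convolution couples all pairs of input and output channels, so its weight count grows quadratically with the width, while a grouped convolution with a fixed small group size grows only linearly.

```python
KERNEL = 3  # illustrative spatial kernel size

def standard_conv_params(channels: int, k: int = KERNEL) -> int:
    # A standard conv layer couples every input channel to every
    # output channel: parameter count is quadratic in the width.
    return channels * channels * k * k

def grouped_conv_params(channels: int, group_size: int, k: int = KERNEL) -> int:
    # With a fixed group size s, the channels split into channels // s
    # groups, each an s-by-s convolution: parameter count is linear
    # in the width.
    groups = channels // group_size
    return groups * group_size * group_size * k * k

# Doubling the width quadruples standard-conv parameters ...
print(standard_conv_params(256) // standard_conv_params(128))      # 4
# ... but only doubles grouped-conv parameters at a fixed group size.
print(grouped_conv_params(256, 8) // grouped_conv_params(128, 8))  # 2
```

The hierarchy of nested grouped convolutions described in the abstract is what restores the cross-group channel coupling that a single grouped convolution, as sketched here, would lose.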