Feature reuse has been a key technique in lightweight convolutional neural network (CNN) design. Current methods usually utilize a concatenation operator to keep large channel numbers cheaply (and thus large network capacity) by reusing feature maps from other layers. Although concatenation is parameter- and FLOPs-free, its computational cost on hardware devices is non-negligible. To address this, this paper provides a new perspective on realizing feature reuse via the structural re-parameterization technique. A novel hardware-efficient RepGhost module is proposed for implicit feature reuse via re-parameterization, instead of using the concatenation operator. Based on the RepGhost module, we develop our efficient RepGhost bottleneck and RepGhostNet. Experiments on ImageNet and COCO benchmarks demonstrate that the proposed RepGhostNet is much more effective and efficient than GhostNet and MobileNetV3 on mobile devices. Specifically, our RepGhostNet surpasses GhostNet 0.5x by 2.5% Top-1 accuracy on the ImageNet dataset with fewer parameters and comparable latency on an ARM-based mobile phone.
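To make the core idea concrete, below is a minimal PyTorch sketch of the general structural re-parameterization pattern the abstract describes: during training, a convolution branch and a feature-reusing identity branch are summed; before deployment, both branches are fused into a single convolution, so the reuse becomes implicit and no extra concatenation or addition runs on device. This is an illustration under stated assumptions, not the paper's actual module: the class name `RepBlock`, the depthwise 3x3 convolution, and the BatchNorm-over-identity branch are choices made for the sketch.

```python
import torch
import torch.nn as nn

class RepBlock(nn.Module):
    """Sketch: train with two branches, fuse them into one conv for inference."""

    def __init__(self, channels: int):
        super().__init__()
        # Training-time branches: depthwise 3x3 conv + BN, and BN over identity.
        self.conv = nn.Conv2d(channels, channels, 3, padding=1,
                              groups=channels, bias=False)
        self.bn_conv = nn.BatchNorm2d(channels)
        self.bn_id = nn.BatchNorm2d(channels)  # identity branch reuses input features
        self.fused = None  # single conv produced by re-parameterization

    def forward(self, x):
        if self.fused is not None:
            return self.fused(x)  # deployment path: one conv, no extra branch op
        return self.bn_conv(self.conv(x)) + self.bn_id(x)

    @torch.no_grad()
    def reparameterize(self):
        # Fold each BN into an equivalent conv kernel/bias, then sum the branches.
        k_conv, b_conv = self._fuse_bn(self.conv.weight, self.bn_conv)
        c = self.conv.in_channels
        # The identity branch equals a depthwise 3x3 kernel with 1 at the center.
        k_id = torch.zeros(c, 1, 3, 3)
        k_id[:, 0, 1, 1] = 1.0
        k_id, b_id = self._fuse_bn(k_id, self.bn_id)
        self.fused = nn.Conv2d(c, c, 3, padding=1, groups=c, bias=True)
        self.fused.weight.copy_(k_conv + k_id)
        self.fused.bias.copy_(b_conv + b_id)

    @staticmethod
    def _fuse_bn(kernel, bn):
        # y = gamma * (x - mean) / std + beta  ->  scale kernel, shift bias.
        std = (bn.running_var + bn.eps).sqrt()
        scale = (bn.weight / std).reshape(-1, 1, 1, 1)
        return kernel * scale, bn.bias - bn.running_mean * bn.weight / std

# Sanity check: the fused conv reproduces the two-branch output (in eval mode,
# so BN uses its running statistics).
blk = RepBlock(8).eval()
x = torch.randn(1, 8, 16, 16)
y = blk(x)
blk.reparameterize()
assert torch.allclose(y, blk(x), atol=1e-5)
```

The fusion is exact because both branches are linear at inference time, which is what lets the training-time feature reuse disappear from the deployed graph, in contrast to a concatenation that must actually execute on hardware.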