Deploying convolutional neural networks (CNNs) on embedded devices is difficult due to their limited memory and computation resources. The redundancy in feature maps is an important characteristic of successful CNNs, but it has rarely been investigated in neural architecture design. This paper proposes a novel Ghost module to generate more feature maps from cheap operations. Starting from a set of intrinsic feature maps, we apply a series of linear transformations with cheap cost to generate many ghost feature maps that fully reveal the information underlying the intrinsic features. The proposed Ghost module can be used as a plug-and-play component to upgrade existing convolutional neural networks. Ghost bottlenecks are designed to stack Ghost modules, and the lightweight GhostNet can then be easily established. Experiments conducted on benchmarks demonstrate that the proposed Ghost module is an impressive alternative to convolution layers in baseline models, and our GhostNet achieves higher recognition performance (e.g., $75.7\%$ top-1 accuracy) than MobileNetV3 with similar computational cost on the ImageNet ILSVRC-2012 classification dataset. Code is available at https://github.com/huawei-noah/ghostnet
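The idea of generating ghost features from intrinsic ones can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the choice of a $1{\times}1$ primary convolution, $3{\times}3$ depthwise kernels for the cheap operations, and a ghost ratio of 2 (half intrinsic, half ghost maps) are illustrative assumptions; the paper's actual module is configurable in kernel size and ratio.

```python
import numpy as np

def depthwise_conv(fmap, kernel):
    """Naive 'same'-padded 2D convolution of one feature map with one
    small kernel -- this is the 'cheap linear transformation'."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(fmap, ((ph, ph), (pw, pw)))
    out = np.zeros_like(fmap)
    for i in range(fmap.shape[0]):
        for j in range(fmap.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

def ghost_module(x, point_w, cheap_k):
    """Sketch of a Ghost module with ratio 2.
    x:       (C_in, H, W) input feature maps
    point_w: (C_int, C_in) weights of a 1x1 'primary' convolution
             producing C_int intrinsic maps
    cheap_k: (C_int, d, d) one small depthwise kernel per intrinsic map
    Returns (2 * C_int, H, W): intrinsic maps + their ghost maps."""
    # Primary (expensive) part: ordinary 1x1 convolution.
    intrinsic = np.tensordot(point_w, x, axes=([1], [0]))  # (C_int, H, W)
    # Cheap part: one depthwise conv per intrinsic map -> ghost maps.
    ghosts = np.stack([depthwise_conv(intrinsic[m], cheap_k[m])
                       for m in range(intrinsic.shape[0])])
    return np.concatenate([intrinsic, ghosts], axis=0)
```

Note the cost split: the $1{\times}1$ convolution costs $C_{int} \cdot C_{in}$ multiplications per pixel, while each ghost map costs only $d^2$ multiplications per pixel regardless of $C_{in}$, which is why doubling the output channels this way is much cheaper than doubling the width of an ordinary convolution.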