The accuracy of deep convolutional neural networks (CNNs) generally improves when fueled with high-resolution images. However, this often comes at a high computational cost and a high memory footprint. Inspired by the fact that not all regions in an image are task-relevant, we propose a novel framework that performs efficient image classification by processing a sequence of relatively small inputs, which are strategically selected from the original image with reinforcement learning. Such a dynamic decision process naturally facilitates adaptive inference at test time, i.e., it can be terminated once the model is sufficiently confident about its prediction, thus avoiding further redundant computation. Notably, our framework is general and flexible, as it is compatible with most state-of-the-art lightweight CNNs (such as MobileNets, EfficientNets, and RegNets), which can be conveniently deployed as the backbone feature extractor. Experiments on ImageNet show that our method consistently improves the computational efficiency of a wide variety of deep models. For example, it further reduces the average latency of the highly efficient MobileNet-V3 on an iPhone XS Max by 20% without sacrificing accuracy. Code and pre-trained models are available at https://github.com/blackfeather-wang/GFNet-Pytorch.
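The adaptive inference described above can be illustrated with a minimal sketch: the model processes one small input at a time and stops as soon as its softmax confidence exceeds a threshold. This is not the authors' implementation; the function name, the threshold value, and the per-glimpse logit vectors (standing in for backbone outputs on the RL-selected crops) are all hypothetical.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over a 1-D logit vector.
    e = np.exp(logits - logits.max())
    return e / e.sum()

def adaptive_inference(per_glimpse_logits, threshold=0.9):
    """Sketch of confidence-based early termination.

    per_glimpse_logits: list of logit vectors, one per sequentially
    processed small input (hypothetical stand-in for the backbone's
    output on each selected region).
    Returns (predicted class, number of glimpses actually used).
    """
    for used, logits in enumerate(per_glimpse_logits, start=1):
        probs = softmax(logits)
        if probs.max() >= threshold:
            # Confident enough: terminate early and skip the
            # remaining (redundant) computation.
            return int(probs.argmax()), used
    # Budget exhausted: fall back to the last prediction.
    return int(probs.argmax()), used
```

In practice the savings come from the fact that "easy" images exit after one or two glimpses, so the average cost per image drops even though hard images still use the full budget.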