Deep convolutional neural networks (CNNs) often feature sophisticated designs with numerous convolutional layers and learnable parameters in pursuit of accuracy. To alleviate the expensive cost of deploying them on mobile devices, recent works have made great efforts to excavate redundancy in pre-defined architectures. Nevertheless, the redundancy in the input resolution of modern CNNs has not been fully investigated: the resolution of the input image is usually fixed. In this paper, we observe that the smallest resolution at which a given image can be accurately predicted differs from image to image, even under the same neural network. To this end, we propose a novel dynamic-resolution network (DRNet) in which the resolution is determined dynamically for each input sample. Specifically, a resolution predictor with negligible computational cost is devised and optimized jointly with the target network. In practice, the predictor learns the smallest resolution that retains, or even exceeds, the original recognition accuracy for each image. During inference, each input image is resized to its predicted resolution to minimize the overall computational burden. We then conduct extensive experiments on several benchmark networks and datasets. The results show that our DRNet can be embedded in any off-the-shelf network architecture to obtain a considerable reduction in computational complexity. For instance, compared with the original ResNet-50 on ImageNet, DRNet achieves similar performance with about 34% less computation, and gains 1.4% accuracy with a 10% computation reduction.
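The inference procedure described above (a lightweight predictor picks the smallest adequate resolution per image, then the image is resized before entering the main network) can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual implementation: the candidate resolutions, the tiny linear predictor, and all function names here are assumptions introduced for illustration.

```python
import numpy as np

# Illustrative candidate resolutions, ordered from smallest to largest
# (hypothetical values; the paper's actual candidate set is not shown here).
CANDIDATE_RESOLUTIONS = [96, 128, 160, 224]

def nearest_resize(img, size):
    """Nearest-neighbor resize of an HxWxC image to size x size."""
    h, w = img.shape[:2]
    rows = np.arange(size) * h // size
    cols = np.arange(size) * w // size
    return img[rows][:, cols]

def predict_resolution(img, weights, bias):
    """Toy stand-in for the resolution predictor: a linear map over a
    cheap 8x8 summary of the image, producing one logit per candidate.
    The real predictor is a small network trained jointly with the
    target classifier."""
    feat = nearest_resize(img, 8).ravel()
    logits = feat @ weights + bias
    return CANDIDATE_RESOLUTIONS[int(np.argmax(logits))]

# Dynamic-resolution inference on a random "image".
rng = np.random.default_rng(0)
img = rng.random((224, 224, 3))
w = rng.standard_normal((8 * 8 * 3, len(CANDIDATE_RESOLUTIONS)))
b = rng.standard_normal(len(CANDIDATE_RESOLUTIONS))

size = predict_resolution(img, w, b)      # per-image predicted resolution
resized = nearest_resize(img, size)       # resize before the main network
print(resized.shape)
```

Because the predictor only sees a fixed low-resolution summary, its cost is negligible relative to the main network, which is the property the abstract relies on for the overall computation savings.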