Convolutional Neural Networks (CNNs) have achieved great success due to the powerful feature learning ability of convolution layers. Specifically, the standard convolution traverses the input images/features using a sliding-window scheme to extract features. However, not all windows contribute equally to the prediction results of CNNs. In practice, the convolutional operation on some windows (e.g., smooth windows that contain very similar pixels) can be highly redundant and may introduce noise into the computation. Such redundancy may not only deteriorate performance but also incur unnecessary computational cost. Thus, it is important to reduce the computational redundancy of convolution to improve performance. To this end, we propose a Content-aware Convolution (CAC) that automatically detects smooth windows and applies a 1x1 convolutional kernel in place of the original large kernel. In this way, we are able to effectively avoid redundant computation on similar pixels. By replacing the standard convolution in CNNs with our CAC, the resultant models yield significantly better performance at lower computational cost than the baseline models with the standard convolution. More critically, we are able to dynamically allocate suitable computational resources according to the smoothness of different images, making content-aware computation possible. Extensive experiments on various computer vision tasks demonstrate the superiority of our method over existing methods.
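The core idea above can be sketched as follows. This is a minimal, illustrative NumPy implementation for a single-channel input, not the paper's actual method: the smoothness test (window variance against a threshold `smooth_thresh`) and the choice of the kernel's center weight as the 1x1 replacement are assumptions made for clarity.

```python
import numpy as np

def cac_conv2d(x, kernel, smooth_thresh=1e-3):
    """Sketch of Content-aware Convolution (CAC) on a 2-D array.

    For each sliding window, if the pixel variance falls below
    `smooth_thresh`, the window is treated as smooth and only a 1x1
    kernel (here: the center weight of `kernel`) is applied; otherwise
    the full k x k convolution is computed. The variance criterion and
    threshold are illustrative assumptions, not the paper's exact
    smoothness detector.
    """
    k = kernel.shape[0]
    pad = k // 2
    xp = np.pad(x, pad, mode="edge")   # same-size output via edge padding
    out = np.empty_like(x, dtype=float)
    center = kernel[pad, pad]          # 1x1 replacement weight
    h, w = x.shape
    for i in range(h):
        for j in range(w):
            win = xp[i:i + k, j:j + k]
            if win.var() < smooth_thresh:   # smooth window: cheap 1x1 path
                out[i, j] = center * x[i, j]
            else:                           # textured window: full kernel
                out[i, j] = (win * kernel).sum()
    return out
```

On a perfectly uniform image every window takes the cheap 1x1 path, while on textured regions the full kernel is used, which is how the computation adapts to image content.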