Image restoration tasks have achieved tremendous performance improvements with the rapid advancement of deep neural networks. However, most prevalent deep learning models perform inference statically, ignoring that images vary in restoration difficulty and that lightly degraded images can be well restored by slimmer subnetworks. To this end, we propose a new solution pipeline dubbed ClassPruning that utilizes networks with different capabilities to process images of varying restoration difficulty. In particular, a lightweight classifier first predicts the restoration difficulty of the input image; sparse subnetworks with matching capabilities are then sampled from the base restoration network by performing dynamic N:M fine-grained structured pruning according to the predicted difficulty. We further propose a novel training strategy along with two additional loss terms to stabilize training and improve performance. Experiments demonstrate that ClassPruning can help existing methods save approximately 40% of FLOPs while maintaining performance.
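To make the pipeline concrete, below is a minimal sketch (not the authors' code) of N:M fine-grained structured pruning: within every group of M consecutive weights, only the N largest-magnitude weights are kept. The mapping from the classifier's predicted difficulty to an (N, M) ratio, the `difficulty_to_nm` table, and the example weight shapes are all illustrative assumptions.

```python
# Minimal sketch of difficulty-conditioned N:M structured pruning, assuming the
# classifier's output selects how sparse the sampled subnetwork should be.
import torch

def nm_prune_mask(weight: torch.Tensor, n: int, m: int) -> torch.Tensor:
    """Binary mask keeping the n largest-magnitude weights in each group of m consecutive weights."""
    flat = weight.reshape(-1, m)                      # group consecutive weights
    idx = flat.abs().topk(n, dim=1).indices           # indices of the n largest per group
    mask = torch.zeros_like(flat)
    mask.scatter_(1, idx, 1.0)                        # 1 = kept, 0 = pruned
    return mask.reshape(weight.shape)

# Hypothetical mapping: easier images -> sparser subnetwork -> fewer FLOPs.
difficulty_to_nm = {"easy": (1, 4), "medium": (2, 4), "hard": (4, 4)}

weight = torch.randn(64, 64)                          # e.g. one layer's weight (numel divisible by m)
n, m = difficulty_to_nm["easy"]
sparse_weight = weight * nm_prune_mask(weight, n, m)  # weights of the sampled sparse subnetwork
```

In this sketch the pruning is applied per layer at inference time, so the same base network can realize several subnetworks of different capacity without storing separate models, which is consistent with the dynamic-inference motivation above.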