Image restoration tasks have witnessed great performance improvements in recent years through the development of large deep models. Despite their outstanding performance, the heavy computation demanded by these deep models has restricted the practical application of image restoration. To lift this restriction, the size of the networks must be reduced while accuracy is maintained. Recently, N:M structured pruning has emerged as one of the effective and practical pruning approaches for making a model efficient under an accuracy constraint. However, it fails to account for the different computational complexities and performance requirements of the individual layers of an image restoration network. To further optimize the trade-off between efficiency and restoration accuracy, we propose a novel pruning method that determines the pruning ratio for N:M structured sparsity at each layer. Extensive experimental results on super-resolution and deblurring tasks demonstrate the efficacy of our method, which outperforms previous pruning methods significantly. A PyTorch implementation of the proposed method will be publicly available at https://github.com/JungHunOh/SLS_CVPR2022.
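To make the underlying notion of N:M structured sparsity concrete: within every group of M consecutive weights, only the N weights of largest magnitude are kept and the rest are zeroed. The sketch below is a generic, illustrative implementation of this fine-grained pruning pattern in NumPy; the function name `nm_prune` is hypothetical and this is not the layer-wise ratio-selection method proposed in the paper.

```python
import numpy as np

def nm_prune(weights: np.ndarray, n: int, m: int) -> np.ndarray:
    """Illustrative N:M structured pruning (not the paper's method):
    in each group of m consecutive weights, zero out all but the
    n entries with the largest magnitude."""
    flat = weights.reshape(-1, m)
    # Indices of the (m - n) smallest-magnitude entries per group.
    drop_idx = np.argsort(np.abs(flat), axis=1)[:, : m - n]
    mask = np.ones_like(flat)
    np.put_along_axis(mask, drop_idx, 0.0, axis=1)
    return (flat * mask).reshape(weights.shape)

# Example with 2:4 sparsity: each group of 4 weights keeps its
# 2 largest-magnitude entries.
w = np.array([0.1, -0.9, 0.3, 0.05, 0.7, -0.2, 0.02, 0.4])
print(nm_prune(w, n=2, m=4))
```

Under 2:4 sparsity (the pattern supported by NVIDIA Ampere sparse tensor cores), exactly half of the weights in every group of four are zero, which enables hardware-accelerated inference; a layer-wise choice of the ratio, as the paper proposes, would vary n/m per layer instead of fixing one pattern for the whole network.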