Neural architecture search (NAS) has recently reshaped our understanding of various vision tasks. Following the success of NAS in high-level vision tasks, it should be possible to find a memory- and computation-efficient solution via NAS that delivers highly competitive denoising performance. However, the optimization gap between the super-network and the sub-architectures remains an open issue in both low-level and high-level vision. In this paper, we present a novel approach to closing this gap by connecting model-guided design with NAS (MoD-NAS) and demonstrate its application to image denoising. Specifically, we propose to construct a new search space under a model-guided framework and to develop more stable and efficient differentiable search strategies. MoD-NAS employs a highly reusable width search strategy and a densely connected search block to automatically select the operations of each layer as well as the network width and depth via gradient descent. During the search process, the proposed MoD-NAS avoids mode collapse thanks to the smoother search space designed under the model-guided framework. Experimental results on several popular datasets show that MoD-NAS achieves even better PSNR performance than current state-of-the-art methods with fewer parameters, fewer FLOPs, and less testing time.
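The idea of selecting each layer's operation via gradient descent can be illustrated with a minimal sketch of a DARTS-style continuous relaxation: the layer output is a softmax-weighted mixture of candidate operations, and the architecture weights are optimized against a toy denoising objective. All names, candidate operations, and hyperparameters here are illustrative assumptions, not details of the MoD-NAS method itself.

```python
import numpy as np

def softmax(a):
    e = np.exp(a - a.max())
    return e / e.sum()

# Toy candidate operations for one layer (stand-ins for real conv ops).
ops = [lambda x: x,             # identity / skip connection
       lambda x: 0.5 * x,       # scaled-output stand-in
       lambda x: np.tanh(x)]    # nonlinear stand-in

def mixed_op(x, alpha):
    """Continuous relaxation: softmax-weighted sum of all candidate ops."""
    w = softmax(alpha)
    return sum(wi * op(x) for wi, op in zip(w, ops))

# Toy denoising objective: map a noisy signal back toward a clean target.
rng = np.random.default_rng(0)
clean = rng.normal(size=64)
noisy = clean + 0.1 * rng.normal(size=64)

alpha = np.zeros(len(ops))      # architecture parameters for this layer
lr, eps = 0.1, 1e-4
for _ in range(200):
    # Finite-difference gradient of the MSE loss w.r.t. alpha
    # (a real implementation would use autograd on the full network).
    base = np.mean((mixed_op(noisy, alpha) - clean) ** 2)
    grad = np.zeros_like(alpha)
    for i in range(len(alpha)):
        a2 = alpha.copy()
        a2[i] += eps
        grad[i] = (np.mean((mixed_op(noisy, a2) - clean) ** 2) - base) / eps
    alpha -= lr * grad

# After the search, the candidate with the largest weight is selected
# as the discrete operation for this layer.
best = int(np.argmax(alpha))
```

Since the noisy input is already close to the clean target, the search drives the weights toward the identity operation; in the real method, the same relaxation lets width and depth choices be optimized jointly with the operations.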