Modern solutions to the single-image super-resolution (SISR) problem using deep neural networks aim not only at better accuracy but also at lighter, more computationally efficient models. To that end, neural architecture search (NAS) approaches have recently shown tremendous potential. In the same spirit, in this paper we propose a novel trilevel NAS method for SISR that strikes a better balance between efficiency metrics and performance. Unlike existing NAS methods, our search is more complete and therefore yields an efficient, optimized, and compressed architecture. We innovatively introduce a trilevel search-space model, i.e., hierarchical modeling of network-, cell-, and kernel-level structures. To make the search over the trilevel space differentiable and efficient, we exploit a new sparsestmax technique that excels at generating sparse distributions over individual neural architecture candidates, so that they can be better disentangled for the final selection from the enlarged search space. We further introduce a sorting technique into the sparsestmax relaxation for better network-level compression. The proposed NAS optimization additionally enables simultaneous search and training in a single phase, reducing both search time and training time. Comprehensive evaluations on benchmark datasets show our method's clear superiority over state-of-the-art NAS in terms of the trade-off between model size, performance, and efficiency.
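The sparse relaxations the abstract refers to belong to the family of simplex-projection operators. As a minimal illustration (not the paper's exact sparsestmax, whose sorting-based compression step is specific to this work), the sketch below implements the classic sparsemax projection in numpy: unlike softmax, it can assign exactly zero probability to weak architecture candidates, which is what allows candidates to be disentangled and pruned during the differentiable search.

```python
import numpy as np

def sparsemax(z):
    """Euclidean projection of logits z onto the probability simplex.

    Follows the closed-form solution of Martins & Astudillo (2016):
    sort the logits, find the support size, compute a threshold tau,
    and clip. Entries below tau become exactly zero, yielding a
    sparse distribution over candidates.
    """
    z = np.asarray(z, dtype=float)
    z_sorted = np.sort(z)[::-1]                 # logits in decreasing order
    cssv = np.cumsum(z_sorted)                  # running sums of sorted logits
    k = np.arange(1, len(z) + 1)
    support = 1 + k * z_sorted > cssv           # which entries stay in the support
    k_z = k[support][-1]                        # support size
    tau = (cssv[support][-1] - 1.0) / k_z       # threshold subtracted from logits
    return np.maximum(z - tau, 0.0)

# Hypothetical scores for three candidate operations in a search cell:
probs = sparsemax([2.0, 1.0, 0.1])
# probs -> [1.0, 0.0, 0.0]: all mass on the strongest candidate,
# the weaker ones are pruned exactly to zero (softmax would keep them positive).
```

The output still sums to one, so it remains a valid architecture distribution; the exact zeros are what let the final discrete architecture be read off directly from the relaxed search.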