Though recent years have witnessed remarkable progress in single image super-resolution (SISR) with the rapid development of deep neural networks (DNNs), deep learning methods face computation and memory consumption issues in practice, especially on resource-limited platforms such as mobile devices. To overcome this challenge and facilitate real-time deployment of SISR on mobile devices, we combine neural architecture search with pruning search and propose an automatic search framework that derives sparse super-resolution (SR) models with high image quality while satisfying the real-time inference requirement. To decrease the search cost, we leverage a weight sharing strategy by introducing a supernet and decouple the search problem into three stages: supernet construction, compiler-aware architecture and pruning search, and compiler-aware pruning ratio search. With the proposed framework, we are the first to achieve real-time SR inference (with only tens of milliseconds per frame) for 720p resolution with competitive image quality (in terms of PSNR and SSIM) on mobile platforms (Samsung Galaxy S20).
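The pruning ratio search stage can be illustrated with a minimal sketch: per-layer magnitude pruning combined with a search over candidate sparsity ratios, picking the lowest ratio whose estimated latency fits a per-frame budget. This is an illustrative toy, not the paper's method; the ratio candidates and the `latency` cost model below are hypothetical stand-ins for the compiler-aware latency model used in the framework.

```python
import random

def magnitude_prune(weights, ratio):
    """Zero out the fraction `ratio` of weights with smallest magnitude."""
    k = int(len(weights) * ratio)
    if k == 0:
        return list(weights)
    # Threshold at the k-th smallest absolute value; keep strictly larger entries.
    threshold = sorted(abs(w) for w in weights)[k - 1]
    return [w if abs(w) > threshold else 0.0 for w in weights]

def search_pruning_ratio(weights, latency_fn, budget_ms,
                         candidates=(0.3, 0.5, 0.7, 0.9)):
    """Return the smallest candidate ratio (keeping the most weights,
    hence the best quality proxy) whose pruned model meets the budget."""
    for ratio in sorted(candidates):
        pruned = magnitude_prune(weights, ratio)
        if latency_fn(pruned) <= budget_ms:
            return ratio, pruned
    return None, None  # no candidate satisfies the real-time constraint

# Toy example: 1000 random weights, latency proportional to nonzeros.
random.seed(0)
w = [random.gauss(0.0, 1.0) for _ in range(1000)]
latency = lambda ws: sum(1 for x in ws if x != 0.0) * 0.05  # hypothetical: 0.05 ms/weight

ratio, pruned = search_pruning_ratio(w, latency, budget_ms=40.0)
```

Here the dense model costs 50 ms, exceeding the 40 ms budget, so the search settles on the smallest sparsity that restores real-time inference; in the actual framework this loop would be driven by compiler-measured latency rather than a weight count.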