To date, the best-performing blind super-resolution (SR) techniques follow one of two paradigms: A) generate synthetic low-resolution/high-resolution (LR-HR) pairs and train a standard SR network on them, or B) attempt to predict the degradations an LR image has suffered and use these to inform a customised SR network. Despite significant progress, subscribers to the former miss out on useful degradation information that could be used to improve the SR process. On the other hand, followers of the latter rely on weaker SR networks, which are significantly outperformed by the latest architectural advancements. In this work, we present a framework for combining any blind SR prediction mechanism with any deep SR network, using a metadata insertion block to insert prediction vectors into SR network feature maps. Through comprehensive testing, we show that state-of-the-art contrastive and iterative prediction schemes can be successfully combined with high-performance SR networks such as RCAN and HAN within our framework. We show that our hybrid models consistently achieve stronger SR performance than both their non-blind and blind counterparts. Furthermore, we demonstrate our framework's robustness by predicting degradations and super-resolving images from a complex pipeline of blurring, noise and compression.
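As a rough illustration of the metadata insertion idea, one common instantiation is to tile the predicted degradation vector across the spatial grid and concatenate it to a network feature map along the channel axis (as in SRMD-style "vector stretching"). The sketch below uses plain nested lists and hypothetical shapes; the paper's actual block operates inside deep SR networks such as RCAN or HAN, and other fusion mechanisms (e.g. affine modulation) are possible.

```python
def insert_metadata(feature_map, degradation_vec):
    """Concatenate a tiled degradation vector onto a feature map.

    feature_map: nested lists of shape [C][H][W]
    degradation_vec: length-D list of predicted degradation parameters
    Returns a [(C+D)][H][W] map with the vector repeated at every pixel.
    (Hypothetical shapes/names for illustration only.)
    """
    h = len(feature_map[0])
    w = len(feature_map[0][0])
    # Tile each scalar of the prediction vector over the full HxW grid,
    # producing D extra channels that carry the degradation estimate.
    tiled = [[[v] * w for _ in range(h)] for v in degradation_vec]
    return feature_map + tiled

# Toy example: a 2-channel 2x2 feature map and a 3-dim degradation vector.
fmap = [[[1.0, 2.0], [3.0, 4.0]],
        [[5.0, 6.0], [7.0, 8.0]]]
out = insert_metadata(fmap, [0.1, 0.2, 0.3])
# out now has 5 channels; the last 3 hold the tiled prediction vector.
```

Downstream convolutional layers then see the degradation estimate at every spatial location, letting a single SR network condition its restoration on the predicted blur, noise, or compression level.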