Despite remarkable progress in recent years, single-image super-resolution methods are developed under several limitations. Specifically, they are trained on fixed content domains with certain degradations, whether synthetic or real, and the priors they learn are prone to overfitting the training configuration. As a result, their generalization to novel domains, such as drone top-view data captured across a range of altitudes, remains unknown. Yet pairing drones with proper image super-resolution is of great value: it would enable drones to fly higher and cover larger fields of view while maintaining high image quality. To answer these questions and pave the way towards drone image super-resolution, we explore this application with a particular focus on the single-image case. We propose a novel drone image dataset, with scenes captured at low and high resolutions and across a span of altitudes. Our results show that off-the-shelf state-of-the-art networks suffer a significant drop in performance on this new domain. We additionally show that simple fine-tuning and incorporating altitude awareness into the network's architecture both improve reconstruction performance.