The exposed cameras on UAVs can shake, shift, or even malfunction in harsh weather, and their add-on components (e.g., Dupont lines) are vulnerable to damage. A low-cost T-OLED overlay can be placed around the camera to protect it, but the overlay introduces image degradation. In particular, temperature variations in the atmosphere can create mist that adheres to the T-OLED, causing secondary disasters (i.e., even more severe image degradation) during the UAV's filming. To solve the image degradation caused by the T-OLED overlay, in this paper we propose a new method that improves the visual experience by enhancing the texture and color of the captured images. Specifically, our method trains a lightweight network to estimate a low-rank affine grid from the input image, and then uses this grid to enhance the image at block granularity. The advantages of our method are that no reference image is required and the loss function is derived from visual experience. In addition, our model can recover images of arbitrary resolution with high quality in real time. Finally, we discuss the limitations of our model and the collected datasets, which cover both daytime and nighttime scenes.
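The block-granularity enhancement step can be illustrated with a minimal sketch: a coarse grid of per-block affine color transforms (as a network might predict) is applied to the input image. The function name, grid layout, and NumPy implementation below are illustrative assumptions, not the paper's actual code.

```python
import numpy as np

def apply_affine_grid(img, grid):
    """Apply a coarse grid of per-block affine color transforms.

    img:  (H, W, 3) float array, RGB values in [0, 1].
    grid: (gh, gw, 3, 4) array of affine transforms; grid[i, j]
          maps an RGB pixel p to A @ p + b, where
          A = grid[i, j, :, :3] and b = grid[i, j, :, 3].

    Hypothetical helper for illustration; in the paper the grid is
    predicted by a lightweight network, not supplied by the caller.
    """
    H, W, _ = img.shape
    gh, gw = grid.shape[:2]
    bh, bw = -(-H // gh), -(-W // gw)  # ceil of block height/width
    out = np.empty_like(img)
    for i in range(gh):
        for j in range(gw):
            block = img[i*bh:(i+1)*bh, j*bw:(j+1)*bw]  # (h, w, 3)
            A = grid[i, j, :, :3]
            b = grid[i, j, :, 3]
            # Per-pixel affine map: out_c = sum_k A[c, k] * p_k + b_c
            out[i*bh:(i+1)*bh, j*bw:(j+1)*bw] = block @ A.T + b
    return np.clip(out, 0.0, 1.0)
```

Because the grid is much smaller than the image, this formulation is resolution-independent: the same coarse grid can enhance an image of any size, which is consistent with the real-time, arbitrary-resolution claim above.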