Although distortion correction for fisheye images has been extensively studied, the correction of fisheye videos remains an elusive challenge. Existing image-based correction methods process the frames of a fisheye video independently and ignore the correlation across the sequence, resulting in temporal jitter in the corrected video. To solve this problem, we propose a temporal weighting scheme that obtains a plausible global optical flow and mitigates the jitter effect by progressively reducing the weight of frames. Subsequently, we observe that the inter-frame optical flow of the video helps perceive the local spatial deformation of the fisheye video. Therefore, we derive the spatial deformation from the flows of the fisheye and distortion-free videos, thereby enhancing the local accuracy of the predicted result. However, correcting each frame independently disrupts the temporal correlation. Owing to the nature of fisheye video, a distorted moving object may find its distortion-free pattern at another moment. To this end, a temporal deformation aggregator is designed to reconstruct the deformation correlation between frames and provide a reliable global feature. Our method achieves end-to-end correction and demonstrates superiority in correction quality and stability compared with state-of-the-art (SOTA) correction methods.
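The temporal weighting scheme described above can be illustrated with a minimal sketch. This is purely illustrative and not the paper's actual formulation: the function name, the exponential `decay` parameter, and the choice of a weighted average are all assumptions used to show how progressively down-weighting earlier frames yields a single blended global flow.

```python
import numpy as np

def temporal_weighted_flow(flows, decay=0.8):
    """Illustrative sketch (not the paper's method): blend per-frame
    optical flows into one 'global' flow, progressively reducing the
    weight of older frames so recent frames dominate.

    flows: list of (H, W, 2) flow fields, most recent last.
    decay: assumed exponential decay factor per step back in time.
    """
    n = len(flows)
    # Weight for frame t is decay**(n-1-t): the newest frame gets 1.0,
    # each earlier frame is multiplied by a further factor of `decay`.
    weights = np.array([decay ** (n - 1 - t) for t in range(n)])
    weights /= weights.sum()  # normalize so the weights sum to 1
    return sum(w * f for w, f in zip(weights, flows))
```

With `decay=1.0` this reduces to a plain average over all frames; smaller values shift the estimate toward the most recent frames, which is one plausible way to suppress jitter from stale motion.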