Existing homography and optical flow methods are error-prone in challenging scenes such as fog, rain, night, and snow, because basic assumptions such as brightness and gradient constancy are violated. To address this issue, we present an unsupervised learning approach that fuses gyroscope data into homography and optical flow learning. Specifically, we first convert gyroscope readings into a motion field, named the gyro field. Second, we design a self-guided fusion module (SGF) that fuses the background motion extracted from the gyro field with the optical flow and guides the network to focus on motion details. Meanwhile, we propose a homography decoder module (HD) that combines the gyro field with intermediate results of SGF to produce the homography. To the best of our knowledge, this is the first deep learning framework that fuses gyroscope data and image content for both deep homography and optical flow learning. To validate our method, we propose a new dataset that covers both regular and challenging scenes. Experiments show that our method outperforms state-of-the-art methods in both regular and challenging scenes.
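The gyro-field construction can be illustrated with a minimal sketch. This is not the paper's implementation; it assumes the camera intrinsics `K` are known and that the gyroscope readings have already been integrated into a rotation matrix `R` over the inter-frame interval. Under a pure rotation, pixels move according to the homography H = K R K⁻¹, and the gyro field is the per-pixel displacement this homography induces.

```python
import numpy as np

def gyro_field(R, K, h, w):
    """Per-pixel motion field induced by a pure camera rotation R.

    Pixels move under the homography H = K @ R @ inv(K); the gyro
    field is the displacement H(x) - x at every pixel x.
    R : (3, 3) rotation integrated from gyroscope readings (assumed given).
    K : (3, 3) camera intrinsic matrix (assumed known).
    Returns an (h, w, 2) array of (dx, dy) displacements.
    """
    H = K @ R @ np.linalg.inv(K)
    ys, xs = np.mgrid[0:h, 0:w]
    # Homogeneous pixel coordinates, shape (3, N)
    pts = np.stack([xs, ys, np.ones_like(xs)], axis=-1).reshape(-1, 3).T
    warped = H @ pts
    warped = warped[:2] / warped[2]            # back to inhomogeneous coords
    return (warped - pts[:2]).T.reshape(h, w, 2)

# Zero rotation yields a (numerically) zero motion field.
K = np.array([[500., 0., 32.], [0., 500., 24.], [0., 0., 1.]])
print(np.abs(gyro_field(np.eye(3), K, 48, 64)).max())  # ≈ 0
```

Because a gyroscope senses only rotation, this field captures the rotational (background) component of camera motion; translation and independently moving objects are what the learned optical flow must still account for.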