Existing optical flow methods often fail in challenging scenes such as fog, rain, and night, because basic optical flow assumptions such as brightness and gradient constancy are violated. To address this problem, we present an unsupervised learning approach that fuses gyroscope data into optical flow learning. Specifically, we first convert gyroscope readings into a motion field, which we call the gyro field. Second, we design a self-guided fusion module that fuses the background motion extracted from the gyro field with the optical flow and guides the network to focus on motion details. To the best of our knowledge, this is the first deep learning-based framework that fuses gyroscope data and image content for optical flow learning. To validate our method, we propose a new dataset that covers both regular and challenging scenes. Experiments show that our method outperforms state-of-the-art methods in both regular and challenging scenes. Code and dataset are available at https://github.com/megvii-research/GyroFlow.
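The gyro-field conversion can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes the gyroscope readings have already been integrated into a camera rotation matrix `R`, and that pure rotation between frames induces the homography `H = K R K^{-1}` (with `K` the camera intrinsics), from which a dense per-pixel displacement field is derived. The function name and parameters are illustrative.

```python
import numpy as np

def gyro_field(R, K, h, w):
    """Convert a camera rotation R (from integrated gyroscope readings)
    into a dense h-by-w background motion field via the rotation-induced
    homography H = K R K^{-1}. Illustrative sketch, not the paper's code."""
    H = K @ R @ np.linalg.inv(K)
    # Homogeneous coordinates of every pixel in the image grid.
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float64)
    pts = np.stack([xs, ys, np.ones_like(xs)], axis=0).reshape(3, -1)
    # Warp the grid by the homography and dehomogenize.
    warped = H @ pts
    warped = warped[:2] / warped[2:]
    # The gyro field is the per-pixel displacement of the warped grid.
    return (warped - pts[:2]).reshape(2, h, w)
```

For the identity rotation the field is zero everywhere, as expected; a small rotation about the vertical axis yields a predominantly horizontal field.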