Images captured by mobile devices can be aligned using their gyroscope sensors. The optical image stabilizer (OIS) precludes this possibility by adjusting the image during capture. In this work, we propose a deep network that compensates for the motions introduced by the OIS, so that gyroscopes can be used for image alignment on OIS cameras. To achieve this, first, we record both videos and gyroscope readings with an OIS camera as training data, and convert the gyroscope readings into motion fields. Second, we propose a Fundamental Mixtures motion model for rolling shutter cameras, from which an array of rotations within a frame is extracted as the ground-truth guidance. Third, we train a convolutional neural network that takes gyroscope motions as input to compensate for the OIS motion. Once trained, the compensation network can be applied to other scenes, where image alignment relies purely on gyroscopes without any image content, delivering strong robustness. Experiments show that our results are comparable to those of non-OIS cameras, and outperform image-based alignment results by a relatively large margin.
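To make the gyroscope-to-motion-field conversion mentioned above concrete, the following is a minimal sketch (not the paper's actual pipeline) of how gyroscope readings between two frames can be turned into a 2D motion field under a rotation-only camera model, using the standard homography H = K R K^{-1}. The function names, intrinsics values, and sampling stride are illustrative assumptions, and rolling-shutter per-row rotations as well as the OIS compensation network are deliberately omitted here.

```python
import numpy as np

def rotation_from_gyro(omega, dt):
    """Integrate one angular-velocity sample (rad/s) over dt seconds
    into a rotation matrix via the Rodrigues formula."""
    theta = np.linalg.norm(omega) * dt
    if theta < 1e-12:
        return np.eye(3)
    axis = omega / np.linalg.norm(omega)
    K = np.array([[0, -axis[2], axis[1]],
                  [axis[2], 0, -axis[0]],
                  [-axis[1], axis[0], 0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

def gyro_motion_field(omegas, dts, K_intr, img_h, img_w, stride=64):
    """Map gyroscope readings between two frames to a sparse 2D motion field.

    For purely rotational camera motion, pixels move according to the
    homography H = K R K^{-1}; rolling-shutter effects are ignored in
    this simplified sketch.
    """
    # Accumulate all gyro samples between the two frames into one rotation.
    R = np.eye(3)
    for omega, dt in zip(omegas, dts):
        R = rotation_from_gyro(np.asarray(omega, dtype=float), dt) @ R

    H = K_intr @ R @ np.linalg.inv(K_intr)  # rotation-only homography

    # Sample a regular pixel grid and compute each point's displacement.
    ys, xs = np.mgrid[0:img_h:stride, 0:img_w:stride]
    pts = np.stack([xs.ravel(), ys.ravel(), np.ones(xs.size)])  # 3 x N homogeneous
    warped = H @ pts
    warped = warped[:2] / warped[2]          # back to inhomogeneous coordinates
    flow = warped - pts[:2]                  # per-point motion vectors
    return pts[:2].T, flow.T                 # (N, 2) positions, (N, 2) motions

# Hypothetical example: three gyro samples at 200 Hz, a 1080p frame,
# focal length of about 1400 px.
K_intr = np.array([[1400.0, 0.0, 960.0],
                   [0.0, 1400.0, 540.0],
                   [0.0, 0.0, 1.0]])
omegas = [(0.02, -0.05, 0.01)] * 3           # rad/s around x, y, z
dts = [1.0 / 200] * 3
pos, flow = gyro_motion_field(omegas, dts, K_intr, 1080, 1920)
print(flow[:3])
```

The paper's Fundamental Mixtures model additionally assigns different rotations to different row blocks of a rolling-shutter frame, and the proposed network then corrects these gyroscope-derived motions for the unknown OIS lens shifts; the sketch above only covers the global-rotation baseline.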