Existing video stabilization methods often generate visible distortion or require aggressive cropping of frame boundaries, resulting in a smaller field of view. In this work, we present a frame synthesis algorithm to achieve full-frame video stabilization. We first estimate dense warp fields from neighboring frames and then synthesize the stabilized frame by fusing the warped contents. Our core technical novelty lies in the learning-based hybrid-space fusion that alleviates artifacts caused by optical-flow inaccuracy and fast-moving objects. We validate the effectiveness of our method on the NUS, selfie, and DeepStab video datasets. Extensive experimental results demonstrate the merits of our approach over prior video stabilization methods.
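The warp-and-fuse pipeline described above can be sketched in a few lines. The following is a minimal illustration, not the paper's method: `warp_frame` backward-warps a neighbor frame with a dense flow field using nearest-neighbor sampling (real systems use bilinear interpolation), and `fuse_frames` is a simple per-pixel weighted average standing in for the learned hybrid-space fusion; the function names and the confidence-weight scheme are hypothetical.

```python
import numpy as np

def warp_frame(frame, flow):
    """Backward-warp a (H, W) frame with a dense (H, W, 2) flow field.

    Nearest-neighbor sampling is used for simplicity; a production
    implementation would interpolate bilinearly.
    """
    h, w = frame.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    # flow[..., 0] is the horizontal, flow[..., 1] the vertical displacement
    src_x = np.clip(np.round(xs + flow[..., 0]).astype(int), 0, w - 1)
    src_y = np.clip(np.round(ys + flow[..., 1]).astype(int), 0, h - 1)
    return frame[src_y, src_x]

def fuse_frames(warped, weights):
    """Fuse warped neighbor frames with per-pixel confidence weights.

    A normalized weighted average -- a stand-in for the learned
    hybrid-space fusion, which the paper uses to suppress artifacts
    from inaccurate flow and fast-moving objects.
    """
    warped = np.stack(warped).astype(np.float64)    # (N, H, W)
    weights = np.stack(weights).astype(np.float64)  # (N, H, W)
    weights = weights / (weights.sum(axis=0, keepdims=True) + 1e-8)
    return (warped * weights).sum(axis=0)
```

In practice the flow fields would come from an optical-flow network and the fusion weights from a learned module; this sketch only shows how the warped contents of several neighbors combine into one full-frame output.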