Existing video stabilization methods either require aggressive cropping of frame boundaries or generate distortion artifacts on the stabilized frames. In this work, we present an algorithm for full-frame video stabilization by first estimating dense warp fields. Full-frame stabilized frames can then be synthesized by fusing warped contents from neighboring frames. The core technical novelty lies in our learning-based hybrid-space fusion, which alleviates artifacts caused by optical-flow inaccuracy and fast-moving objects. We validate the effectiveness of our method on the NUS and selfie video datasets. Extensive experimental results demonstrate the merits of our approach over prior video stabilization methods.
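The pipeline sketched in the abstract — warp each neighboring frame toward the stabilized target with a dense warp field, then fuse the warped content to fill regions that a single frame cannot cover — can be illustrated with a minimal NumPy sketch. This is an assumption-laden simplification, not the paper's method: it uses nearest-neighbor backward warping and a plain validity-weighted average in place of the learned hybrid-space fusion, and the function names (`warp_frame`, `fuse_frames`) are hypothetical.

```python
import numpy as np

def warp_frame(frame, flow):
    """Backward-warp a frame with a dense flow field (nearest-neighbor
    sampling for simplicity). Pixels whose source falls outside the
    frame are marked invalid so fusion can fill them from neighbors."""
    h, w = frame.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    src_x = np.round(xs + flow[..., 0]).astype(int)
    src_y = np.round(ys + flow[..., 1]).astype(int)
    valid = (src_x >= 0) & (src_x < w) & (src_y >= 0) & (src_y < h)
    warped = np.zeros_like(frame, dtype=float)
    warped[valid] = frame[src_y[valid], src_x[valid]]
    return warped, valid.astype(float)

def fuse_frames(warped_frames, weights):
    """Fuse warped neighbors by a per-pixel weighted average; in the
    paper this weighting is learned, here it is just the validity mask."""
    num = sum(w[..., None] * f for f, w in zip(warped_frames, weights))
    den = sum(weights)[..., None]
    return num / np.maximum(den, 1e-8)
```

With real flow fields, pixels cropped out of one warped neighbor (weight 0) are filled by other neighbors where they remain visible, which is what yields a full-frame result without boundary cropping.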