In this paper, we propose an algorithm to interpolate between a pair of images of a dynamic scene. While significant progress in frame interpolation has been made in recent years, current approaches are not able to handle images with brightness and illumination changes, which are common even when the images are captured only moments apart. We propose to address this problem by taking advantage of existing optical flow methods, which are highly robust to variations in illumination. Specifically, using the bidirectional flows estimated by an existing pre-trained flow network, we predict the flows from an intermediate frame to the two input images. To do this, we propose to encode the bidirectional flows into a coordinate-based network, powered by a hypernetwork, to obtain a continuous representation of the flow across time. Once we obtain the estimated flows, we use them within an existing blending network to produce the final intermediate frame. Through extensive experiments, we demonstrate that our approach produces significantly better results than state-of-the-art frame interpolation algorithms.
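The central component described above, a time-conditioned hypernetwork that generates the weights of a coordinate-based network representing flow continuously across time, can be illustrated roughly as in the following sketch. All module names, layer sizes, and the overall structure here are illustrative assumptions rather than the paper's actual architecture or training setup.

```python
# Minimal sketch (assumed, not the paper's implementation): a hypernetwork maps a
# time value t to the weights of a tiny coordinate MLP, which in turn maps pixel
# coordinates (x, y) to a 2D flow vector at that time.
import torch
import torch.nn as nn


class HyperNet(nn.Module):
    """Maps a scalar time t in [0, 1] to the weights of a small coordinate MLP."""

    def __init__(self, hidden=64, coord_hidden=32):
        super().__init__()
        # Shapes of the generated coordinate MLP: 2 (x, y) -> coord_hidden -> 2 (u, v)
        self.shapes = [(coord_hidden, 2), (coord_hidden,),
                       (2, coord_hidden), (2,)]
        n_params = sum(torch.Size(s).numel() for s in self.shapes)
        self.net = nn.Sequential(
            nn.Linear(1, hidden), nn.ReLU(),
            nn.Linear(hidden, n_params),
        )

    def forward(self, t):
        # Predict a flat parameter vector, then reshape it into the MLP's weights.
        flat = self.net(t.view(1, 1)).squeeze(0)
        params, i = [], 0
        for s in self.shapes:
            n = torch.Size(s).numel()
            params.append(flat[i:i + n].view(s))
            i += n
        return params  # [W1, b1, W2, b2] of the coordinate MLP


def coord_flow(coords, params):
    """Evaluate the generated coordinate MLP: (N, 2) pixel coords -> (N, 2) flow."""
    W1, b1, W2, b2 = params
    h = torch.relu(coords @ W1.t() + b1)
    return h @ W2.t() + b2


# Usage: query the continuous flow field at an intermediate time t = 0.5.
hyper = HyperNet()
t = torch.tensor(0.5)
coords = torch.rand(1024, 2)           # normalized (x, y) sample locations
flow_t = coord_flow(coords, hyper(t))  # per-pixel flow at time t
```

In this view, fitting the representation would amount to supervising the generated flows at the endpoints (t = 0 and t = 1) with the bidirectional flows from the pre-trained flow network, so that querying an intermediate t yields the flows from the intermediate frame to the two inputs; the resulting flows are then passed to the blending network.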