Video understanding calls for a model to learn the characteristic interplay between static scene content and its dynamics: given an image, the model must be able to predict a future progression of the portrayed scene and, conversely, a video should be explained in terms of its static image content and all the remaining characteristics not present in the initial frame. This naturally suggests a bijective mapping between the video domain and the static content together with the residual information. In contrast to common stochastic image-to-video synthesis, such a model does not merely generate arbitrary videos that progress the initial image. Given this image, it instead provides a one-to-one mapping between residual vectors and videos, with stochastic outcomes arising when sampling the residuals. The approach is naturally implemented using a conditional invertible neural network (cINN) that explains videos by independently modelling static and other video characteristics, thus laying the basis for controlled video synthesis. Experiments on four diverse video datasets demonstrate the effectiveness of our approach in terms of both the quality and diversity of the synthesized results. Our project page is available at https://bit.ly/3t66bnU.
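To make the bijective mapping concrete, the sketch below shows one conditional affine coupling block, the standard building unit of a cINN. This is a minimal illustration in PyTorch, not the authors' implementation: all names (`ConditionalCoupling`, `cond_dim`, `hidden`) and dimensions are hypothetical, and a full model would stack several such blocks with permutations in between, conditioning each on an embedding of the static image.

```python
# Minimal sketch of a conditional affine coupling block (illustrative only).
# Bijectivity holds by construction: the first half of the input passes
# through unchanged, while the second half undergoes an invertible affine
# transform whose parameters depend on the first half and the condition.
import torch
import torch.nn as nn

class ConditionalCoupling(nn.Module):
    """Invertible coupling layer conditioned on a static-image embedding."""

    def __init__(self, dim, cond_dim, hidden=512):
        super().__init__()
        self.half = dim // 2
        # Predicts scale and shift for the second half from the first half
        # concatenated with the conditioning vector.
        self.net = nn.Sequential(
            nn.Linear(self.half + cond_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * (dim - self.half)),
        )

    def forward(self, x, cond):
        x1, x2 = x[:, :self.half], x[:, self.half:]
        s, t = self.net(torch.cat([x1, cond], dim=1)).chunk(2, dim=1)
        s = torch.tanh(s)                     # bound scales for stability
        y2 = x2 * torch.exp(s) + t            # affine transform of second half
        log_det = s.sum(dim=1)                # log|det J| for the likelihood loss
        return torch.cat([x1, y2], dim=1), log_det

    def inverse(self, y, cond):
        y1, y2 = y[:, :self.half], y[:, self.half:]
        s, t = self.net(torch.cat([y1, cond], dim=1)).chunk(2, dim=1)
        s = torch.tanh(s)
        x2 = (y2 - t) * torch.exp(-s)         # exact inverse of the forward pass
        return torch.cat([y1, x2], dim=1)
```

Under these assumptions, the forward direction maps a video representation to a residual vector given the image, and the inverse direction realizes the stochastic synthesis described above: drawing a residual from a standard normal prior and running the inverse pass yields exactly one video progression of the given image per sampled residual.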