Predicting the future frames of a video is a challenging task, in part due to the underlying stochastic real-world phenomena. Prior approaches to this task typically estimate a latent prior characterizing this stochasticity, but do not account for the predictive uncertainty of the (deep learning) model. Such approaches often derive the training signal from the mean-squared error (MSE) between the generated frame and the ground truth, which can lead to sub-optimal training, especially when the predictive uncertainty is high. To this end, we introduce the Neural Uncertainty Quantifier (NUQ), a stochastic quantification of the model's predictive uncertainty, and use it to weight the MSE loss. We propose a hierarchical, variational framework that derives NUQ in a principled manner from a deep, Bayesian graphical model. Our experiments on four benchmark stochastic video prediction datasets show that our proposed framework trains more effectively than state-of-the-art models (especially when the training sets are small), while achieving better video generation quality and diversity across several evaluation metrics.
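To make the loss weighting concrete, consider a minimal sketch in the spirit of standard heteroscedastic regression losses; the paper's actual NUQ weighting is derived from its hierarchical variational framework and may differ, and the notation below is introduced here purely for illustration. If the model predicts a per-frame uncertainty $\sigma_t^2$ alongside the generated frame $\hat{x}_t$, an uncertainty-weighted MSE objective takes the form

$$\mathcal{L} = \sum_{t} \left[ \frac{\lVert \hat{x}_t - x_t \rVert_2^2}{2\sigma_t^2} + \frac{1}{2} \log \sigma_t^2 \right],$$

where $x_t$ is the ground-truth frame. Frames with high predictive uncertainty contribute a down-weighted MSE term, while the $\log \sigma_t^2$ penalty prevents the model from inflating its uncertainty estimates to evade the reconstruction loss.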