In this work we propose a simple unsupervised approach for next frame prediction in video. Instead of directly predicting the pixels in a frame given past frames, we predict the transformations needed for generating the next frame in a sequence, given the transformations of the past frames. This leads to sharper results, while using a smaller prediction model. In order to enable a fair comparison between different video frame prediction models, we also propose a new evaluation protocol. We use generated frames as input to a classifier trained with ground truth sequences. This criterion guarantees that models scoring high are those producing sequences which preserve discriminative features, as opposed to merely penalizing any deviation, plausible or not, from the ground truth. Our proposed approach compares favourably against more sophisticated ones on the UCF-101 data set, while also being more efficient in terms of the number of parameters and computational cost.
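The core idea above, predicting the next frame's transformation rather than its pixels, can be illustrated with a minimal sketch. The paper does not specify this exact parameterization; here we assume, for illustration only, that each inter-frame motion is summarized by a 2x3 affine matrix, that a toy "predictor" linearly extrapolates the last two transforms, and that the next frame is produced by warping the most recent frame with the predicted transform (nearest-neighbour sampling, NumPy only):

```python
import numpy as np

def warp(frame, A):
    """Apply a 2x3 affine transform A to a frame.

    A maps each output pixel (x, y) to its source location in the input
    frame; sampling is nearest-neighbour with border clamping.
    """
    H, W = frame.shape
    ys, xs = np.mgrid[0:H, 0:W]
    coords = np.stack([xs.ravel(), ys.ravel(), np.ones(H * W)])  # (x, y, 1)
    src = A @ coords  # source coordinates for every output pixel
    sx = np.clip(np.rint(src[0]), 0, W - 1).astype(int)
    sy = np.clip(np.rint(src[1]), 0, H - 1).astype(int)
    return frame[sy, sx].reshape(H, W)

def predict_next_frame(frames, transforms):
    """Toy transformation-space predictor (hypothetical, not the paper's model).

    Linearly extrapolates the affine parameters of the last two inter-frame
    transforms, then warps the most recent frame with the predicted transform.
    """
    A_pred = 2 * transforms[-1] - transforms[-2]  # constant-acceleration guess
    return warp(frames[-1], A_pred), A_pred
```

Under constant motion the extrapolated transform equals the true one, so the predicted frame matches the ground-truth next frame exactly; the point of the parameterization is that the model regresses a handful of transform parameters instead of every pixel, which is what keeps the outputs sharp and the predictor small.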