We focus on the task of future frame prediction in videos governed by underlying physical dynamics. We work with object-centric models, i.e., models that explicitly operate on object representations and propagate a loss in the latent space. Specifically, our research builds on recent work by Kipf et al. \cite{kipf&al20}, which predicts the next state via contrastive learning of object interactions in a latent space using a Graph Neural Network. We argue that injecting explicit inductive bias into the model, in the form of general physical laws, not only makes the model more interpretable but also improves its overall predictions. As a natural by-product, our model learns feature maps that closely resemble actual object positions in the image, without any explicit supervision of object positions at training time. In contrast to earlier work \cite{jaques&al20}, which assumes complete knowledge of the dynamics governing the motion in the form of a physics engine, we rely only on general physical laws, such as the fact that the world consists of objects, which have position and velocity. We propose an additional decoder-based loss in the pixel space, imposed in a curriculum manner, to further refine the latent-space predictions. Experiments in multiple settings demonstrate that while the model of Kipf et al. is effective at capturing object interactions, our model is significantly more effective at localising objects, resulting in improved performance in 3 of the 4 domains we experiment with. Additionally, our model learns highly interpretable feature maps resembling actual object positions.