Video prediction methods generally consume substantial computing resources in training and deployment, among which keypoint-based approaches show promising improvement in efficiency by simplifying dense image prediction to lightweight keypoint prediction. However, keypoint locations are often modeled only as continuous coordinates, so noise from semantically insignificant deviations in videos easily disrupts learning stability, leading to inaccurate keypoint modeling. In this paper, we design a new grid keypoint learning framework, aiming at a robust and explainable intermediate keypoint representation for long-term efficient video prediction. We make two major technical contributions. First, we detect keypoints by jumping among candidate locations in our proposed grid space and formulate a condensation loss to encourage meaningful keypoints with strong representative capability. Second, we introduce a 2D binary map to represent the detected grid keypoints and then propagate keypoint locations stochastically by selecting entries in the discrete grid space, thus preserving the spatial structure of keypoints over the long-term horizon for better future frame generation. Extensive experiments verify that our method outperforms state-of-the-art stochastic video prediction methods while saving more than 98% of computing resources. We also demonstrate our method on a robotic-assisted surgery dataset with promising results. Our code is available at https://github.com/xjgaocs/Grid-Keypoint-Learning.
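To make the two contributions concrete, the following is a minimal sketch, assuming PyTorch and keypoints normalized to [0, 1]^2, of how continuous keypoints might be quantized onto a grid to form the 2D binary map, and how next-frame keypoints could be propagated stochastically by sampling grid entries. The function names, grid resolution, and overall framing here are hypothetical illustrations, not the authors' released implementation.

```python
# A minimal sketch, assuming PyTorch and keypoints normalized to [0, 1]^2.
# Names (keypoints_to_grid_map, sample_next_keypoints) and grid_size are
# hypothetical, not taken from the paper's released code.
import torch

def keypoints_to_grid_map(coords: torch.Tensor, grid_size: int = 32) -> torch.Tensor:
    """Quantize continuous (x, y) keypoints onto a grid and build a
    2D binary map with a 1 at each occupied grid cell."""
    idx = (coords.clamp(0, 1) * (grid_size - 1)).round().long()   # (K, 2)
    grid_map = torch.zeros(grid_size, grid_size)
    grid_map[idx[:, 1], idx[:, 0]] = 1.0                          # row = y, col = x
    return grid_map

def sample_next_keypoints(logits: torch.Tensor, num_keypoints: int) -> torch.Tensor:
    """Propagate keypoints stochastically: sample distinct grid entries
    from predicted per-cell scores, so the sampled locations always
    remain on the discrete grid (no continuous-coordinate drift)."""
    h, w = logits.shape
    probs = torch.softmax(logits.flatten(), dim=0)                # over all cells
    flat = torch.multinomial(probs, num_keypoints, replacement=False)
    xs = (flat % w).float() / (w - 1)                             # back to [0, 1]
    ys = (flat // w).float() / (h - 1)
    return torch.stack([xs, ys], dim=1)                           # (K, 2)
```

For instance, `keypoints_to_grid_map(torch.rand(10, 2))` yields a 32x32 binary map; under this reading, a prediction network would consume such maps (rather than raw coordinates), which is what keeps keypoint propagation confined to the discrete grid.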