This paper tackles video prediction from a new dimension: predicting spacetime-varying motions that change incessantly across both space and time. Prior methods mainly capture temporal state transitions but overlook the complex spatiotemporal variations of the motion itself, making it hard for them to adapt to ever-changing motions. We observe that physical-world motion can be decomposed into transient variation and motion trend, where the latter can be regarded as the accumulation of previous motions. Thus, simultaneously capturing the transient variation and the motion trend is the key to making spacetime-varying motions more predictable. Based on these observations, we propose the MotionRNN framework, which captures the complex variations within motions and adapts to spacetime-varying scenarios. MotionRNN makes two main contributions. First, we design the MotionGRU unit, which models the transient variation and the motion trend in a unified way. Second, we apply MotionGRU to RNN-based predictive models and introduce a new, flexible video prediction architecture with a Motion Highway, which significantly improves the ability to predict changeable motions and avoids motion vanishing in stacked multi-layer predictive models. With high flexibility, this framework can be adapted to a range of models for deterministic spatiotemporal prediction. MotionRNN yields significant improvements on three challenging video prediction benchmarks with spacetime-varying motions.
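To make the decomposition concrete, below is a minimal, hypothetical PyTorch sketch of the core idea: total motion is split into a transient variation predicted from the current state and a motion trend accumulated from past motions. The module name `MotionDecompositionSketch`, the single convolution, and the momentum coefficient `alpha` are illustrative assumptions, not the paper's exact MotionGRU design.

```python
# Illustrative sketch only (NOT the paper's exact MotionGRU):
# motion is decomposed into a transient variation predicted from the
# current hidden state and a motion trend accumulated from past motions.
import torch
import torch.nn as nn


class MotionDecompositionSketch(nn.Module):
    """Hypothetical module: trend = momentum-style accumulation of previous
    motions; transient = per-step variation; total motion = trend + transient."""

    def __init__(self, channels: int, alpha: float = 0.5):
        super().__init__()
        self.alpha = alpha  # assumed momentum coefficient for the trend
        # assumed conv layer that reads the current hidden state and
        # outputs a 2-channel displacement (flow-like) field
        self.to_transient = nn.Conv2d(channels, 2, kernel_size=3, padding=1)

    def forward(self, hidden, prev_motion, prev_trend):
        # transient variation: change specific to the current step
        transient = self.to_transient(hidden)
        # motion trend: running accumulation of previous motions
        trend = self.alpha * prev_trend + (1.0 - self.alpha) * prev_motion
        motion = transient + trend
        return motion, trend


if __name__ == "__main__":
    m = MotionDecompositionSketch(channels=64)
    h = torch.randn(1, 64, 16, 16)           # current hidden state
    prev_motion = torch.zeros(1, 2, 16, 16)   # previous motion field
    prev_trend = torch.zeros(1, 2, 16, 16)    # previous accumulated trend
    motion, trend = m(h, prev_motion, prev_trend)
    print(motion.shape, trend.shape)          # both torch.Size([1, 2, 16, 16])
```

In this reading, the trend term carries slowly varying, long-term movement forward while the transient term adapts to abrupt local changes, which is the intuition behind modeling both jointly.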