How would a static scene react to a local poke? What are the effects on other parts of an object if you could locally push it? There will be distinctive movement, despite evident variations caused by the stochastic nature of our world. These outcomes are governed by the characteristic kinematics of objects, which dictate their overall motion in response to a local interaction. Conversely, the movement of an object provides crucial information about its underlying distinctive kinematics and the interdependencies between its parts. This two-way relation motivates learning a bijective mapping between object kinematics and plausible future image sequences. Therefore, we propose iPOKE - invertible Prediction of Object Kinematics - which, conditioned on an initial frame and a local poke, allows sampling of object kinematics and establishes a one-to-one correspondence with the corresponding plausible videos, thereby providing controlled stochastic video synthesis. In contrast to previous works, we do not generate arbitrary realistic videos; rather, we provide efficient control of movements while still capturing the stochastic nature of our environment and the diversity of plausible outcomes it entails. Moreover, our approach can transfer kinematics onto novel object instances and is not confined to particular object classes. The project page is available at https://bit.ly/3dJN4Lf