Understanding the dynamics of a physical scene involves reasoning about the diverse ways it can potentially change, especially as a result of local interactions. We present the Flow Poke Transformer (FPT), a novel framework for directly predicting the distribution of local motion, conditioned on sparse interactions termed "pokes". Unlike traditional methods that typically only enable dense sampling of a single realization of scene dynamics, FPT provides an interpretable, directly accessible representation of multi-modal scene motion, its dependence on physical interactions, and the inherent uncertainty of scene dynamics. We also evaluate our model on several downstream tasks to enable comparisons with prior methods and to highlight the flexibility of our approach. On dense face motion generation, our generic pre-trained model surpasses specialized baselines. FPT can be fine-tuned on strongly out-of-distribution data, such as synthetic datasets, yielding significant improvements over in-domain methods on articulated object motion estimation. Additionally, because our method predicts explicit motion distributions directly, it achieves competitive performance on tasks such as moving-part segmentation from pokes, further demonstrating the versatility of FPT. Code and models are publicly available at https://compvis.github.io/flow-poke-transformer.
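To make the interface described above concrete, the following is a minimal, illustrative sketch of a poke-conditioned motion-distribution model: given an image feature, sparse pokes (position plus motion vector), and query points, it outputs the parameters of a Gaussian mixture over each query point's 2D motion. All names (`FlowPokeSketch`, the mixture parameterization, the encoder shapes) are assumptions for illustration, not the released FPT architecture or API.

```python
# Hypothetical sketch of a poke-conditioned motion-distribution interface.
# Nothing here reproduces the actual FPT implementation; it only illustrates
# the input/output contract the abstract describes.
import torch
import torch.nn as nn

class FlowPokeSketch(nn.Module):
    """Toy stand-in: maps (image feature, pokes, query points) to a
    K-component 2D Gaussian mixture over each query point's motion."""
    def __init__(self, dim=64, n_components=4):
        super().__init__()
        self.n_components = n_components
        # A poke is (x, y, dx, dy); a query is (x, y).
        self.poke_enc = nn.Linear(4, dim)
        self.query_enc = nn.Linear(2, dim)
        # Per mixture component: 2 means + 2 log-stds + 1 weight logit.
        self.head = nn.Linear(dim, n_components * 5)

    def forward(self, img_feat, pokes, queries):
        # img_feat: (B, dim), pokes: (B, P, 4), queries: (B, Q, 2)
        ctx = img_feat.unsqueeze(1) + self.poke_enc(pokes).mean(dim=1, keepdim=True)
        h = self.query_enc(queries) + ctx                # (B, Q, dim)
        out = self.head(h).view(*h.shape[:2], self.n_components, 5)
        means, log_stds, logits = out[..., :2], out[..., 2:4], out[..., 4]
        weights = logits.softmax(dim=-1)                 # mixture weights per query
        return means, log_stds.exp(), weights

model = FlowPokeSketch()
img_feat = torch.randn(1, 64)                    # e.g. a pooled encoder feature
pokes = torch.tensor([[[0.3, 0.5, 0.1, 0.0]]])   # one poke: position + motion
queries = torch.rand(1, 16, 2)                   # 16 query points
means, stds, weights = model(img_feat, pokes, queries)
print(means.shape, weights.shape)                # (1, 16, 4, 2), (1, 16, 4)
```

An explicit mixture output of this kind is one way to expose the multi-modality and uncertainty the abstract emphasizes: downstream tasks can inspect the component means and weights directly rather than repeatedly sampling dense realizations.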