We present LARNet, a novel end-to-end approach for generating human action videos. Jointly modeling appearance and dynamics to synthesize a video is very challenging, so recent works in video synthesis have proposed to decompose these two factors. However, these methods require a driving video to model the video dynamics. In this work, we instead propose a generative approach that explicitly learns action dynamics in latent space, avoiding the need for a driving video during inference. The generated action dynamics are integrated with the appearance through a recurrent hierarchical structure that induces motion at multiple scales, capturing both coarse and fine-level action details. In addition, we propose a novel mix-adversarial loss function aimed at improving the temporal coherence of synthesized videos. We evaluate the proposed approach on four real-world human action datasets, demonstrating its effectiveness in generating human actions. The code and models will be made publicly available.
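To make the decomposition concrete, below is a minimal PyTorch sketch of the overall idea described above: appearance is encoded from a single conditioning image, action dynamics are generated directly in latent space (so no driving video is needed at inference), and the two are fused at multiple scales by a recurrent decoder. This is not the authors' implementation; all module names, layer sizes, and the GRU-based fusion are illustrative assumptions.

```python
# Conceptual sketch (not the LARNet code): appearance/dynamics decomposition
# with latent motion generation and two-scale fusion. All names and sizes
# below are assumptions for illustration only.
import torch
import torch.nn as nn

class AppearanceEncoder(nn.Module):
    """Encodes a single conditioning frame into multi-scale appearance features."""
    def __init__(self, ch=64):
        super().__init__()
        self.c1 = nn.Conv2d(3, ch, 4, stride=2, padding=1)       # 64x64 -> 32x32
        self.c2 = nn.Conv2d(ch, ch * 2, 4, stride=2, padding=1)  # 32x32 -> 16x16

    def forward(self, img):
        f1 = torch.relu(self.c1(img))
        f2 = torch.relu(self.c2(f1))
        return f1, f2  # fine-scale and coarse-scale appearance features

class LatentDynamics(nn.Module):
    """Generates a sequence of latent motion codes from noise and an action label,
    replacing the driving video used by prior decomposition methods."""
    def __init__(self, n_actions, z_dim=128, steps=16):
        super().__init__()
        self.steps = steps
        self.embed = nn.Embedding(n_actions, z_dim)
        self.rnn = nn.GRU(z_dim, z_dim, batch_first=True)

    def forward(self, action, noise):
        # noise: (B, z_dim); condition on the action embedding, unroll over time
        z = (noise + self.embed(action)).unsqueeze(1).repeat(1, self.steps, 1)
        motion, _ = self.rnn(z)  # (B, T, z_dim) latent action dynamics
        return motion

class HierarchicalDecoder(nn.Module):
    """Injects each motion code into the appearance features at two scales and
    decodes every time step into a frame (a stand-in for the recurrent
    hierarchical integration described in the abstract)."""
    def __init__(self, ch=64, z_dim=128):
        super().__init__()
        self.to_coarse = nn.Linear(z_dim, ch * 2)
        self.to_fine = nn.Linear(z_dim, ch)
        self.up = nn.ConvTranspose2d(ch * 2, ch, 4, stride=2, padding=1)
        self.out = nn.ConvTranspose2d(ch, 3, 4, stride=2, padding=1)

    def forward(self, f1, f2, motion):
        frames = []
        for t in range(motion.size(1)):
            m = motion[:, t]
            coarse = f2 + self.to_coarse(m)[:, :, None, None]  # coarse-scale motion
            x = torch.relu(self.up(coarse))
            x = x + f1 + self.to_fine(m)[:, :, None, None]     # fine-scale motion
            frames.append(torch.tanh(self.out(x)))
        return torch.stack(frames, dim=1)  # (B, T, 3, H, W)

# Usage: synthesize a 16-frame clip from one image, an action id, and noise.
enc, dyn, dec = AppearanceEncoder(), LatentDynamics(n_actions=10), HierarchicalDecoder()
img = torch.randn(2, 3, 64, 64)
video = dec(*enc(img), dyn(torch.tensor([3, 7]), torch.randn(2, 128)))
print(video.shape)  # torch.Size([2, 16, 3, 64, 64])
```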