To synthesize a realistic action sequence from a single human image, it is crucial to model both the motion patterns and the diversity in the action video. This paper proposes an Action Conditional Temporal Variational AutoEncoder (ACT-VAE) to improve motion prediction accuracy and capture movement diversity. ACT-VAE predicts pose sequences for an action clip from a single input image. It is implemented as a deep generative model that maintains temporal coherence according to the action category, through novel temporal modeling on the latent space. Moreover, ACT-VAE is a general framework for action sequence prediction: when connected with a plug-and-play Pose-to-Image (P2I) network, it can synthesize image sequences. Extensive experiments demonstrate that our approach predicts accurate poses and synthesizes realistic image sequences, surpassing state-of-the-art approaches. Compared to existing methods, ACT-VAE improves accuracy while preserving diversity.
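To make the architecture concrete, below is a minimal sketch (not the authors' released code) of one action-conditional temporal VAE step in PyTorch. It assumes poses are flattened joint-coordinate vectors and action labels are one-hot; all class, layer, and dimension names (`ACTVAESketch`, `pose_dim`, `z_dim`, etc.) are illustrative choices, not taken from the paper. The key idea it demonstrates is the temporal modeling on the latent space: the prior over the latent z_t is conditioned on the previous latent and the action category, so sampling from it at test time yields temporally coherent yet diverse pose sequences.

```python
# Hypothetical sketch of an ACT-VAE-style timestep; names and sizes are assumptions.
import torch
import torch.nn as nn

class ACTVAESketch(nn.Module):
    def __init__(self, pose_dim=34, action_dim=10, z_dim=32, h_dim=128):
        super().__init__()
        # Prior over z_t conditioned on the previous latent and the action
        # label: this conditioning is the temporal modeling on latent space.
        self.prior = nn.Sequential(
            nn.Linear(z_dim + action_dim, h_dim), nn.ReLU(),
            nn.Linear(h_dim, 2 * z_dim))
        # Approximate posterior additionally sees the ground-truth pose x_t
        # (used only during training).
        self.posterior = nn.Sequential(
            nn.Linear(pose_dim + z_dim + action_dim, h_dim), nn.ReLU(),
            nn.Linear(h_dim, 2 * z_dim))
        # Decoder maps the current latent (plus action label) to a pose.
        self.decoder = nn.Sequential(
            nn.Linear(z_dim + action_dim, h_dim), nn.ReLU(),
            nn.Linear(h_dim, pose_dim))

    @staticmethod
    def reparameterize(stats):
        # Split into mean / log-variance and apply the reparameterization trick.
        mu, logvar = stats.chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
        return z

    def step(self, z_prev, action, x_t=None):
        """One timestep: posterior when x_t is given (training), prior otherwise."""
        prior_stats = self.prior(torch.cat([z_prev, action], dim=-1))
        if x_t is not None:
            stats = self.posterior(torch.cat([x_t, z_prev, action], dim=-1))
        else:
            stats = prior_stats
        z_t = self.reparameterize(stats)
        pose_t = self.decoder(torch.cat([z_t, action], dim=-1))
        return pose_t, z_t

# Test-time rollout: sample from the action-conditional prior at every step.
model = ACTVAESketch()
action = torch.zeros(1, 10); action[0, 3] = 1.0   # one-hot action category
z = torch.zeros(1, 32)                            # initial latent state
poses = []
for _ in range(16):                               # 16-frame pose sequence
    pose, z = model.step(z, action)
    poses.append(pose)
```

Under this reading, diversity comes from resampling the latent rollout (different noise draws give different plausible sequences for the same action), while coherence comes from chaining each latent on its predecessor; a P2I network would then map each predicted pose, together with the input image, to an output frame.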