Human mesh recovery (HMR) provides rich human body information for various real-world applications such as gaming, human-computer interaction, and virtual reality. Compared to single-image-based methods, video-based methods can exploit temporal information and incorporate human body motion priors to further improve performance. However, many-to-many approaches such as VIBE suffer from poor motion smoothness and temporal inconsistency, while many-to-one approaches such as TCMR and MPS-Net rely on future frames, which makes them non-causal and time-inefficient during inference. To address these challenges, a novel Diffusion-Driven Transformer-based framework (DDT) for video-based HMR is presented. DDT is designed to decode specific motion patterns from the input sequence, enhancing motion smoothness and temporal consistency. As a many-to-many approach, the decoder of DDT outputs the human meshes of all frames, making DDT more viable for real-world applications where time efficiency is crucial and a causal model is desired. Extensive experiments on the widely used datasets (Human3.6M, MPI-INF-3DHP, and 3DPW) demonstrate the effectiveness and efficiency of DDT.
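To make the many-to-many formulation concrete, below is a minimal, hypothetical PyTorch sketch of a decoder that predicts per-frame SMPL-style parameters for every frame of an input window, in contrast to many-to-one methods that only output the mesh of a single (usually middle) frame. All names, dimensions, and the plain transformer encoder used here are illustrative assumptions; this is not the DDT architecture (which additionally uses a diffusion-driven decoding process) and omits the image backbone and SMPL regression details.

```python
import torch
import torch.nn as nn

# Hypothetical many-to-many temporal decoder sketch (not the actual DDT model).
# It consumes a window of T per-frame features and predicts mesh parameters
# for every frame in the window, rather than only one target frame.
class ManyToManyMeshDecoder(nn.Module):
    def __init__(self, feat_dim=2048, d_model=512, n_heads=8, n_layers=3, smpl_dim=85):
        super().__init__()
        self.proj = nn.Linear(feat_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.temporal = nn.TransformerEncoder(layer, n_layers)
        # Per-frame head: pose + shape + camera parameters (85 dims is a common SMPL split).
        self.head = nn.Linear(d_model, smpl_dim)

    def forward(self, feats):      # feats: (B, T, feat_dim) from a per-frame image backbone
        x = self.proj(feats)       # (B, T, d_model)
        x = self.temporal(x)       # temporal mixing across the window
        return self.head(x)        # (B, T, smpl_dim): one set of mesh parameters per frame

feats = torch.randn(2, 16, 2048)             # assumed backbone features for a 16-frame window
params = ManyToManyMeshDecoder()(feats)       # shape (2, 16, 85)
```

Because the model emits outputs for all frames of the window at once and does not need to wait for future frames beyond the current window, it can be made causal (e.g. with a causal attention mask) and avoids the per-frame redundant computation of many-to-one inference.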