Although synthetic training data has been shown to benefit tasks such as human pose estimation, its use for RGB human action recognition remains relatively unexplored. Our goal in this work is to answer the question of whether synthetic humans can improve the performance of human action recognition, with a particular focus on generalization to unseen viewpoints. We make use of recent advances in monocular 3D human body reconstruction from real action sequences to automatically render synthetic training videos for the action labels. We make the following contributions: (i) we investigate the extent of variations and augmentations that are beneficial for improving performance at new viewpoints. We consider changes in body shape and clothing for individuals, as well as more action-relevant augmentations such as non-uniform frame sampling and interpolating between the motions of individuals performing the same action; (ii) we introduce a new data generation methodology, SURREACT, that allows training spatio-temporal CNNs for action classification; (iii) we substantially improve the state-of-the-art action recognition performance on the NTU RGB+D and UESTC standard multi-view human action benchmarks; finally, (iv) we extend the augmentation approach to in-the-wild videos from a subset of the Kinetics dataset to investigate the case when only one-shot training data is available, and demonstrate improvements in this case as well.
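The non-uniform frame sampling mentioned in contribution (i) can be sketched as below. This is a minimal illustration, not the paper's exact scheme: the helper `sample_frames` is a hypothetical name, and we assume a common segment-based strategy in which the video is split into equal segments and one random frame is drawn per segment, so sampled clips vary locally in playback speed.

```python
import numpy as np

def sample_frames(num_frames, clip_len, rng=None):
    """Non-uniformly sample `clip_len` frame indices from a video of
    `num_frames` frames by jittering within equal-sized segments.
    (Hypothetical helper; SURREACT's actual sampling may differ.)"""
    rng = np.random.default_rng() if rng is None else rng
    # Split the timeline into clip_len equal segments and pick one
    # random frame index from each segment.
    edges = np.linspace(0, num_frames, clip_len + 1)
    idx = []
    for i in range(clip_len):
        lo = int(edges[i])
        hi = max(lo + 1, int(edges[i + 1]))  # guard against empty segments
        idx.append(int(rng.integers(lo, hi)))
    return np.clip(idx, 0, num_frames - 1).tolist()

# Example: draw a 16-frame clip from a 100-frame video.
print(sample_frames(100, 16))
```

Because each index stays inside its own segment, the sampled indices remain temporally ordered while the gaps between consecutive frames vary, which is the source of the speed-jitter augmentation.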