We address the problem of data augmentation for video action recognition. Standard augmentation strategies in video are hand-designed and sample the space of possible augmented data points either at random, without knowing which augmented points will be better, or through heuristics. We propose to learn what makes a good video for action recognition and to select only high-quality samples for augmentation. In particular, we choose video compositing of a foreground and a background video as the data augmentation process, which yields diverse and realistic new samples. We learn which pairs of videos to augment without having to actually composite them. This reduces the space of possible augmentations, which has two advantages: it saves computational cost and increases the accuracy of the final trained classifier, since the augmented pairs are of higher quality than average. We present experimental results across the entire spectrum of training settings: few-shot, semi-supervised, and fully supervised. We observe consistent improvements over prior work and baselines on Kinetics, UCF101, and HMDB51, and achieve a new state of the art in settings with limited data. We see improvements of up to 8.6% in the semi-supervised setting.
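To make the selection step concrete, the sketch below illustrates the general idea of scoring candidate (foreground, background) pairs from precomputed clip features and compositing only the top-scoring ones. This is a minimal illustration under our own assumptions, not the paper's actual architecture: the bilinear scorer, the random features standing in for a pretrained video encoder, and the names score_pairs and clip_features are all hypothetical.

import numpy as np

rng = np.random.default_rng(0)

def clip_features(num_clips, dim=128):
    """Stand-in for clip features from a pretrained video encoder
    (random here; real features would come from the encoder)."""
    return rng.normal(size=(num_clips, dim))

def score_pairs(fg_feats, bg_feats, w):
    """Predict augmentation quality for every (fg, bg) pair without
    compositing: a simple bilinear score as a stand-in for a learned
    pair-selection network."""
    return fg_feats @ w @ bg_feats.T  # shape: (num_fg, num_bg)

num_fg, num_bg, dim, k = 20, 30, 128, 10
fg = clip_features(num_fg, dim)
bg = clip_features(num_bg, dim)
w = rng.normal(size=(dim, dim)) / np.sqrt(dim)  # selector weights (would be learned)

scores = score_pairs(fg, bg, w)
# Keep only the k highest-scoring pairs; only these would be composited,
# shrinking the augmentation space and its computational cost.
flat = np.argsort(scores, axis=None)[::-1][:k]
top_pairs = [np.unravel_index(i, scores.shape) for i in flat]
print(top_pairs)  # indices of (foreground, background) pairs to composite

The point of the sketch is the ordering of operations: scoring happens in feature space before any expensive compositing, so the classifier is only ever trained on the selected high-quality pairs.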