In recent years, a number of approaches based on 2D CNNs and 3D CNNs have emerged for video action recognition, achieving state-of-the-art results on several large-scale benchmark datasets. In this paper, we carry out an in-depth comparative analysis to better understand the differences between these approaches and the progress they have made. To this end, we develop a unified framework for both 2D-CNN and 3D-CNN action models, which enables us to remove bells and whistles and provides a common ground for fair comparison. We then conduct a large-scale analysis involving over 300 action recognition models. Our comprehensive analysis reveals that a) a significant leap has been made in efficiency for action recognition, but not in accuracy; and b) 2D-CNN and 3D-CNN models behave similarly in terms of spatio-temporal representation ability and transferability. Our analysis also shows that recent action models seem able to learn data-dependent temporality flexibly as needed. Our code and models are available at https://github.com/IBM/action-recognition-pytorch.