Recently, convolutional neural networks with 3D kernels (3D CNNs) have become very popular in the computer vision community owing to their superior ability to extract spatio-temporal features from video frames compared to 2D CNNs. Although there have been great advances recently in building resource-efficient 2D CNN architectures under memory and power budgets, hardly any similar resource-efficient architectures exist for 3D CNNs. In this paper, we convert various well-known resource-efficient 2D CNNs to 3D CNNs and evaluate their classification accuracy on three major benchmarks at different complexity levels. We experiment on (1) the Kinetics-600 dataset to inspect their capacity to learn, (2) the Jester dataset to inspect their ability to capture motion patterns, and (3) UCF-101 to inspect the applicability of transfer learning. We also evaluate the run-time performance of each model on a single Titan XP GPU and a Jetson TX2 embedded system. The results of this study show that these models can be utilized in different types of real-world applications, since they provide real-time performance with considerable accuracy and modest memory usage. Our analysis across complexity levels shows that resource-efficient 3D CNNs should not be designed too shallow or too narrow in order to save complexity. The code and pretrained models used in this work are publicly available.
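The abstract does not spell out how a 2D architecture is turned into a 3D one. One widely used approach (I3D-style kernel inflation, not necessarily the exact procedure used in this paper) replicates each spatial kernel along a new temporal axis and rescales it so that a video of identical frames produces the same activations as the original 2D kernel. A minimal sketch in plain Python, with an illustrative function name and list-based kernels:

```python
def inflate_kernel_2d_to_3d(kernel_2d, depth):
    """Replicate a 2D kernel `depth` times along a new temporal axis,
    scaling each copy by 1/depth so that convolving a stack of
    identical frames reproduces the 2D kernel's response."""
    scale = 1.0 / depth
    # Scale every spatial weight by 1/depth.
    slice_2d = [[w * scale for w in row] for row in kernel_2d]
    # Copy the scaled slice once per temporal position.
    return [[row[:] for row in slice_2d] for _ in range(depth)]

# Example: a 2x2 spatial kernel inflated to temporal depth 2.
k3d = inflate_kernel_2d_to_3d([[1.0, 2.0], [3.0, 4.0]], depth=2)
```

Because each temporal slice carries 1/depth of the original weight, the total weight mass is preserved, which is what lets the inflated 3D network start from the 2D network's pretrained parameters.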