Fine-grained action recognition is a challenging task in computer vision. Because fine-grained datasets exhibit small inter-class variations in both the spatial and temporal dimensions, a fine-grained action recognition model requires strong temporal reasoning and the ability to discriminate attribute-level action semantics. Leveraging the CNN's ability to capture high-level spatio-temporal feature representations and the Transformer's efficiency in modeling latent semantics and global dependencies, we investigate two frameworks that combine a CNN vision backbone with a Transformer encoder to enhance fine-grained action recognition: 1) a vision-based encoder that learns latent temporal semantics, and 2) a multi-modal video-text cross encoder that exploits additional text input and learns the cross-modal association between visual and textual semantics. Our experimental results show that both Transformer encoder frameworks effectively learn latent temporal semantics and cross-modality associations, improving recognition performance over the CNN vision model. Both proposed architectures achieve new state-of-the-art performance on the FineGym benchmark dataset.
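To make the first (vision-based) framework concrete, the following is a minimal PyTorch sketch of the general idea: a CNN backbone extracts per-frame features, and a Transformer encoder models temporal dependencies across frames before classification. The ResNet-50 backbone, layer counts, and other hyperparameters are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn
import torchvision.models as models

class CNNTransformerClassifier(nn.Module):
    """Sketch of a CNN backbone + Transformer encoder for clip classification.
    All architectural choices here are assumptions for illustration only."""

    def __init__(self, num_classes, d_model=512, num_layers=4, num_heads=8):
        super().__init__()
        backbone = models.resnet50(weights=None)
        backbone.fc = nn.Identity()           # keep the 2048-d pooled features
        self.backbone = backbone
        self.proj = nn.Linear(2048, d_model)  # project to the Transformer width
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=num_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=num_layers)
        self.cls_head = nn.Linear(d_model, num_classes)

    def forward(self, clip):
        # clip: (batch, num_frames, 3, H, W)
        b, t, c, h, w = clip.shape
        frames = clip.view(b * t, c, h, w)
        feats = self.backbone(frames)             # (b*t, 2048) per-frame features
        feats = self.proj(feats).view(b, t, -1)   # (b, t, d_model) sequence
        feats = self.encoder(feats)               # temporal self-attention
        return self.cls_head(feats.mean(dim=1))   # average over time, classify


# Usage sketch on a dummy 8-frame clip with 99 action classes
model = CNNTransformerClassifier(num_classes=99)
logits = model(torch.randn(2, 8, 3, 224, 224))
print(logits.shape)  # torch.Size([2, 99])
```

The multi-modal cross encoder described in the abstract would additionally feed text tokens alongside the visual tokens so the encoder can attend across modalities; that variant is not shown here.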