In recent years, video action recognition based on 2D convolutional networks has gained wide popularity. However, the performance of existing models is limited by their inability to model long-range non-linear temporal relations and reverse motion information. To address this problem, we introduce a Temporal Transformer Network with Self-supervision (TTSN). TTSN consists mainly of a temporal transformer module and a temporal sequence self-supervision module. The temporal transformer module models non-linear temporal dependencies among non-local frames, which substantially enhances the representation of complex motion features. The temporal sequence self-supervision module adopts a "random batch random channel" strategy to reverse the order of video frames, enabling robust extraction of motion representations from the reversed temporal dimension and improving the generalization ability of the model. Extensive experiments on three widely used datasets (HMDB51, UCF101, and Something-something V1) demonstrate that the proposed TTSN achieves state-of-the-art performance for action recognition.
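To make the "random batch random channel" reversal concrete, the following is a minimal PyTorch sketch under our own assumptions: clips are shaped (B, C, T, H, W), each (sample, channel) slice is independently flipped along the temporal axis with probability p, and the resulting binary mask serves as the target of a temporal-order prediction head. The function name, the probability p, and the per-slice selection scheme are illustrative assumptions, not the paper's exact procedure.

```python
import torch


def random_temporal_reverse(clips: torch.Tensor, p: float = 0.5):
    """Reverse the frame order of randomly chosen (sample, channel) slices.

    clips: (B, C, T, H, W) batch of video clips.
    Returns the partially reversed clips and a (B, C) binary mask marking
    which slices were flipped; the mask can be used as the self-supervision
    target for predicting temporal order.
    """
    b, c = clips.shape[:2]
    # "random batch random channel": draw an independent flip decision
    # for every (sample, channel) pair in the batch.
    flip = torch.rand(b, c, device=clips.device) < p
    out = clips.clone()
    # Boolean indexing with the (B, C) mask selects slices of shape (T, H, W);
    # flipping dim 1 of the gathered (N, T, H, W) block reverses time.
    out[flip] = out[flip].flip(dims=[1])
    return out, flip.long()


# Usage sketch: augment a toy batch and obtain order labels for the
# self-supervision loss.
clips = torch.randn(4, 3, 16, 112, 112)
aug_clips, order_labels = random_temporal_reverse(clips)
```

The design choice here is to keep the augmentation purely index-based (a temporal flip plus a mask), so it adds no learnable parameters and can be applied on the fly during training.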