One significant factor we expect video representation learning to capture, especially in contrast to image representation learning, is object motion. However, we find that in current mainstream video datasets, some action categories are highly correlated with the scene in which the action happens, so the model tends to degenerate to a solution that encodes only the scene information. For example, a trained model may label a video as playing football simply because it sees the field, neglecting that the subject is actually dancing as a cheerleader on that field. This runs against the original goal of video representation learning and introduces a scene bias across datasets that cannot be ignored. To tackle this problem, we propose to decouple the scene and the motion (DSM) with two simple operations, so that the model pays more attention to the motion information. Specifically, we construct a positive clip and a negative clip for each video. Compared to the original video, the positive clip keeps the motion untouched but breaks the scene via Spatial Local Disturbance, while the negative clip keeps the scene untouched but breaks the motion via Temporal Local Disturbance. Our objective is to pull the positive clip closer to the original clip in the latent space while pushing the negative clip farther away. In this way, the impact of the scene is weakened while the temporal sensitivity of the network is further enhanced. We conduct experiments on two tasks with various backbones and different pre-training datasets, and find that our method surpasses state-of-the-art methods with remarkable improvements of 8.1% and 8.8% on the action recognition task on the UCF101 and HMDB51 datasets, respectively, using the same backbone.
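The objective described above, pulling the positive clip toward the original clip while pushing the negative clip away in latent space, can be illustrated with a triplet-style loss. The sketch below is a minimal, hypothetical PyTorch implementation under assumed choices (cosine distance on L2-normalized embeddings, a margin of 0.5, the function name `dsm_triplet_loss`); it is not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def dsm_triplet_loss(anchor, positive, negative, margin=0.5):
    """Illustrative triplet-style objective (assumed form): pull the
    motion-preserving, scene-broken positive toward the original clip
    embedding and push the motion-broken, scene-preserving negative away."""
    # Cosine distance between L2-normalized clip embeddings.
    anchor = F.normalize(anchor, dim=-1)
    positive = F.normalize(positive, dim=-1)
    negative = F.normalize(negative, dim=-1)
    d_pos = 1.0 - (anchor * positive).sum(dim=-1)
    d_neg = 1.0 - (anchor * negative).sum(dim=-1)
    # Hinge: positive must be closer than negative by at least the margin.
    return F.relu(d_pos - d_neg + margin).mean()

# Usage sketch: embeddings would come from a video backbone applied to the
# original clip, the spatially disturbed positive, and the temporally
# disturbed negative (random tensors used here only to show the call).
loss = dsm_triplet_loss(torch.randn(8, 128), torch.randn(8, 128), torch.randn(8, 128))
```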