Understanding the temporal dynamics of video is essential for learning better video representations. Recently, transformer-based architectures have been extensively explored for video tasks due to their capability to capture long-term dependencies in input sequences. However, we found that these Video Transformers are still biased toward learning spatial dynamics rather than temporal ones, and that debiasing this spurious correlation is critical to their performance. Based on these observations, we design simple yet effective self-supervised tasks that help video models learn temporal dynamics better. Specifically, to mitigate the spatial bias, our method learns the temporal order of video frames as extra self-supervision and enforces randomly shuffled frames to produce low-confidence outputs. In addition, our method learns the temporal flow direction of video tokens across consecutive frames to strengthen the correlation with temporal dynamics. On various video action recognition tasks, we demonstrate the effectiveness of our method and its compatibility with state-of-the-art Video Transformers.
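To make the auxiliary objectives concrete, below is a minimal PyTorch sketch of how they could be instantiated: (i) a temporal-order head that predicts each shuffled frame's original index, (ii) a low-confidence (uniformity) loss on the action logits of shuffled clips, and (iii) a flow-direction head that classifies whether a pair of consecutive frame features appears in forward or reversed order. All module names, head architectures, and loss weights here are hypothetical stand-ins under assumed tensor shapes, not the paper's actual implementation.

```python
# Hypothetical sketch of the temporal self-supervision losses; shapes assume
# per-frame features of size (B, T, D) extracted from a Video Transformer.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TemporalSelfSupervision(nn.Module):
    def __init__(self, feat_dim: int, num_frames: int):
        super().__init__()
        # Predicts each frame's original temporal index (num_frames classes).
        self.order_head = nn.Linear(feat_dim, num_frames)
        # Predicts forward vs. reversed order for a pair of frame features.
        self.flow_head = nn.Linear(2 * feat_dim, 2)

    def order_loss(self, shuffled_feats, perm):
        # shuffled_feats: (B, T, D) features of frames shuffled by perm (B, T),
        # where perm[b, i] is the original index of the frame at position i.
        logits = self.order_head(shuffled_feats)  # (B, T, num_frames)
        return F.cross_entropy(logits.flatten(0, 1), perm.flatten())

    @staticmethod
    def low_confidence_loss(action_logits):
        # Push action predictions on shuffled clips toward the uniform
        # distribution: cross-entropy with a uniform target, up to a constant.
        log_p = F.log_softmax(action_logits, dim=-1)
        return -log_p.mean()

    def flow_direction_loss(self, ordered_feats):
        # ordered_feats: (B, T, D) features of frames in their correct order.
        fwd = torch.cat([ordered_feats[:, :-1], ordered_feats[:, 1:]], dim=-1)
        bwd = torch.cat([ordered_feats[:, 1:], ordered_feats[:, :-1]], dim=-1)
        pairs = torch.cat([fwd, bwd], dim=1)  # (B, 2*(T-1), 2D)
        labels = torch.cat([
            torch.zeros(fwd.shape[:2], dtype=torch.long),   # forward pairs
            torch.ones(bwd.shape[:2], dtype=torch.long),    # reversed pairs
        ], dim=1).to(ordered_feats.device)
        logits = self.flow_head(pairs)  # (B, 2*(T-1), 2)
        return F.cross_entropy(logits.flatten(0, 1), labels.flatten())


if __name__ == "__main__":
    B, T, D, C = 2, 8, 768, 400  # illustrative sizes
    aux = TemporalSelfSupervision(feat_dim=D, num_frames=T)
    feats = torch.randn(B, T, D)                  # stand-in frame features
    perm = torch.stack([torch.randperm(T) for _ in range(B)])
    shuffled = torch.gather(feats, 1, perm.unsqueeze(-1).expand(-1, -1, D))
    action_logits = torch.randn(B, C)             # stand-in classifier output
    loss = (aux.order_loss(shuffled, perm)
            + 0.5 * aux.low_confidence_loss(action_logits)
            + 0.5 * aux.flow_direction_loss(feats))
    print(float(loss))
```

In a full training step, these terms would be added to the usual action-classification loss with tuned weights; the 0.5 coefficients above are illustrative only.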