3D convolutional neural networks have achieved promising results for video tasks in computer vision, including video saliency prediction, which is explored in this paper. However, a 3D convolution encodes visual representations only within a fixed local space-time window determined by its kernel size, whereas human attention is drawn to related visual features across different times of a video. To overcome this limitation, we propose a novel Spatio-Temporal Self-Attention 3D Network (STSANet) for video saliency prediction, in which multiple Spatio-Temporal Self-Attention (STSA) modules are employed at different levels of the 3D convolutional backbone to directly capture long-range relations between spatio-temporal features at different time steps. In addition, we propose an Attentional Multi-Scale Fusion (AMSF) module that integrates multi-level features while perceiving context in semantic and spatio-temporal subspaces. Extensive experiments demonstrate the contributions of the key components of our method, and results on the DHF1K, Hollywood-2, UCF, and DIEM benchmark datasets clearly show the superiority of the proposed model over state-of-the-art models.
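To illustrate the core idea of attending across spatio-temporal features from different time steps, the following is a minimal sketch of a self-attention block operating on 3D backbone features. It is not the STSA module specified in the paper body; the block name `STSABlock`, the layer normalization, the number of heads, and the residual connection are illustrative assumptions.

```python
# Hypothetical sketch: self-attention over flattened spatio-temporal tokens
# of a 3D feature map, capturing long-range relations across time steps.
import torch
import torch.nn as nn


class STSABlock(nn.Module):
    """Illustrative spatio-temporal self-attention over (B, C, T, H, W) features."""

    def __init__(self, channels: int, num_heads: int = 4):
        super().__init__()
        self.norm = nn.LayerNorm(channels)
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: features from one level of a 3D convolutional backbone.
        b, c, t, h, w = x.shape
        tokens = x.flatten(2).transpose(1, 2)        # (B, T*H*W, C) tokens
        q = self.norm(tokens)
        attended, _ = self.attn(q, q, q)             # relations across all time steps
        out = tokens + attended                      # residual connection (assumed)
        return out.transpose(1, 2).reshape(b, c, t, h, w)


if __name__ == "__main__":
    feats = torch.randn(2, 64, 8, 14, 14)            # toy backbone features
    print(STSABlock(64)(feats).shape)                # torch.Size([2, 64, 8, 14, 14])
```

In contrast to a 3D convolution, whose receptive field is bounded by its kernel size, every output token here can aggregate information from any spatio-temporal location in the clip, which is the limitation the STSA modules are designed to address.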