Thanks to rapid advances in deep learning techniques and the wide availability of large-scale training sets, the performance of video saliency detection models has improved steadily and significantly. However, deep learning-based visual-audio fixation prediction is still in its infancy. At present, only a few visual-audio sequences with real fixations recorded in real visual-audio environments are available. Hence, it would be neither efficient nor necessary to recollect real fixations under the same visual-audio circumstances. To address this problem, this paper proposes a novel weakly supervised approach that alleviates the demand for large-scale training sets in visual-audio model training. Using only video category tags, we propose selective class activation mapping (SCAM) and its upgrade, SCAM+. In the spatial-temporal-audio circumstance, the former follows a coarse-to-fine strategy to select the most discriminative regions, which usually exhibit high consistency with real human-eye fixations. The latter equips SCAM with an additional multi-granularity perception mechanism, making the whole process more consistent with that of the real human visual system. Moreover, we distill knowledge from these regions to obtain complete new spatial-temporal-audio (STA) fixation prediction (FP) networks, enabling broad applications in cases where video tags are unavailable. Without resorting to any real human-eye fixations, the performance of these STA FP networks is comparable to that of fully supervised networks. The code and results are publicly available at https://github.com/guotaowang/STANet.
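As a rough illustration of the class-activation-mapping idea that SCAM builds on, the following sketch computes a class-conditioned heatmap from convolutional features and classifier weights. This is not the paper's implementation; the function, array shapes, and toy data are illustrative assumptions only.

```python
import numpy as np

def class_activation_map(features, weights, class_idx):
    """Weight each feature channel by the classifier weight for the
    target class, then sum over channels to localize class evidence."""
    # features: (C, H, W) conv feature maps; weights: (num_classes, C)
    cam = np.tensordot(weights[class_idx], features, axes=([0], [0]))  # (H, W)
    cam = np.maximum(cam, 0.0)       # keep only positive class evidence
    if cam.max() > 0:
        cam = cam / cam.max()        # normalize to [0, 1] for thresholding
    return cam

# Toy example: 4 feature channels, 3 classes, 2x2 spatial resolution.
rng = np.random.default_rng(0)
feats = rng.random((4, 2, 2))
w = rng.random((3, 4))
heat = class_activation_map(feats, w, class_idx=1)
print(heat.shape)
```

Thresholding such a heatmap yields candidate discriminative regions; a coarse-to-fine scheme like SCAM's would then refine the selection across the spatial, temporal, and audio streams.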