Predicting saliency in videos is a challenging problem, as it requires modeling complex interactions between spatial and temporal information, especially given the ever-changing, dynamic nature of video content. Recently, researchers have proposed large-scale datasets and deep learning models to understand what drives visual attention in videos. These approaches, however, learn to combine spatial and temporal features in a static manner and do not adapt to changes in the video content. In this paper, we introduce the Gated Fusion Network for dynamic saliency (GFSalNet), the first deep saliency model capable of making predictions in a dynamic way via a gated fusion mechanism. Moreover, our model exploits spatial and channel-wise attention within a multi-scale architecture, which further enables highly accurate predictions. We evaluate the proposed approach on a number of datasets, and our experimental analysis demonstrates that it outperforms or is highly competitive with the state of the art. Importantly, we show that it generalizes well and, moreover, exploits temporal information more effectively via its adaptive fusion scheme.
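To make the contrast with static fusion concrete, the sketch below shows one common form of gated fusion in PyTorch: a learned gate produces per-location weights that blend spatial and temporal feature streams, so the mixture adapts to the content of each frame rather than being fixed. The module name `GatedFusion`, the 1x1-convolution gate, and all tensor shapes are illustrative assumptions, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    """Minimal sketch of a gated fusion block (illustrative, not the
    paper's GFSalNet): a learned gate decides, per spatial location,
    how much to weigh spatial vs. temporal features."""

    def __init__(self, channels: int):
        super().__init__()
        # The gate is computed from the concatenated streams;
        # the sigmoid keeps its values in (0, 1).
        self.gate = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, spatial_feat: torch.Tensor,
                temporal_feat: torch.Tensor) -> torch.Tensor:
        g = self.gate(torch.cat([spatial_feat, temporal_feat], dim=1))
        # Convex combination: the blend adapts to the input content,
        # unlike a fixed (static) weighting of the two streams.
        return g * spatial_feat + (1 - g) * temporal_feat

# Usage: fuse two hypothetical 64-channel feature maps from one frame.
fusion = GatedFusion(channels=64)
spatial = torch.randn(1, 64, 32, 32)   # e.g., appearance features
temporal = torch.randn(1, 64, 32, 32)  # e.g., motion features
fused = fusion(spatial, temporal)      # same shape as the inputs
```

Because the gate is data-dependent, scenes dominated by motion can lean on the temporal stream while static scenes fall back on appearance, which is the adaptivity the abstract contrasts with static fusion.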