We tackle the problem of predicting saliency maps for videos of dynamic scenes. We note that the accuracy of the maps reconstructed from the gaze data of a fixed number of observers varies with the frame, as it depends on the content of the scene. This issue is particularly pressing when a limited number of observers are available. In such cases, directly minimizing the discrepancy between the predicted and measured saliency maps, as traditional deep-learning methods do, results in overfitting to the noisy data. We propose a noise-aware training (NAT) paradigm that quantifies and accounts for the uncertainty arising from frame-specific gaze data inaccuracy. Through experiments across different models, loss functions, and datasets, we show that NAT is especially advantageous when limited training data is available. We also introduce a video-game-based saliency dataset with rich temporal semantics and multiple gaze attractors per frame. The dataset and source code are available at https://github.com/NVlabs/NAT-saliency.
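As a rough illustration only, and not the paper's exact formulation, the PyTorch sketch below shows one way a noise-aware loss could avoid penalizing a model once its per-frame error falls within the level explainable by gaze-measurement noise. Here `noise_floor` is a hypothetical per-frame estimate of the discrepancy attributable to the limited number of observers, assumed to be computed elsewhere; for the actual method, see the source code linked above.

```python
import torch

def noise_aware_loss(pred, measured, noise_floor):
    """Conceptual noise-aware loss sketch (hypothetical, not the paper's exact form).

    pred, measured: (B, 1, H, W) predicted and measured saliency maps.
    noise_floor:    (B,) hypothetical per-frame estimate of the discrepancy
                    attributable to the limited number of observers.
    """
    # Per-frame mean squared error between predicted and measured maps.
    per_frame = ((pred - measured) ** 2).mean(dim=(1, 2, 3))
    # Do not push the loss below each frame's estimated noise floor, so that
    # frames with unreliable gaze data cannot drive overfitting to noise.
    return torch.clamp(per_frame - noise_floor, min=0.0).mean()
```

The design intuition is that a frame's measured saliency map is itself an estimate; once the prediction is as close to it as the gaze noise allows, further optimization on that frame only fits the noise.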