Shadow detection in a single image has received significant research interest in recent years. However, much less work has explored shadow detection in dynamic scenes. The bottleneck is the lack of a well-established dataset with high-quality annotations for video shadow detection. In this work, we collect a new video shadow detection dataset (ViSha), which contains 120 videos with 11,685 frames, covering 60 object categories, varying lengths, and different motion/lighting conditions. All frames are annotated with high-quality pixel-level shadow masks. To the best of our knowledge, this is the first learning-oriented dataset for video shadow detection. Furthermore, we develop a new baseline model, named the triple-cooperative video shadow detection network (TVSD-Net). It utilizes three parallel networks in a cooperative manner to learn discriminative representations at both intra-video and inter-video levels. Within the network, a dual gated co-attention module is proposed to constrain features from neighboring frames in the same video, while an auxiliary similarity loss is introduced to mine semantic information across different videos. Finally, we conduct a comprehensive study on ViSha, evaluating 12 state-of-the-art models (including single-image shadow detectors, video object segmentation methods, and saliency detection methods). Experiments demonstrate that our model outperforms state-of-the-art competitors.
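Since the abstract only names the two key components, the following minimal PyTorch sketch illustrates what a gated co-attention exchange between the feature maps of two neighboring frames and a triplet-style auxiliary similarity loss could look like. The module name, tensor shapes, gating form, and loss formulation are assumptions made for illustration; this is not the authors' implementation of TVSD-Net.

```python
# Hypothetical sketch of (1) gated co-attention between two frame feature maps
# and (2) a triplet-style surrogate for the auxiliary similarity loss.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DualGatedCoAttention(nn.Module):
    """Exchange information between two frame feature maps via an affinity-based
    co-attention, then gate how much of the attended features flows back."""
    def __init__(self, channels: int):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.key = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.gate_a = nn.Conv2d(channels, 1, kernel_size=1)  # gate for frame A
        self.gate_b = nn.Conv2d(channels, 1, kernel_size=1)  # gate for frame B

    def forward(self, feat_a: torch.Tensor, feat_b: torch.Tensor):
        b, c, h, w = feat_a.shape
        q = self.query(feat_a).flatten(2)               # B x C' x HWa
        k = self.key(feat_b).flatten(2)                 # B x C' x HWb
        affinity = torch.bmm(q.transpose(1, 2), k)      # B x HWa x HWb
        # Attend frame B features onto frame A locations, and vice versa.
        b_to_a = torch.bmm(feat_b.flatten(2),
                           F.softmax(affinity, dim=2).transpose(1, 2)).view(b, c, h, w)
        a_to_b = torch.bmm(feat_a.flatten(2),
                           F.softmax(affinity, dim=1)).view(b, c, h, w)
        # Sigmoid gates decide how strongly the co-attended features are mixed in.
        out_a = feat_a + torch.sigmoid(self.gate_a(feat_a)) * b_to_a
        out_b = feat_b + torch.sigmoid(self.gate_b(feat_b)) * a_to_b
        return out_a, out_b


def auxiliary_similarity_loss(emb_anchor: torch.Tensor,
                              emb_same_video: torch.Tensor,
                              emb_other_video: torch.Tensor,
                              margin: float = 1.0) -> torch.Tensor:
    """Triplet-style surrogate (assumed form): pull embeddings of frames from the
    same video together, push embeddings from a different video at least `margin` apart."""
    pos = F.pairwise_distance(emb_anchor, emb_same_video)
    neg = F.pairwise_distance(emb_anchor, emb_other_video)
    return F.relu(pos - neg + margin).mean()


if __name__ == "__main__":
    # Toy shapes only: two neighboring-frame feature maps and three frame embeddings.
    coattn = DualGatedCoAttention(channels=64)
    fa, fb = torch.randn(2, 64, 32, 32), torch.randn(2, 64, 32, 32)
    oa, ob = coattn(fa, fb)
    loss = auxiliary_similarity_loss(torch.randn(4, 128), torch.randn(4, 128), torch.randn(4, 128))
    print(oa.shape, ob.shape, loss.item())
```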