Audio-Visual Video Parsing is the task of predicting, for each modality, the events that occur in each segment of a video. It is typically performed in a weakly supervised manner, where only video-level event labels are provided, i.e., the modalities and the timestamps of the labels are unknown. Due to the lack of densely annotated labels, recent work attempts to leverage pseudo labels to enrich the supervision. A commonly used strategy is to generate pseudo labels by categorizing the known event labels for each modality. However, such labels are still limited to the video level, and the temporal boundaries of events remain unlabeled. In this paper, we propose a new pseudo label generation strategy that explicitly assigns labels to each video segment by utilizing prior knowledge learned from the open world. Specifically, we exploit the CLIP model to estimate the events in each video segment based on the visual modality, producing segment-level pseudo labels. A new loss function is proposed to regularize these labels by taking into account their category-richness and segment-richness. A label denoising strategy is adopted to improve the pseudo labels by flipping them whenever the forward binary cross-entropy loss is high. We perform extensive experiments on the LLP dataset and demonstrate that our method generates high-quality segment-level pseudo labels with the help of the proposed loss and the label denoising strategy. Our method achieves state-of-the-art audio-visual video parsing performance.
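To make the CLIP-based labeling step concrete, the following is a minimal sketch of segment-level pseudo label generation, assuming the OpenAI CLIP package (github.com/openai/CLIP) and one representative frame per segment. The prompt template, the restriction of candidates to the video-level labels, the function name `segment_pseudo_labels`, and the 0.5 threshold are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch: score each segment's frame against the video's candidate
# event classes with CLIP, then threshold to obtain binary pseudo labels.
import torch
import clip  # OpenAI CLIP package: https://github.com/openai/CLIP

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

def segment_pseudo_labels(frames, candidate_events, threshold=0.5):
    """frames: list of PIL images, one per video segment.
    candidate_events: the video-level event labels of this video.
    Returns a (num_segments, num_candidates) binary pseudo-label matrix."""
    text = clip.tokenize([f"a photo of {e}" for e in candidate_events]).to(device)
    images = torch.stack([preprocess(f) for f in frames]).to(device)
    with torch.no_grad():
        img_feat = model.encode_image(images)
        txt_feat = model.encode_text(text)
        img_feat = img_feat / img_feat.norm(dim=-1, keepdim=True)
        txt_feat = txt_feat / txt_feat.norm(dim=-1, keepdim=True)
        # Per-segment probability distribution over the candidate events only.
        probs = (100.0 * img_feat @ txt_feat.T).softmax(dim=-1)
    return (probs > threshold).float()
```

Restricting the text prompts to the video-level labels keeps the segment-level pseudo labels consistent with the weak supervision; the paper's actual scoring and thresholding scheme may differ.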
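The label denoising step can be sketched similarly: pseudo label entries that incur an unusually high forward binary cross-entropy loss under the model's current predictions are flipped. The flip ratio `k` and the per-element top-k selection rule below are assumptions for illustration, not the paper's exact procedure.

```python
# Hedged sketch of label denoising: flip the pseudo-label entries with the
# highest forward BCE loss, on the intuition that they are likely noisy.
import torch
import torch.nn.functional as F

def denoise_pseudo_labels(pred, pseudo, k=0.1):
    """pred, pseudo: (num_segments, num_classes) tensors; pred in [0, 1].
    Flips the fraction k of label entries with the highest BCE loss."""
    loss = F.binary_cross_entropy(pred, pseudo, reduction="none")
    n_flip = max(1, int(k * loss.numel()))
    idx = loss.flatten().topk(n_flip).indices
    flat = pseudo.flatten().clone()
    flat[idx] = 1.0 - flat[idx]  # flip 0 <-> 1 on the highest-loss entries
    return flat.view_as(pseudo)
```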