In this paper, we present a novel approach to the audio-visual video parsing task that takes into account how event categories bind to audio and visual modalities. The proposed parsing approach simultaneously detects the temporal boundaries, in terms of start and end times, of such events. This task can be naturally formulated as a Multimodal Multiple Instance Learning (MMIL) problem. We show how the MMIL task can benefit from the following techniques geared toward self- and cross-modal learning: (i) self-supervised pre-training based on a highly aligned audio-video grounding task, (ii) global context-aware attention, and (iii) adversarial training. For pre-training, we bootstrap a Uniter-style transformer architecture with a self-supervised audio-video grounding objective on the relatively large AudioSet dataset. This pre-trained model is fine-tuned on an architectural variant of the state-of-the-art Hybrid Attention Network (HAN) that uses global context-aware attention and adversarial training objectives for audio-visual video parsing. An attentive MMIL pooling method is leveraged to adaptively explore useful audio and visual signals from different temporal segments and modalities. We present extensive experimental evaluations on the Look, Listen, and Parse (LLP) dataset and compare against HAN. We also present several ablation tests to validate the effect of pre-training, attention, and adversarial training.
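To make the attentive MMIL pooling concrete, the following is a minimal PyTorch sketch of how segment-level audio and visual predictions can be aggregated into a video-level prediction with learned attention over temporal segments and modalities, in the spirit of HAN. Layer names, feature dimensions, and the number of classes are illustrative assumptions, not the paper's exact configuration.

\begin{verbatim}
import torch
import torch.nn as nn

class AttentiveMMILPooling(nn.Module):
    """Aggregates per-(segment, modality) event probabilities into a
    video-level prediction by weighting each instance with attention
    over time and over the audio/visual modalities (illustrative)."""

    def __init__(self, dim: int, num_classes: int):
        super().__init__()
        self.frame_att = nn.Linear(dim, num_classes)   # temporal attention logits
        self.modal_att = nn.Linear(dim, num_classes)   # audio-vs-visual attention logits
        self.classifier = nn.Linear(dim, num_classes)  # per-instance event logits

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, modality=2, dim) -- fused audio/visual segment features
        probs = torch.sigmoid(self.classifier(x))      # per-instance event probabilities
        w_t = torch.softmax(self.frame_att(x), dim=1)  # attention across temporal segments
        w_m = torch.softmax(self.modal_att(x), dim=2)  # attention across the two modalities
        # weighted sum over modality, then time -> video-level probabilities
        return (w_t * w_m * probs).sum(dim=2).sum(dim=1)

# usage: batch of 4 videos, 10 one-second segments, audio+visual, 25 classes
pool = AttentiveMMILPooling(dim=512, num_classes=25)
video_prob = pool(torch.randn(4, 10, 2, 512))  # -> (4, 25)
\end{verbatim}

Under a weak, video-level label, such a pooling lets the gradient concentrate on the segments and the modality where an event is actually grounded, which is what makes the MMIL formulation trainable without segment-level annotation.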