Event classification is inherently sequential and multimodal. Therefore, deep neural models need to dynamically focus on the most relevant time window and/or modality of a video. In this study, we propose the Multi-level Attention Fusion network (MAFnet), an architecture that can dynamically fuse visual and audio information for event recognition. Inspired by prior studies in neuroscience, we couple both modalities at different levels of the visual and audio paths. Furthermore, the network dynamically highlights the modality, at a given time window, that is most relevant for classifying the event. Experimental results on the AVE (Audio-Visual Event), UCF51, and Kinetics-Sounds datasets show that the approach can effectively improve accuracy in audio-visual event classification. Code is available at: https://github.com/numediart/MAFnet
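To make the idea of dynamically weighting modalities and time windows concrete, the sketch below shows one possible attention-based fusion layer in PyTorch. It is an illustrative assumption, not the authors' MAFnet implementation: the feature dimensions, the scoring layer, and the class count are hypothetical, and the real architecture couples the modalities at multiple levels of the backbones.

```python
# Minimal sketch (assumed, not the published MAFnet code): soft attention
# weights over (time window, modality) pairs, followed by weighted pooling
# and a linear classifier. Backbone features are taken as given inputs.
import torch
import torch.nn as nn


class AttentiveModalityFusion(nn.Module):
    def __init__(self, dim: int = 512, num_classes: int = 28):
        super().__init__()
        # One scalar relevance score per (time window, modality) pair.
        self.score = nn.Linear(dim, 1)
        self.classifier = nn.Linear(dim, num_classes)

    def forward(self, visual: torch.Tensor, audio: torch.Tensor) -> torch.Tensor:
        # visual, audio: (batch, time, dim) features from pretrained backbones.
        feats = torch.stack([visual, audio], dim=2)              # (B, T, 2, D)
        scores = self.score(feats).squeeze(-1)                   # (B, T, 2)
        weights = torch.softmax(scores.flatten(1), dim=1)        # over all T*2 slots
        weights = weights.view_as(scores)                        # (B, T, 2)
        fused = (weights.unsqueeze(-1) * feats).sum(dim=(1, 2))  # (B, D)
        return self.classifier(fused)


# Usage: 10 one-second windows with 512-d visual and audio embeddings.
model = AttentiveModalityFusion()
logits = model(torch.randn(4, 10, 512), torch.randn(4, 10, 512))
print(logits.shape)  # torch.Size([4, 28])
```

Because the softmax is taken jointly over time windows and modalities, the layer can emphasize, for example, the audio stream during a short window where the sound is discriminative while down-weighting uninformative visual frames.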