State-of-the-art architectures for untrimmed video Temporal Action Localization (TAL) have only considered RGB and Flow modalities, leaving the information-rich audio modality entirely unexploited. Audio fusion has been explored for the related but arguably easier problem of trimmed (clip-level) action recognition. However, TAL poses a unique set of challenges. In this paper, we propose simple but effective fusion-based approaches for TAL. To the best of our knowledge, our work is the first to jointly consider audio and video modalities for supervised TAL. We experimentally show that our schemes consistently improve performance for state-of-the-art video-only TAL approaches. Specifically, they help achieve new state-of-the-art performance on large-scale benchmark datasets: ActivityNet-1.3 (54.34 mAP@0.5) and THUMOS14 (57.18 mAP@0.5). Our experiments include ablations involving multiple fusion schemes, modality combinations, and TAL architectures. Our code, models, and associated data will be made available.
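As one concrete illustration of what a feature-level audio-video fusion scheme might look like (a minimal sketch only, not the authors' released implementation; the feature dimensions, the `ConcatFusion` module name, and the choice of concatenation are all illustrative assumptions):

```python
# Minimal sketch of late concatenation fusion for TAL, assuming
# temporally aligned per-snippet video and audio features.
# VIDEO_DIM, AUDIO_DIM, HIDDEN are assumed sizes, not from the paper.
import torch
import torch.nn as nn

VIDEO_DIM, AUDIO_DIM, HIDDEN = 2048, 128, 512


class ConcatFusion(nn.Module):
    """Fuse aligned video and audio snippet features by concatenation."""

    def __init__(self):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(VIDEO_DIM + AUDIO_DIM, HIDDEN),
            nn.ReLU(),
        )

    def forward(self, video_feats, audio_feats):
        # video_feats: (batch, T, VIDEO_DIM); audio_feats: (batch, T, AUDIO_DIM)
        fused = torch.cat([video_feats, audio_feats], dim=-1)
        # The fused sequence would then feed a TAL head (proposal/boundary module).
        return self.proj(fused)  # (batch, T, HIDDEN)


# Usage example: two untrimmed videos, each with 100 temporal snippets.
fusion = ConcatFusion()
v = torch.randn(2, 100, VIDEO_DIM)
a = torch.randn(2, 100, AUDIO_DIM)
print(fusion(v, a).shape)  # torch.Size([2, 100, 512])
```

Concatenation is only one of several plausible schemes (others include additive or gated fusion); the paper's ablations compare multiple such variants.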