With the increasing demand for video understanding, video moment and highlight detection (MHD) has emerged as a critical research topic. MHD aims to simultaneously localize all moments relevant to a given textual query and predict clip-wise saliency scores. Despite progress made by existing DETR-based methods, we observe that these methods coarsely fuse features from different modalities, which weakens the temporal intra-modal context and results in insufficient cross-modal interaction. To address this issue, we propose MH-DETR (Moment and Highlight Detection Transformer) tailored for MHD. Specifically, we introduce a simple yet efficient pooling operator within the uni-modal encoder to capture global intra-modal context. Moreover, to obtain temporally aligned cross-modal features, we design a plug-and-play cross-modal interaction module between the encoder and decoder, seamlessly integrating visual and textual features. Comprehensive experiments on the QVHighlights, Charades-STA, Activity-Net, and TVSum datasets show that MH-DETR outperforms existing state-of-the-art methods, demonstrating its effectiveness and superiority. Our code is available at https://github.com/YoucanBaby/MH-DETR.
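To make the two architectural ideas concrete, the sketch below illustrates (a) a uni-modal encoder layer that mixes each token with a globally pooled summary of its sequence, and (b) a cross-attention block in which video clip features attend to text tokens before being passed to the decoder. This is only a minimal illustration under assumed design choices (mean pooling, standard multi-head cross-attention, feature dimension 256); it is not the authors' implementation, and all module and parameter names here are hypothetical. See the linked repository for the actual code.

```python
# Minimal, illustrative sketch of a pooling-augmented uni-modal encoder layer
# and a cross-modal interaction block. NOT the official MH-DETR implementation;
# pooling choice, attention configuration, and dimensions are assumptions.
import torch
import torch.nn as nn


class PooledUniModalEncoderLayer(nn.Module):
    """Mixes each token with a global (mean-pooled) summary of the sequence
    to capture intra-modal temporal context."""

    def __init__(self, dim: int = 256):
        super().__init__()
        self.proj = nn.Linear(2 * dim, dim)  # fuse token feature + global context
        self.ffn = nn.Sequential(nn.Linear(dim, 4 * dim), nn.ReLU(),
                                 nn.Linear(4 * dim, dim))
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (B, T, D)
        g = x.mean(dim=1, keepdim=True).expand_as(x)      # global pooled context
        x = self.norm1(x + self.proj(torch.cat([x, g], dim=-1)))
        return self.norm2(x + self.ffn(x))


class CrossModalInteraction(nn.Module):
    """Video clips attend to text tokens, yielding temporally aligned,
    text-conditioned clip features for the decoder."""

    def __init__(self, dim: int = 256, heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, vid: torch.Tensor, txt: torch.Tensor) -> torch.Tensor:
        fused, _ = self.attn(query=vid, key=txt, value=txt)
        return self.norm(vid + fused)


if __name__ == "__main__":
    vid = torch.randn(2, 75, 256)   # (batch, clips, dim)
    txt = torch.randn(2, 20, 256)   # (batch, words, dim)
    vid = PooledUniModalEncoderLayer()(vid)
    txt = PooledUniModalEncoderLayer()(txt)
    print(CrossModalInteraction()(vid, txt).shape)  # torch.Size([2, 75, 256])
```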