Integrating audio and visual data for training multimodal foundation models remains a challenge. The Audio-Video Vector Alignment (AVVA) framework addresses this by considering audio-visual (AV) scene alignment beyond mere temporal synchronization, and by leveraging Large Language Models (LLMs) for data curation. AVVA implements a scoring mechanism for selecting aligned training data segments. It integrates Whisper, a speech-based foundation model, for audio analysis and DINOv2 for video analysis in a dual-encoder structure with contrastive learning on AV pairs. Evaluations on AudioCaps, VALOR, and VGGSound demonstrate the effectiveness of the proposed model architecture and data curation approach. AVVA achieves a significant improvement in top-k accuracies for video-to-audio retrieval on all datasets compared to DenseAV, while using only 192 hours of curated training data. Furthermore, an ablation study indicates that the data curation process effectively trades data quantity for data quality, yielding increases in top-k retrieval accuracies on AudioCaps, VALOR, and VGGSound compared to training on the full spectrum of uncurated data.
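The dual-encoder design described above can be illustrated with a minimal sketch: frozen Whisper audio features and DINOv2 video features are mapped by lightweight projection heads into a shared embedding space and trained with a symmetric contrastive loss over matched AV pairs. The projection head design, embedding dimension, feature dimensions, and temperature below are illustrative assumptions, not the configuration reported by the paper.

```python
# Hedged sketch of a dual-encoder contrastive alignment step, assuming
# precomputed (frozen) Whisper audio features and DINOv2 video features.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AVProjector(nn.Module):
    """Projects frozen backbone features into a shared audio-visual space."""
    def __init__(self, in_dim: int, out_dim: int = 512):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(in_dim, out_dim),
            nn.GELU(),
            nn.Linear(out_dim, out_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Unit-normalize so dot products are cosine similarities.
        return F.normalize(self.proj(x), dim=-1)

def symmetric_contrastive_loss(audio_emb, video_emb, temperature: float = 0.07):
    """Symmetric InfoNCE over a batch of matched audio-video pairs."""
    logits = audio_emb @ video_emb.t() / temperature        # (B, B) similarity matrix
    targets = torch.arange(audio_emb.size(0), device=audio_emb.device)
    loss_a2v = F.cross_entropy(logits, targets)              # audio -> video retrieval
    loss_v2a = F.cross_entropy(logits.t(), targets)          # video -> audio retrieval
    return 0.5 * (loss_a2v + loss_v2a)

# Toy usage with random stand-ins for the backbone outputs; 1280 and 768 are
# assumed feature widths, not the paper's stated values.
audio_feats = torch.randn(8, 1280)   # placeholder Whisper encoder features
video_feats = torch.randn(8, 768)    # placeholder DINOv2 features
audio_head, video_head = AVProjector(1280), AVProjector(768)
loss = symmetric_contrastive_loss(audio_head(audio_feats), video_head(video_feats))
print(loss.item())
```

In this reading, video-to-audio retrieval at evaluation time amounts to ranking audio embeddings by cosine similarity against a query video embedding; the LLM-based scoring used for data curation sits upstream of this step and only decides which AV segments enter the training batches.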