Video-based dialog is a challenging multimodal learning task that has received increasing attention in recent years, with state-of-the-art models setting new performance records. This progress is largely powered by the adoption of more powerful transformer-based language encoders. Despite this progress, existing approaches do not effectively utilize visual features to solve the task: recent studies show that state-of-the-art models are biased toward textual information rather than visual cues. In order to better leverage the available visual information, this study proposes a new framework that combines a 3D-CNN network and transformer-based networks into a single visual encoder to extract more robust semantic representations from videos. The visual encoder is jointly trained end-to-end with the other input modalities, such as text and audio. Experiments on the AVSD benchmark show significant improvements over baselines on both the generative and the retrieval tasks.
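A minimal PyTorch sketch of the kind of visual encoder described above: 3D-CNN clip features are projected and contextualized by a transformer encoder, so the whole stack can be trained end-to-end alongside the other modalities. The backbone (torchvision's r3d_18), dimensions, and layer counts are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn
from torchvision.models.video import r3d_18


class VisualEncoder(nn.Module):
    """Assumed sketch: 3D-CNN features fed into a transformer encoder."""

    def __init__(self, d_model: int = 512, num_layers: int = 4, num_heads: int = 8):
        super().__init__()
        backbone = r3d_18(weights=None)            # assumed 3D-CNN backbone
        # Drop the pooling/classification head; keep the spatio-temporal extractor.
        self.cnn = nn.Sequential(*list(backbone.children())[:-2])
        self.proj = nn.Linear(512, d_model)        # map CNN channels to model dim
        layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=num_heads, batch_first=True
        )
        self.transformer = nn.TransformerEncoder(layer, num_layers=num_layers)

    def forward(self, clips: torch.Tensor) -> torch.Tensor:
        # clips: (batch, channels=3, frames, height, width)
        feats = self.cnn(clips)                    # (B, 512, T', H', W')
        b, c, t, h, w = feats.shape
        tokens = feats.flatten(2).transpose(1, 2)  # (B, T'*H'*W', 512)
        tokens = self.proj(tokens)                 # (B, N, d_model)
        return self.transformer(tokens)            # contextualized visual tokens


if __name__ == "__main__":
    encoder = VisualEncoder()
    video = torch.randn(2, 3, 16, 112, 112)        # two 16-frame clips
    print(encoder(video).shape)                    # (2, N, 512)
```

In a full model, the returned visual tokens would be fused with the text and audio representations (e.g., via cross-attention in the dialog decoder); that fusion step is outside the scope of this sketch.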