The objective of this paper is an automatic Audio Description (AD) model that ingests movies and outputs AD in text form. Generating high-quality movie AD is challenging due to the dependency of the descriptions on context, and the limited amount of training data available. In this work, we leverage the power of pretrained foundation models, such as GPT and CLIP, and only train a mapping network that bridges the two models for visually-conditioned text generation. In order to obtain high-quality AD, we make the following four contributions: (i) we incorporate context from the movie clip, AD from previous clips, as well as the subtitles; (ii) we address the lack of training data by pretraining on large-scale datasets, where visual or contextual information is unavailable, e.g. text-only AD without movies or visual captioning datasets without context; (iii) we improve on the currently available AD datasets, by removing label noise in the MAD dataset, and adding character naming information; and (iv) we obtain strong results on the movie AD task compared with previous methods.
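As a rough illustration of the idea of bridging frozen foundation models, the sketch below shows a minimal mapping network that projects CLIP visual features into a sequence of prefix embeddings for a frozen GPT language model. This is not the authors' implementation: the use of an MLP, the dimensions, and the prefix length are all illustrative assumptions; the paper's actual mapping network and its handling of context (previous AD, subtitles) may differ.

```python
# Minimal sketch (assumed architecture, not the paper's exact model):
# a small trainable network maps a frozen CLIP image embedding to a
# "prefix" of GPT token embeddings; CLIP and GPT themselves stay frozen.
import torch
import torch.nn as nn


class ClipToGPTMapping(nn.Module):
    """Maps a CLIP image embedding to a prefix of GPT input embeddings."""

    def __init__(self, clip_dim: int = 512, gpt_dim: int = 768, prefix_len: int = 10):
        super().__init__()
        self.prefix_len = prefix_len
        self.gpt_dim = gpt_dim
        # Only this small network is trained.
        self.mlp = nn.Sequential(
            nn.Linear(clip_dim, gpt_dim * prefix_len // 2),
            nn.GELU(),
            nn.Linear(gpt_dim * prefix_len // 2, gpt_dim * prefix_len),
        )

    def forward(self, clip_features: torch.Tensor) -> torch.Tensor:
        # clip_features: (batch, clip_dim) -> prefix: (batch, prefix_len, gpt_dim)
        prefix = self.mlp(clip_features)
        return prefix.view(-1, self.prefix_len, self.gpt_dim)


# Usage sketch: the prefix would be concatenated with embeddings of context
# tokens (previous AD, subtitles) and fed to a frozen GPT for AD generation.
mapper = ClipToGPTMapping()
dummy_clip = torch.randn(2, 512)        # e.g. CLIP ViT-B/32 image features
prefix_embeddings = mapper(dummy_clip)  # shape: (2, 10, 768)
print(prefix_embeddings.shape)
```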