In this paper, we test the hypothesis that interesting events in unstructured videos are inherently audiovisual. We combine deep image representations for object recognition and scene understanding with representations from an audiovisual affect recognition model. To this set, we add content-agnostic audiovisual synchrony representations and mel-frequency cepstral coefficients to capture other intrinsic properties of the audio. These features are used in a modular supervised model. We present results from two experiments: an efficacy study of single features on the task, and an ablation study where we leave one feature out at a time. For the video summarization task, our results indicate that the visual features carry most of the information, and that including audiovisual features improves over visual-only models. To better study the task of highlight detection, we run a pilot experiment with highlight annotations for a small subset of video clips and fine-tune our best model on it. Results indicate that we can transfer knowledge from the video summarization task to a model trained specifically for the task of highlight detection.
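As a rough illustration of the leave-one-feature-out ablation described above, the following sketch evaluates a simple supervised model on concatenated feature groups and then drops each group in turn. The feature group names, dimensions, data, and the logistic-regression classifier are hypothetical placeholders, not the paper's actual representations or model; this is only meant to show the shape of the ablation procedure.

```python
"""Minimal sketch of a leave-one-feature-out ablation over feature groups.
All arrays below are synthetic stand-ins for the paper's representations."""
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_clips = 200  # hypothetical number of video segments

# Hypothetical per-segment feature groups (names and sizes are illustrative).
feature_groups = {
    "object":    rng.normal(size=(n_clips, 128)),  # object-recognition embeddings
    "scene":     rng.normal(size=(n_clips, 128)),  # scene-understanding embeddings
    "affect":    rng.normal(size=(n_clips, 32)),   # audiovisual affect features
    "synchrony": rng.normal(size=(n_clips, 16)),   # audiovisual synchrony features
    "mfcc":      rng.normal(size=(n_clips, 40)),   # mel-frequency cepstral coefficients
}
labels = rng.integers(0, 2, size=n_clips)  # 1 = segment selected for the summary

def score(groups):
    """Train a simple classifier on the concatenated groups and return
    mean cross-validated accuracy (a stand-in for the paper's metric)."""
    X = np.concatenate([feature_groups[g] for g in groups], axis=1)
    clf = LogisticRegression(max_iter=1000)
    return cross_val_score(clf, X, labels, cv=5).mean()

all_groups = list(feature_groups)
print(f"all features      : {score(all_groups):.3f}")

# Leave-one-feature-out ablation: drop each group in turn.
for g in all_groups:
    kept = [name for name in all_groups if name != g]
    print(f"without {g:<10}: {score(kept):.3f}")
```

Comparing each "without" score against the all-features baseline indicates how much a given feature group contributes, which mirrors the ablation logic of the second experiment.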