Automatically identifying harmful content in video is an important task with a wide range of applications. However, professionally labeled open datasets for this task are scarce. In this work, VidHarm, an open dataset of 3,589 video clips from film trailers annotated by professionals, is presented. An analysis of the dataset is performed, revealing, among other things, the relation between clip-level and trailer-level annotations. Audiovisual models are trained on the dataset and an in-depth study of modeling choices is conducted. The results show that performance is greatly improved by combining the visual and audio modalities, pre-training on large-scale video recognition datasets, and class-balanced sampling. Lastly, biases of the trained models are investigated using discrimination probing. VidHarm is openly available, and further details can be found at \url{https://vidharm.github.io/}