Vision-language (V+L) pretraining models have achieved great success in supporting multimedia applications by understanding the alignment between images and text. While existing vision-language pretraining models primarily focus on understanding objects in images or entities in text, they often ignore alignment at the level of events and their argument structures. In this work, we propose a contrastive learning framework that forces vision-language pretraining models to comprehend events and their associated argument (participant) roles. To achieve this, we take advantage of text information extraction technologies to obtain event structural knowledge, and use multiple prompt functions to contrast against hard negative descriptions generated by manipulating event structures. We also design an event graph alignment loss based on optimal transport to capture event argument structures. In addition, we collect a large event-rich dataset (106,875 images) for pretraining, which provides a more challenging image-retrieval benchmark for assessing the understanding of complicated, lengthy sentences. Experiments show that our zero-shot CLIP-Event outperforms the state-of-the-art supervised model on argument extraction in Multimedia Event Extraction, achieving a more than 5% absolute F-score gain in event extraction, as well as significant improvements on a variety of downstream tasks under zero-shot settings.
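To make the negative-description idea concrete, the following is a minimal sketch, assuming hard negatives are built by either replacing the event type or swapping two argument roles in an extracted event structure, then verbalizing the result with a template-style prompt function. All names here (`Event`, `verbalize`, the role labels) are illustrative, not identifiers from the CLIP-Event codebase.

```python
# Sketch: hard negative captions by manipulating an event structure.
from dataclasses import dataclass
import random

@dataclass
class Event:
    event_type: str              # e.g. "Attack"
    arguments: dict[str, str]    # role -> entity, e.g. {"Attacker": "police"}

def verbalize(event: Event) -> str:
    """Template prompt function: render an event structure as a caption."""
    args = ", ".join(f"{entity} is the {role}"
                     for role, entity in event.arguments.items())
    return f"An image of a {event.event_type} event, where {args}."

def negative_by_type(event: Event, all_types: list[str]) -> Event:
    """Hard negative: keep the arguments, swap in a wrong event type."""
    wrong = random.choice([t for t in all_types if t != event.event_type])
    return Event(wrong, dict(event.arguments))

def negative_by_role_swap(event: Event) -> Event:
    """Hard negative: keep the event type, exchange two argument roles."""
    r1, r2 = random.sample(list(event.arguments), 2)
    args = dict(event.arguments)
    args[r1], args[r2] = args[r2], args[r1]
    return Event(event.event_type, args)

ev = Event("Attack", {"Attacker": "police", "Target": "protesters"})
print(verbalize(ev))                                            # positive
print(verbalize(negative_by_type(ev, ["Attack", "Arrest"])))    # wrong type
print(verbalize(negative_by_role_swap(ev)))                     # swapped roles
```

Because such negatives differ from the positive caption only in the event type or in who plays which role, contrasting against them pushes the model to encode event structure rather than just object co-occurrence.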
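The event graph alignment loss can likewise be sketched as an entropy-regularized optimal transport (Sinkhorn) distance between the embeddings of text-graph nodes (event and argument mentions) and image-graph nodes (detected objects or regions). The cosine cost, uniform node masses, and dimensions below are assumptions for illustration, not the paper's exact formulation.

```python
# Sketch: optimal-transport alignment between text and image graph nodes.
import torch

def sinkhorn_alignment_loss(text_nodes, image_nodes, eps=0.1, iters=50):
    """OT distance between two node sets; lower = better-aligned graphs.

    text_nodes:  (n, d) embeddings of event/argument nodes from the caption
    image_nodes: (m, d) embeddings of detected objects/regions in the image
    """
    # Cosine cost matrix: low cost where node embeddings are similar.
    t = torch.nn.functional.normalize(text_nodes, dim=-1)
    v = torch.nn.functional.normalize(image_nodes, dim=-1)
    cost = 1.0 - t @ v.T                         # (n, m)

    n, m = cost.shape
    a = torch.full((n,), 1.0 / n)                # uniform mass on text nodes
    b = torch.full((m,), 1.0 / m)                # uniform mass on image nodes
    K = torch.exp(-cost / eps)                   # Gibbs kernel

    # Sinkhorn iterations: scale K toward the transport polytope.
    u = torch.ones_like(a)
    for _ in range(iters):
        v_scale = b / (K.T @ u)
        u = a / (K @ v_scale)
    plan = u[:, None] * K * v_scale[None, :]     # transport plan (n, m)
    return (plan * cost).sum()                   # expected alignment cost

loss = sinkhorn_alignment_loss(torch.randn(4, 512), torch.randn(6, 512))
```

Minimizing this quantity encourages each argument node in the text graph to find a cheap (similar) counterpart among the image regions, which is what "capturing event argument structures" requires beyond a single global image-text similarity score.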