Recently, increasing efforts have been focused on Weakly Supervised Scene Graph Generation (WSSGG). The mainstream solutions for WSSGG typically follow the same pipeline: they first align text entities in the weak image-level supervision (e.g., unlocalized relation triplets or captions) with image regions, and then train SGG models in a fully-supervised manner with the aligned instance-level "pseudo" labels. However, we argue that most existing WSSGG works focus only on object-consistency, which means the grounded regions should have the same object category labels as the text entities, while neglecting another basic requirement for an ideal alignment: interaction-consistency, which means the grounded region pairs should have the same interactions (i.e., visual relations) as the text entity pairs. Hence, in this paper, we propose to enhance a simple grounding module with both object-aware and interaction-aware knowledge to acquire more reliable pseudo labels. To better leverage these two types of knowledge, we regard them as two teachers and fuse their generated targets to guide the training process of our grounding module. Specifically, we design two different strategies to adaptively assign weights to the two teachers by assessing their reliability on each training sample. Extensive experiments demonstrate that our method consistently improves WSSGG performance under various kinds of weak supervision.
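To make the adaptive teacher-weighting idea concrete, below is a minimal PyTorch sketch of one plausible instantiation, not the paper's actual implementation. All names (fuse_teacher_targets, grounding_loss, obj_conf, rel_conf, tau) are hypothetical, and the softmax-over-confidence weighting is only an assumed example of how per-sample teacher reliability could be turned into fusion weights.

```python
import torch
import torch.nn.functional as F

def fuse_teacher_targets(obj_targets, rel_targets, obj_conf, rel_conf, tau=1.0):
    """Fuse per-sample targets from an object-aware teacher and an
    interaction-aware teacher, weighting each teacher by an estimated
    per-sample reliability score (hypothetical sketch).

    obj_targets, rel_targets: (B, C) soft alignment targets from each teacher.
    obj_conf, rel_conf:       (B,) reliability scores (higher = more reliable),
                              e.g. each teacher's max softmax confidence.
    tau:                      temperature controlling how sharply the fusion
                              favors the more reliable teacher.
    """
    # Per-sample fusion weights via a softmax over the two reliability scores.
    conf = torch.stack([obj_conf, rel_conf], dim=1)            # (B, 2)
    w = F.softmax(conf / tau, dim=1)                           # (B, 2)
    fused = w[:, 0:1] * obj_targets + w[:, 1:2] * rel_targets  # (B, C)
    return fused

def grounding_loss(student_logits, fused_targets):
    # Soft cross-entropy between the grounding module's predictions
    # and the fused teacher targets.
    log_probs = F.log_softmax(student_logits, dim=1)
    return -(fused_targets * log_probs).sum(dim=1).mean()
```

Under this sketch, a sample on which the interaction-aware teacher is more confident contributes a fused target dominated by that teacher, which mirrors the paper's goal of trusting each source of knowledge only where it is reliable.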