Scene graph generation aims to construct a semantic graph structure from an image such that its nodes and edges respectively represent objects and their relationships. One of the major challenges for the task lies in the presence of distracting objects and relationships in images; contextual reasoning is strongly distracted by irrelevant objects or backgrounds and, more importantly, by a vast number of irrelevant candidate relations. To tackle this issue, we propose the Selective Quad Attention Network (SQUAT), which learns to select relevant object pairs and disambiguate them via diverse contextual interactions. SQUAT consists of two main components: edge selection and quad attention. The edge selection module selects relevant object pairs, i.e., edges in the scene graph, which helps contextual reasoning, and the quad attention module then updates the edge features using both edge-to-node and edge-to-edge cross-attentions to capture contextual information between objects and object pairs. Experiments demonstrate the strong performance and robustness of SQUAT, achieving the state of the art on the Visual Genome and Open Images v6 benchmarks.
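The sketch below illustrates the two components described above in PyTorch-style pseudocode, assuming hypothetical module names (EdgeSelection, EdgeQuadAttention), feature shapes, and a top-k selection criterion; it is a minimal illustration of the idea, not the authors' implementation.

```python
import torch
import torch.nn as nn

class EdgeSelection(nn.Module):
    """Scores candidate object pairs (edges) and keeps the most relevant ones."""
    def __init__(self, dim, keep_ratio=0.25):
        super().__init__()
        self.scorer = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, 1))
        self.keep_ratio = keep_ratio  # assumed hyperparameter for illustration

    def forward(self, node_feats, pair_idx):
        # node_feats: (N, dim) object features; pair_idx: (E, 2) candidate subject-object pairs
        pair_feats = torch.cat([node_feats[pair_idx[:, 0]], node_feats[pair_idx[:, 1]]], dim=-1)
        scores = self.scorer(pair_feats).squeeze(-1)          # relevance score per candidate edge
        k = max(1, int(self.keep_ratio * scores.numel()))
        keep = scores.topk(k).indices                         # indices of selected edges
        return pair_feats[keep], pair_idx[keep], scores

class EdgeQuadAttention(nn.Module):
    """Updates selected edge features via edge-to-node and edge-to-edge cross-attention."""
    def __init__(self, dim, heads=4):
        super().__init__()
        self.proj = nn.Linear(2 * dim, dim)
        self.e2n = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.e2e = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, edge_feats, node_feats):
        # edge_feats: (E_kept, 2*dim) selected pair features; node_feats: (N, dim)
        e = self.proj(edge_feats).unsqueeze(0)                # (1, E_kept, dim)
        n = node_feats.unsqueeze(0)                           # (1, N, dim)
        e = e + self.e2n(e, n, n)[0]                          # edge-to-node cross-attention
        e = e + self.e2e(e, e, e)[0]                          # edge-to-edge attention
        return e.squeeze(0)                                   # refined edge features
```

In this sketch, only the edge-side updates named in the abstract are shown; the full quad attention in the paper also involves node-side interactions, and the final relation classifier operating on the refined edge features is omitted.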