Scene graph generation refers to the task of automatically mapping an image into a semantic structural graph, which requires correctly labeling each extracted object and its interaction relationships. Despite recent successes in object detection with deep learning, inferring complex contextual relationships and structured graph representations from visual data remains challenging. In this study, we propose a novel Attentive Relational Network that couples two key modules with an object detection backbone to address this problem. The first module is a semantic transformation module that captures semantically embedded relation features by translating visual features and linguistic features into a common semantic space. The second is a graph self-attention module that embeds a joint graph representation by assigning importance weights to neighboring nodes. Finally, accurate scene graphs are produced by a relation inference module that recognizes all entities and their corresponding relations. We evaluate the proposed method on the widely adopted Visual Genome dataset, and the results demonstrate the effectiveness and superiority of our model.
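To make the two modules concrete, the following is a minimal PyTorch sketch, not the authors' implementation: the class names, dimensions, fusion-by-addition, and the scaled dot-product form of the attention are all assumptions chosen for illustration. It shows (a) a projection of visual and linguistic features into a common semantic space and (b) a GAT-style self-attention layer that weights neighboring nodes when aggregating a graph representation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SemanticTransform(nn.Module):
    """Hypothetical sketch of the semantic transformation module:
    projects visual and linguistic features into a shared space."""
    def __init__(self, visual_dim, word_dim, common_dim):
        super().__init__()
        self.vis_proj = nn.Linear(visual_dim, common_dim)
        self.word_proj = nn.Linear(word_dim, common_dim)

    def forward(self, visual_feats, word_embeds):
        # Map both modalities into the common semantic space and fuse them
        # (additive fusion is an assumption made for this sketch).
        return torch.relu(self.vis_proj(visual_feats) + self.word_proj(word_embeds))

class GraphSelfAttention(nn.Module):
    """Hypothetical GAT-style layer: learns importance weights over
    neighboring nodes when building a joint graph representation."""
    def __init__(self, dim):
        super().__init__()
        self.query = nn.Linear(dim, dim)
        self.key = nn.Linear(dim, dim)
        self.value = nn.Linear(dim, dim)

    def forward(self, nodes, adj):
        # nodes: (N, dim) node features; adj: (N, N) adjacency matrix.
        # Scaled dot-product scores between every pair of nodes.
        scores = self.query(nodes) @ self.key(nodes).t() / nodes.size(-1) ** 0.5
        # Restrict attention to graph neighbors; assumes adj contains
        # self-loops so every row has at least one valid entry.
        scores = scores.masked_fill(adj == 0, float("-inf"))
        weights = F.softmax(scores, dim=-1)  # importance of each neighbor
        return weights @ self.value(nodes)

# Toy usage (all shapes hypothetical):
N, vis_d, word_d, d = 5, 512, 300, 256
nodes = SemanticTransform(vis_d, word_d, d)(torch.randn(N, vis_d), torch.randn(N, word_d))
adj = torch.eye(N)  # self-loops guarantee every node attends to itself
out = GraphSelfAttention(d)(nodes, adj)  # (N, d) attended node features
```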