Mutual gaze, where two people look at each other, is ubiquitous in our daily interactions, and detecting it is of great significance for understanding human social scenes. Existing mutual gaze detection methods are predominantly two-stage: their inference speed is limited by the two-stage pipeline, and the performance of the second stage depends on the quality of the first. In this paper, we propose a novel one-stage mutual gaze detection framework called Mutual Gaze TRansformer, or MGTR, which performs mutual gaze detection in an end-to-end manner. By designing mutual gaze instance triples, MGTR detects each human head bounding box and simultaneously infers mutual gaze relationships based on global image information, which streamlines and simplifies the whole process. Experimental results on two mutual gaze datasets show that our method accelerates the mutual gaze detection process without losing performance. An ablation study shows that different components of MGTR capture different levels of semantic information in images. Code is available at https://github.com/Gmbition/MGTR
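To make the triple formulation concrete, below is a minimal Python sketch of how a mutual gaze instance triple might be represented and filtered from the per-query outputs of a DETR-style decoder. It is illustrative only: the names (`MutualGazeTriple`, `decode_triples`), the box parameterization, and the sigmoid threshold are assumptions, not the authors' implementation.

```python
import math
from dataclasses import dataclass
from typing import List, Tuple

Box = Tuple[float, float, float, float]  # (cx, cy, w, h), normalized coordinates


@dataclass
class MutualGazeTriple:
    """One mutual gaze instance: a pair of head boxes plus their gaze relation."""
    head_a: Box
    head_b: Box
    score: float  # confidence that the two people are looking at each other


def decode_triples(boxes_a: List[Box], boxes_b: List[Box],
                   gaze_logits: List[float],
                   threshold: float = 0.5) -> List[MutualGazeTriple]:
    """Keep the candidate triples whose mutual-gaze score clears the threshold.

    Each decoder query is assumed to yield one candidate: two head boxes and
    a logit for the mutual gaze relation.
    """
    triples = []
    for box_a, box_b, logit in zip(boxes_a, boxes_b, gaze_logits):
        score = 1.0 / (1.0 + math.exp(-logit))  # sigmoid over the gaze logit
        if score >= threshold:
            triples.append(MutualGazeTriple(box_a, box_b, score))
    return triples
```

Because every query predicts a complete triple in one pass, no separate head-detection stage or pairwise post-matching step is needed, which is the source of the claimed speedup.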