In this paper, we address the challenge of unsupervised mistake detection in egocentric procedural video through the analysis of gaze signals. Traditional supervised mistake detection methods rely on manually labeled mistakes, and hence suffer from domain dependence and limited scalability. We introduce an unsupervised method for detecting mistakes in videos of human activities that requires neither domain-specific knowledge nor annotated data. We postulate that, when a subject makes a mistake in the execution of a procedure, their attention patterns deviate from normality. We hence propose to detect mistakes by comparing gaze trajectories predicted from the input video with ground-truth gaze signals collected through a gaze tracker. Since predicting gaze in video is characterized by high uncertainty, we propose a novel \textit{gaze completion task}, which aims to predict gaze from visual observations and partial gaze trajectories. We further contribute a \textit{gaze completion approach} based on a Gaze-Frame Correlation module that explicitly models the correlation between gaze information and each local visual token. Inconsistencies between the predicted and observed gaze trajectories serve as an indicator of mistakes. Experiments on the EPIC-Tent, HoloAssist and IndustReal datasets showcase the effectiveness of the proposed approach compared to unsupervised and one-class techniques. Our method ranks first on the HoloAssist Mistake Detection challenge.
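The core detection idea — scoring a clip by the discrepancy between a predicted gaze trajectory and the observed one, and flagging a mistake when the discrepancy is large — can be sketched as follows. This is a minimal illustration, not the paper's actual scoring function: the function names, the use of mean Euclidean distance, and the fixed threshold are all illustrative assumptions.

```python
import numpy as np

def trajectory_discrepancy(predicted_gaze, observed_gaze):
    """Mean Euclidean distance between predicted and observed 2D gaze points.

    Both inputs are (T, 2) arrays of gaze coordinates over T frames.
    The actual paper may use a different distance; this is an assumption.
    """
    predicted_gaze = np.asarray(predicted_gaze, dtype=float)
    observed_gaze = np.asarray(observed_gaze, dtype=float)
    # Per-frame Euclidean distance, averaged over the trajectory.
    return float(np.linalg.norm(predicted_gaze - observed_gaze, axis=-1).mean())

def is_mistake(predicted_gaze, observed_gaze, threshold=0.1):
    """Flag a mistake when predicted gaze deviates from observed gaze.

    The threshold is a hypothetical hyperparameter, e.g. tuned on
    normal (mistake-free) executions in an unsupervised fashion.
    """
    return trajectory_discrepancy(predicted_gaze, observed_gaze) > threshold
```

In this reading, no mistake labels are needed: the gaze-completion model is trained to predict normal attention patterns, and executions whose observed gaze deviates from that prediction are flagged as anomalous.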