Appearance-based gaze estimation has achieved strong results with the use of deep learning, and many subsequent works have improved domain generalization for gaze estimation. However, despite this progress, most recent work has focused on cross-dataset performance -- accounting for differing distributions of illumination and head pose. Although improving gaze estimation across different distributions of RGB images is important, gaze estimation from near-infrared images is also critical for dark settings. Moreover, relying solely on supervised learning has inherent limitations for regression tasks. This paper addresses these problems and proposes GazeCWL, a novel framework for gaze estimation on near-infrared images using contrastive learning. GazeCWL leverages adversarial attack techniques for data augmentation together with a novel contrastive loss function designed specifically for regression tasks, which effectively clusters the features of different samples in the latent space. Our model outperforms previous domain generalization models on infrared-image-based gaze estimation, improving on the baseline by 45.6\% and on the state of the art by 8.6\%, demonstrating the efficacy of our method.
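The abstract describes a contrastive loss adapted to regression, where samples are clustered in the latent space according to their labels rather than discrete classes. The paper's actual formulation is not given here; the following is an illustrative sketch of one common way to realize this idea, assuming an InfoNCE-style objective whose pair weights decay with the distance between regression labels (the function name, the Gaussian kernel, and all parameters are hypothetical, not the authors' definition):

```python
import numpy as np

def weighted_contrastive_loss(features, labels, temperature=0.1, sigma=1.0):
    """Sketch of a label-weighted contrastive loss for regression.

    Pairs whose regression labels (e.g. gaze directions) are close act as
    soft positives; their weight follows a Gaussian kernel on label distance.
    This is an assumption-based illustration, not GazeCWL's exact loss.
    """
    # L2-normalize features onto the unit hypersphere.
    z = features / np.linalg.norm(features, axis=1, keepdims=True)
    # Temperature-scaled pairwise cosine similarities.
    sim = z @ z.T / temperature
    n = len(z)
    # Soft positive weights: Gaussian kernel on pairwise label distances.
    d = np.linalg.norm(labels[:, None, :] - labels[None, :, :], axis=-1)
    w = np.exp(-d**2 / (2 * sigma**2))
    # Exclude self-pairs from both the softmax and the weighting.
    mask = ~np.eye(n, dtype=bool)
    logits = np.where(mask, sim, -np.inf)
    # Numerically stable row-wise log-softmax.
    logits = logits - logits.max(axis=1, keepdims=True)
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    w = w * mask
    log_prob = np.where(mask, log_prob, 0.0)
    # Weighted negative log-likelihood, averaged over total pair weight.
    return -(w * log_prob).sum() / w.sum()
```

Under such a loss, pulling together features of samples with similar gaze labels gives the latent space a structure aligned with the regression target, which is the clustering behavior the abstract refers to.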