Self-supervised learning (SSL) has become prevalent for learning representations in computer vision. Notably, SSL exploits contrastive learning to encourage visual representations to be invariant under various image transformations. The task of gaze estimation, on the other hand, demands not just invariance to various appearances but also equivariance to geometric transformations. In this work, we propose a simple contrastive representation learning framework for gaze estimation, named Gaze Contrastive Learning (GazeCLR). GazeCLR exploits multi-view data to promote equivariance and relies on selected data augmentation techniques that do not alter gaze directions for invariance learning. Our experiments demonstrate the effectiveness of GazeCLR for several settings of the gaze estimation task. In particular, our results show that GazeCLR improves the performance of cross-domain gaze estimation, yielding up to a 17.2% relative improvement. Moreover, the GazeCLR framework is competitive with state-of-the-art representation learning methods under few-shot evaluation. The code and pre-trained models are available at https://github.com/jswati31/gazeclr.
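The invariance objective in contrastive frameworks of this kind is typically an InfoNCE-style loss that pulls two augmented views of the same sample together while pushing apart views of different samples. The following is a minimal illustrative sketch of such a loss, not the exact GazeCLR objective; the function name and shapes are assumptions for the example.

```python
import numpy as np

def info_nce_loss(z1, z2, temperature=0.1):
    """Minimal InfoNCE-style contrastive loss over two views.

    z1, z2: (N, D) embeddings of two views of the same N samples.
    Positive pairs are (z1[i], z2[i]); all other cross-view pairs
    in the batch act as negatives.
    """
    # L2-normalise so dot products are cosine similarities
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature              # (N, N) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # positives lie on the diagonal of the cross-view similarity matrix
    return -np.mean(np.diag(log_probs))
```

Under this objective, augmentations that preserve gaze (e.g. color jitter) define the positive views for invariance, while for equivariance the representations of different camera views would additionally be related through the known geometric transformation between cameras.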