Despite recent advances in appearance-based gaze estimation techniques, the need for training data that covers the target head pose and gaze distribution remains a crucial challenge for practical deployment. This work examines a novel approach for synthesizing gaze estimation training data based on monocular 3D face reconstruction. Unlike prior works using multi-view reconstruction, photo-realistic CG models, or generative neural networks, our approach can manipulate and extend the head pose range of existing training data without any additional requirements. We introduce a projective matching procedure to align the reconstructed 3D facial mesh to the camera coordinate system and synthesize face images with accurate gaze labels. We also propose a mask-guided gaze estimation model and data augmentation strategies to further improve the estimation accuracy by taking advantage of the synthetic training data. Experiments using multiple public datasets show that our approach can significantly improve the estimation performance on challenging cross-dataset settings with non-overlapping gaze distributions.
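The projective matching step described above aligns a reconstructed 3D facial mesh with the camera coordinate system so that rendered faces carry geometrically consistent gaze labels. The sketch below illustrates the underlying pinhole-camera geometry only; the intrinsics, rigid transform, and toy mesh are illustrative assumptions, not the paper's actual procedure or parameters.

```python
import numpy as np

def project_vertices(vertices, K, R, t):
    """Project 3D mesh vertices (N, 3) into image pixel coordinates.

    First applies a rigid transform (R, t) into the camera coordinate
    system, then a pinhole projection with intrinsics K (3x3).
    """
    cam = vertices @ R.T + t          # world -> camera coordinates
    uv = cam @ K.T                    # pinhole projection
    return uv[:, :2] / uv[:, 2:3]     # perspective divide

def gaze_to_pitch_yaw(g):
    """Convert a unit 3D gaze vector (camera frame, -z toward scene)
    to (pitch, yaw) angles in radians -- one common gaze-label convention."""
    g = g / np.linalg.norm(g)
    pitch = np.arcsin(-g[1])          # positive pitch = looking up
    yaw = np.arctan2(-g[0], -g[2])    # positive yaw = looking left
    return pitch, yaw

# Illustrative intrinsics and a toy "mesh" of three points 0.5 m in front
# of the camera, centered on the optical axis.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.array([0.0, 0.0, 0.5])
mesh = np.array([[0.0, 0.0, 0.0],
                 [0.03, 0.0, 0.0],
                 [0.0, 0.02, 0.0]])
pixels = project_vertices(mesh, K, R, t)
```

Under this model, rotating the mesh with a new R before projection synthesizes a different head pose while the gaze vector (and hence the label) can be rotated consistently, which is the essence of extending the head pose range of existing data.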