Despite recent advances in appearance-based gaze estimation techniques, the need for training data that covers the target head pose and gaze distribution remains a crucial challenge for practical deployment. This work examines a novel approach for synthesizing gaze estimation training data based on monocular 3D face reconstruction. Unlike prior works using multi-view reconstruction, photo-realistic CG models, or generative neural networks, our approach can manipulate and extend the head pose range of existing training data without any additional requirements. We introduce a projective matching procedure to align the reconstructed 3D facial mesh with the camera coordinate system and synthesize face images with accurate gaze labels. We also propose a mask-guided gaze estimation model and data augmentation strategies to further improve the estimation accuracy by taking advantage of synthetic training data. Experiments using multiple public datasets show that our approach significantly improves the estimation performance on challenging cross-dataset settings with non-overlapping gaze distributions.
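To make the label-synthesis step concrete, the sketch below shows one common way such gaze labels are derived once a reconstructed mesh has been rigidly aligned into camera coordinates: apply the recovered rotation and translation to the mesh vertices, then convert the 3D ray from the eye center to the gaze target into pitch/yaw angles. The function names, the camera axis convention (x right, y down, z forward), and the angle sign conventions are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def align_mesh(vertices, R, t):
    """Rigidly map reconstructed mesh vertices into camera coordinates
    (the role played by projective matching): v' = R v + t.
    `vertices` is (N, 3); `R` is 3x3; `t` is (3,). All illustrative."""
    return np.asarray(vertices, float) @ np.asarray(R, float).T + np.asarray(t, float)

def gaze_angles(eye_center, gaze_target):
    """Pitch/yaw of the gaze ray from a 3D eye center to a 3D gaze
    target, both in camera coordinates. Sign conventions (positive
    pitch = up, positive yaw = left) are an assumption, chosen to
    match a common gaze-estimation convention."""
    g = np.asarray(gaze_target, float) - np.asarray(eye_center, float)
    g = g / np.linalg.norm(g)
    pitch = np.arcsin(-g[1])        # -y is "up" when y points down
    yaw = np.arctan2(-g[0], -g[2])  # angle in the horizontal plane
    return pitch, yaw
```

For example, an eye at (0, 0, 0.5) looking straight at the camera origin yields pitch = yaw = 0 under these conventions; rotating the head pose (changing `R` before `align_mesh`) moves the eye center and target together, so relabeled pitch/yaw follow automatically — the property that lets reconstruction-based synthesis extend the head pose range of existing data.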