Synthetic data is a powerful tool for training data-hungry deep learning algorithms. However, to date, camera-based physiological sensing has not taken full advantage of these techniques. In this work, we leverage a high-fidelity synthetics pipeline for generating videos of faces with faithful blood flow and breathing patterns. We present systematic experiments showing how physiologically-grounded synthetic data can be used to train camera-based multi-parameter cardiopulmonary sensing. We provide empirical evidence that heart and breathing rate measurement accuracy increases with the number of synthetic avatars in the training set. Furthermore, training with avatars with darker skin types leads to better overall performance than training with avatars with lighter skin types. Finally, we discuss the opportunities that synthetics present in the domain of camera-based physiological sensing and the limitations that need to be overcome.