Video-based remote physiological measurement uses face videos to measure the blood volume change signal, which is also called remote photoplethysmography (rPPG). Supervised methods for rPPG measurement achieve state-of-the-art performance. However, supervised rPPG methods require face videos and ground truth physiological signals for model training. In this paper, we propose an unsupervised rPPG measurement method that does not require ground truth signals for training. We use a 3DCNN model to generate multiple rPPG signals from each video at different spatiotemporal locations and train the model with a contrastive loss where rPPG signals from the same video are pulled together while those from different videos are pushed apart. We evaluate our method on five public datasets, including both RGB and NIR videos. The results show that our method outperforms the previous unsupervised baseline and achieves accuracies very close to the current best supervised rPPG methods on all five datasets. Furthermore, we also demonstrate that our approach runs much faster and is more robust to noise than the previous unsupervised baseline. Our code is available at https://github.com/zhaodongsun/contrast-phys.
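To make the contrastive training idea concrete, below is a minimal PyTorch sketch of one plausible formulation: spatiotemporal rPPG samples are compared in the frequency domain, spectra from the same video are pulled together, and spectra from different videos are pushed apart. The function names, the MSE-between-normalized-power-spectra distance, the heart-rate band limits, and all tensor shapes are illustrative assumptions, not the exact loss used in the paper.

```python
# Hypothetical sketch of a frequency-domain contrastive loss for rPPG samples.
# Everything here (names, distance measure, band limits, shapes) is assumed
# for illustration and is not taken from the paper's implementation.
import torch


def power_spectrum(rppg, fps=30.0, fmin=0.66, fmax=4.0):
    """Normalized power spectral density of rPPG signals, restricted to a
    plausible heart-rate band (assumed here to be roughly 40-240 bpm)."""
    rppg = rppg - rppg.mean(dim=-1, keepdim=True)
    spec = torch.fft.rfft(rppg, dim=-1).abs() ** 2
    freqs = torch.fft.rfftfreq(rppg.shape[-1], d=1.0 / fps)
    band = (freqs >= fmin) & (freqs <= fmax)
    spec = spec[..., band]
    return spec / (spec.sum(dim=-1, keepdim=True) + 1e-8)


def contrastive_rppg_loss(signals_a, signals_b):
    """signals_a, signals_b: (N, T) rPPG samples produced by the 3DCNN from
    two different videos at different spatiotemporal locations. Spectra from
    the same video are pulled together; spectra across videos are pushed apart."""
    spec_a = power_spectrum(signals_a)
    spec_b = power_spectrum(signals_b)

    def mean_pairwise_mse(x, y):
        # mean squared spectral distance over all (i, j) pairs
        return ((x.unsqueeze(1) - y.unsqueeze(0)) ** 2).mean()

    pull = mean_pairwise_mse(spec_a, spec_a) + mean_pairwise_mse(spec_b, spec_b)
    push = mean_pairwise_mse(spec_a, spec_b)
    return pull - push


if __name__ == "__main__":
    # Random stand-ins for the 3DCNN outputs: 4 samples of 300 frames per video.
    a = torch.randn(4, 300)
    b = torch.randn(4, 300)
    print(contrastive_rppg_loss(a, b).item())
```

Comparing power spectra rather than raw waveforms is one way to make the loss insensitive to phase shifts between samples taken from different facial regions; only the periodicity that all same-video samples share needs to agree.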