The expanding use of complex machine learning methods such as deep learning has driven rapid growth in human activity recognition, particularly in health applications. In particular, face and full-body analysis, often deployed as part of a larger body sensor network system, is increasingly used to evaluate health status. However, complex models that handle private and sometimes legally protected data raise concerns about the potential leakage of identifiable information. In this work, we focus on the case of a deep network model trained on images of individual faces. We used full-face video recordings of 493 individuals undergoing an eye-tracking-based evaluation of neurological function. Model outputs, gradients, intermediate layer outputs, loss values, and labels served as inputs to a deep network with an added support vector machine emission layer trained to recognize membership in the training data. The inference attack method and its associated mathematical analysis indicate a low likelihood of unintended memorization of facial features in the deep learning model. This study shows that the model preserves the integrity of its training data with reasonable confidence, and the same process can be applied under similar conditions to other models.
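To make the attack pipeline concrete, the following is a minimal sketch of the per-sample feature extraction described above, assuming a PyTorch target model split into `backbone` and `head` submodules. All names here (`build_features`, `fit_attack`, the layer choices) are illustrative assumptions rather than the authors' code, and for brevity the features are fed directly to an SVM classifier instead of the full deep attack network with an SVM emission layer used in the paper.

```python
# Sketch of a white-box membership inference attack: collect outputs,
# gradients, intermediate activations, loss, and labels per sample, then
# train an SVM to separate training members from non-members.
# Assumes a target model with .backbone and .head submodules (illustrative).
import torch
import torch.nn.functional as F
from sklearn.svm import SVC

def build_features(model, x, y):
    """Attack features for one (image, label) pair."""
    model.zero_grad()
    x = x.unsqueeze(0)                       # single image -> batch of 1
    hidden = model.backbone(x)               # intermediate layer output
    logits = model.head(hidden)
    loss = F.cross_entropy(logits, y.unsqueeze(0))
    loss.backward()                          # populates parameter gradients
    grad = model.head.weight.grad            # gradient of the final layer
    return torch.cat([
        F.softmax(logits, dim=1).flatten(),  # model outputs
        hidden.flatten(),                    # intermediate activations
        grad.flatten(),                      # gradients
        loss.detach().flatten(),             # loss value
        F.one_hot(y, logits.shape[1]).flatten().float(),  # true label
    ]).detach().numpy()

def fit_attack(model, member_data, nonmember_data):
    """Train the membership classifier (SVM stand-in for the attack model)."""
    X, z = [], []
    for x, y in member_data:
        X.append(build_features(model, x, y)); z.append(1)
    for x, y in nonmember_data:
        X.append(build_features(model, x, y)); z.append(0)
    return SVC(kernel="rbf", probability=True).fit(X, z)
```

Under this framing, an attack classifier that performs no better than chance on held-out member/non-member pairs is the empirical signal that the target model has not memorized identifiable facial features.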