Semi-supervised learning (SSL) leverages both labeled and unlabeled data to train machine learning (ML) models. State-of-the-art SSL methods can achieve performance comparable to supervised learning while using far fewer labeled samples. However, most existing work focuses on improving the performance of SSL. In this work, we take a different angle by studying the training data privacy of SSL. Specifically, we propose the first data augmentation-based membership inference attacks against ML models trained by SSL. Given a data sample and black-box access to a model, the goal of a membership inference attack is to determine whether the data sample belongs to the model's training dataset. Our evaluation shows that the proposed attack consistently outperforms existing membership inference attacks and achieves the best performance against models trained by SSL. Moreover, we uncover that the reason for membership leakage in SSL differs from the commonly believed one in supervised learning, i.e., overfitting (the gap between training and testing accuracy). We observe that the SSL model generalizes well to the testing data (with almost zero overfitting) but "memorizes" the training data by giving more confident predictions on them regardless of correctness. We also explore early stopping as a countermeasure against membership inference attacks on SSL. The results show that early stopping can mitigate the membership inference attack, but at the cost of the model's utility degradation.
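The following is a minimal sketch, not the authors' exact method, of what a data augmentation-based membership inference query could look like under the intuition described above: augment the target sample, query the black-box model on each augmented view, and use the model's average confidence as the membership signal. The names `model_predict`, `augment`, and `membership_score`, as well as the specific augmentations and the threshold calibration, are assumptions for illustration only.

```python
import numpy as np

def augment(x, rng, n_views=8):
    """Create n_views randomly augmented copies of an image x of shape (H, W, C).

    Simple flips and small translations stand in for whatever
    augmentations the attack actually uses.
    """
    views = []
    for _ in range(n_views):
        v = x.copy()
        if rng.random() < 0.5:                    # random horizontal flip
            v = v[:, ::-1, :]
        shift = rng.integers(-2, 3, size=2)       # small random translation
        v = np.roll(v, shift=tuple(shift), axis=(0, 1))
        views.append(v)
    return np.stack(views)

def membership_score(x, model_predict, rng):
    """Higher score -> more likely that x was a training (member) sample.

    Intuition from the abstract: an SSL-trained model gives unusually
    confident predictions on its training samples, even when those
    predictions are wrong, so average top-1 confidence over augmented
    views serves as the membership signal.
    """
    probs = model_predict(augment(x, rng))        # shape (n_views, n_classes)
    confidences = probs.max(axis=1)               # top-1 confidence per view
    return float(confidences.mean())

# Usage (hypothetical): compare the score against a threshold tau
# calibrated on known non-member samples, e.g. a shadow or hold-out set.
# is_member = membership_score(x, model_predict, np.random.default_rng(0)) > tau
```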