Differentially private training algorithms provide protection against one of the most popular attacks in machine learning: the membership inference attack. However, these privacy algorithms incur a loss in the model's classification accuracy, thereby creating a privacy-utility trade-off. The amount of noise that differential privacy requires to provide strong theoretical protection guarantees in deep learning typically renders the models unusable, but prior work has observed that even lower noise levels provide acceptable empirical protection against existing membership inference attacks. In this work, we look for alternatives to differential privacy for empirically protecting against membership inference attacks. We study the protection that simply following good machine learning practices (not designed with privacy in mind) offers against membership inference. We evaluate state-of-the-art techniques, such as pre-training and sharpness-aware minimization, alone and in combination with differentially private training algorithms, and find that, when using early stopping, the techniques without differential privacy can provide both higher utility and higher privacy than their differentially private counterparts. These findings challenge the belief that differential privacy is an effective defense against existing membership inference attacks.
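To make the threat model concrete, below is a minimal sketch of one of the simplest membership inference attacks, a loss-threshold attack in the spirit of Yeom et al. (2018): an example is predicted to be a training-set member when the model's loss on it falls below a threshold. The function name, synthetic losses, and threshold choice are illustrative assumptions, not the specific attacks evaluated in this work.

```python
import numpy as np

def loss_threshold_mia(member_losses, nonmember_losses, threshold):
    """Hypothetical sketch: predict 'member' when per-example loss < threshold.

    Lower loss on a sample suggests the model saw it during training,
    which is the signal a loss-threshold membership inference attack exploits.
    """
    member_preds = member_losses < threshold        # ideally mostly True
    nonmember_preds = nonmember_losses < threshold  # ideally mostly False

    tpr = member_preds.mean()       # true positive rate on training members
    fpr = nonmember_preds.mean()    # false positive rate on non-members
    return tpr - fpr                # membership advantage in [-1, 1]

# Usage with synthetic losses (assumption: members tend to have lower loss).
rng = np.random.default_rng(0)
member_losses = rng.exponential(scale=0.2, size=1000)
nonmember_losses = rng.exponential(scale=0.6, size=1000)
print(loss_threshold_mia(member_losses, nonmember_losses, threshold=0.3))
```

An advantage near zero means the attacker does little better than random guessing, which is the empirical notion of protection the abstract refers to; defenses such as differentially private training, early stopping, or better-generalizing training recipes aim to shrink this gap.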