In this work, we study the trade-off between differential privacy and adversarial robustness under L2-perturbations in the context of learning halfspaces. We prove nearly tight bounds on the sample complexity of robust private learning of halfspaces for a large regime of parameters. A highlight of our results is that robust and private learning is harder than robust or private learning alone. We complement our theoretical analysis with experimental results on the MNIST and USPS datasets, for a learning algorithm that is both differentially private and adversarially robust.