We study the difficulties in learning that arise from robust and differentially private optimization. We first study the convergence of gradient-descent-based adversarial training with differential privacy, taking a simple binary classification task on linearly separable data as an illustrative example. We compare the gap between adversarial and nominal risk in both private and non-private settings, showing that the data-dimensionality-dependent term introduced by private optimization compounds the difficulty of learning a robust model. We then discuss which components of adversarial training and differential privacy hurt optimization, identifying that the size of the adversarial perturbation and the clipping norm in differential privacy both increase the curvature of the loss landscape, which implies poorer generalization performance.
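To make the setting concrete, below is a minimal sketch (ours, not code from the paper) of one step of differentially private adversarial training in PyTorch, assuming a binary classifier with a single logit output. An FGSM-style step of size `eps` stands in for the inner adversarial maximization, and a naive per-example loop stands in for vectorized per-sample gradients; `eps`, `clip_norm`, `noise_multiplier`, and `lr` are illustrative hyperparameters, not values from the text.

```python
import torch
import torch.nn.functional as F

def dp_adversarial_step(model, x, y, eps=0.1, clip_norm=1.0,
                        noise_multiplier=1.0, lr=0.1):
    """One sketched step of DP-SGD on adversarially perturbed inputs."""
    # 1) Craft adversarial examples: a single FGSM step of size eps.
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.binary_cross_entropy_with_logits(model(x_adv).squeeze(-1), y)
    grad_x, = torch.autograd.grad(loss, x_adv)
    x_adv = (x + eps * grad_x.sign()).detach()

    # 2) Per-example gradients on the adversarial batch, each clipped
    #    to L2 norm clip_norm (naive loop for clarity, not efficiency).
    params = [p for p in model.parameters() if p.requires_grad]
    summed = [torch.zeros_like(p) for p in params]
    for xi, yi in zip(x_adv, y):
        li = F.binary_cross_entropy_with_logits(
            model(xi.unsqueeze(0)).squeeze(), yi)
        grads = torch.autograd.grad(li, params)
        total = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        scale = (clip_norm / (total + 1e-12)).clamp(max=1.0)
        for s, g in zip(summed, grads):
            s.add_(g * scale)

    # 3) Add Gaussian noise calibrated to the clipping norm, average,
    #    and take a plain SGD step.
    with torch.no_grad():
        for p, s in zip(params, summed):
            noise = torch.randn_like(s) * noise_multiplier * clip_norm
            p.add_(-(lr / len(x)) * (s + noise))
```

The two knobs the section highlights, the perturbation size `eps` and the clipping norm `clip_norm`, enter directly in steps 1 and 2 above.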