While many solutions for privacy-preserving convex empirical risk minimization (ERM) have been developed, privacy-preserving nonconvex ERM remains a challenge. We study nonconvex ERM, which takes the form of minimizing a finite sum of nonconvex loss functions over a training set. We propose a new differentially private stochastic gradient descent algorithm for nonconvex ERM that achieves strong privacy guarantees efficiently, and we provide a tight analysis of its privacy and utility guarantees, as well as its gradient complexity. Our algorithm reduces gradient complexity while improving on the best previous utility guarantee, given by Wang et al. (NeurIPS 2017). Our experiments on benchmark nonconvex ERM problems demonstrate superior performance in terms of both training cost and utility compared with previous differentially private methods under the same privacy budgets.
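The abstract's differentially private SGD follows the standard DP-SGD template: clip each per-example gradient, average, and perturb with Gaussian noise before the update. The sketch below illustrates that generic template only, not the paper's specific algorithm or its noise calibration; the function name, clipping rule, and noise scale are illustrative assumptions.

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, clip_norm, noise_multiplier, lr, rng):
    """One step of the generic DP-SGD template (a sketch, not the
    paper's exact method):
      1. clip each per-example gradient to L2 norm <= clip_norm,
      2. average the clipped gradients,
      3. add Gaussian noise with std noise_multiplier * clip_norm / batch_size,
      4. take a gradient descent step with learning rate lr.
    """
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale down (never up) so that ||g|| <= clip_norm.
        clipped.append(g * min(1.0, clip_norm / max(norm, 1e-12)))
    avg = np.mean(clipped, axis=0)
    # Gaussian noise scaled to the clipping threshold (sensitivity bound).
    noise = rng.normal(
        0.0,
        noise_multiplier * clip_norm / len(per_example_grads),
        size=avg.shape,
    )
    return params - lr * (avg + noise)

# Toy usage: one step on two per-example gradients of a 2-D parameter.
rng = np.random.default_rng(0)
params = np.zeros(2)
grads = [np.array([3.0, 4.0]), np.array([0.0, 1.0])]  # first has norm 5, gets clipped
new_params = dp_sgd_step(params, grads, clip_norm=1.0,
                         noise_multiplier=1.0, lr=0.1, rng=rng)
```

With `noise_multiplier=0` the step reduces to plain clipped SGD, which is a convenient sanity check; the privacy guarantee itself comes from calibrating the noise to the privacy budget, which the paper's analysis makes precise.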