We consider the problem of minimizing a non-convex objective while preserving the privacy of the examples in the training data. Building on the variance-reduced algorithm SpiderBoost, we introduce a new framework that uses two different kinds of gradient oracles: the first kind estimates the gradient at a single point, while the second kind, less precise but more cost-effective, estimates the gradient difference between two points. SpiderBoost invokes the first oracle periodically, once every fixed number of steps, whereas our framework invokes the first oracle only when the total drift of the iterates has grown large and relies on the second oracle otherwise. This keeps the gradient estimates accurate throughout, yielding improved rates for finding second-order stationary points. Moreover, we address the more challenging task of finding the global minimum of a non-convex objective using the exponential mechanism. Our results show that the regularized exponential mechanism can closely match previous empirical and population risk bounds, without requiring smoothness assumptions, for algorithms with polynomial running time. Furthermore, when running time is disregarded, we show that the exponential mechanism achieves a good population risk bound, and we provide a nearly matching lower bound.
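For intuition, the following is a minimal sketch of the drift-triggered oracle-switching estimator described above. All names, the update rule, and the drift test are illustrative assumptions rather than the paper's actual pseudocode; in the private setting each oracle call would additionally inject calibrated noise.

```python
import numpy as np

def drift_triggered_sgd(grad_oracle, grad_diff_oracle, x_init,
                        step_size, drift_threshold, num_steps):
    """Hypothetical sketch of variance reduction with drift-triggered refresh.

    grad_oracle(x): precise, costlier estimate of the gradient at x.
    grad_diff_oracle(x, y): cheaper estimate of grad(x) - grad(y).
    """
    x = x_init.copy()
    v = grad_oracle(x)       # start from an accurate gradient estimate
    anchor = x.copy()        # iterate at the last full-gradient refresh
    for _ in range(num_steps):
        x_new = x - step_size * v
        # Once the total drift since the last refresh is large, the
        # accumulated error of the difference oracle is no longer small,
        # so we refresh with the precise oracle; otherwise we update cheaply.
        if np.linalg.norm(x_new - anchor) > drift_threshold:
            v = grad_oracle(x_new)
            anchor = x_new.copy()
        else:
            v = v + grad_diff_oracle(x_new, x)
        x = x_new
    return x
```

By contrast, a SpiderBoost-style schedule would refresh on a fixed clock (every q steps) regardless of how far the iterates have actually moved; the drift-based test above refreshes exactly when accuracy demands it.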