We study stochastic convex optimization with heavy-tailed data under the constraint of differential privacy. Most prior work on this problem is restricted to the case where the loss function is Lipschitz. Instead, as introduced by Wang, Xiao, Devadas, and Xu, we study general convex loss functions with the assumption that the distribution of gradients has bounded $k$-th moments. We provide improved upper bounds on the excess population risk under approximate differential privacy of $\tilde{O}\left(\sqrt{\frac{d}{n}}+\left(\frac{d}{\epsilon n}\right)^{\frac{k-1}{k}}\right)$ and $\tilde{O}\left(\frac{d}{n}+\left(\frac{d}{\epsilon n}\right)^{\frac{2k-2}{k}}\right)$ for convex and strongly convex loss functions, respectively. We also prove nearly-matching lower bounds under the constraint of pure differential privacy, giving strong evidence that our bounds are tight.
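For concreteness, one standard way to formalize the bounded $k$-th moment condition is coordinate-wise (a sketch of the assumption only; the exact normalization and whether the bound is taken coordinate-wise or in norm follow the paper's setup):
\[
  \mathbb{E}_{z \sim \mathcal{D}}\!\left[\, \bigl|\langle \nabla \ell(x; z), e_j \rangle\bigr|^{k} \,\right] \;\le\; 1
  \qquad \text{for all } x \in \mathcal{X} \text{ and } j \in [d],
\]
where $\ell$ is the loss, $\mathcal{D}$ the data distribution, and $e_j$ the $j$-th standard basis vector. Note that as $k \to \infty$ the privacy term $\left(\frac{d}{\epsilon n}\right)^{\frac{k-1}{k}}$ in the convex bound approaches $\frac{d}{\epsilon n}$, while smaller $k$ (heavier tails) yields a slower rate.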