Stochastic convex optimization over an $\ell_1$-bounded domain is ubiquitous in machine learning applications such as LASSO but remains poorly understood when learning with differential privacy. We show that, up to logarithmic factors, the optimal excess population loss of any $(\varepsilon,\delta)$-differentially private optimizer is $\sqrt{\log(d)/n} + \sqrt{d}/(\varepsilon n)$. The upper bound is based on a new algorithm that combines the iterative localization approach of~\citet{FeldmanKoTa20} with a new analysis of private regularized mirror descent. It applies to $\ell_p$-bounded domains for $p\in [1,2]$ and queries at most $n^{3/2}$ gradients, improving over the best previously known algorithm for the $\ell_2$ case, which needs $n^2$ gradients. Further, we show that when the loss functions satisfy additional smoothness assumptions, the excess loss is upper bounded (up to logarithmic factors) by $\sqrt{\log(d)/n} + (\log(d)/(\varepsilon n))^{2/3}$. This bound is achieved by a new variance-reduced version of the Frank-Wolfe algorithm that requires just a single pass over the data. We also show that the lower bound in this case is the minimum of the two rates mentioned above.
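For reference, the two rates stated above can be displayed side by side; the notation here ($d$ the dimension, $n$ the sample size, $\varepsilon$ the privacy parameter) follows the abstract, and logarithmic factors are suppressed:
\[
\underbrace{\sqrt{\frac{\log d}{n}} \;+\; \frac{\sqrt{d}}{\varepsilon n}}_{\text{general convex losses (optimal)}}
\qquad\text{and}\qquad
\underbrace{\sqrt{\frac{\log d}{n}} \;+\; \left(\frac{\log d}{\varepsilon n}\right)^{2/3}}_{\text{smooth losses (upper bound)}},
\]
with the lower bound in the smooth case given by the minimum of the two expressions.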