We propose a new framework for differentially private optimization of convex functions which are Lipschitz in an arbitrary norm $\|\cdot\|$. Our algorithms are based on a regularized exponential mechanism which samples from the density $\propto \exp(-k(F+\mu r))$ where $F$ is the empirical loss and $r$ is a regularizer which is strongly convex with respect to $\|\cdot\|$, generalizing a recent work of [Gopi, Lee, Liu '22] to non-Euclidean settings. We show that this mechanism satisfies Gaussian differential privacy and solves both DP-ERM (empirical risk minimization) and DP-SCO (stochastic convex optimization) by using localization tools from convex geometry. Our framework is the first to apply to private convex optimization in general normed spaces and directly recovers non-private SCO rates achieved by mirror descent as the privacy parameter $\epsilon \to \infty$. As applications, for Lipschitz optimization in $\ell_p$ norms for all $p \in (1, 2)$, we obtain the first optimal privacy-utility tradeoffs; for $p = 1$, we improve tradeoffs obtained by the recent works [Asi, Feldman, Koren, Talwar '21, Bassily, Guzman, Nandi '21] by at least a logarithmic factor. Our $\ell_p$ norm and Schatten-$p$ norm optimization frameworks are complemented with polynomial-time samplers whose query complexity we explicitly bound.
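The regularized exponential mechanism above can be illustrated with a minimal one-dimensional sketch: we discretize a grid, evaluate the density $\propto \exp(-k(F + \mu r))$, and sample from it. The toy loss $F$, the regularizer $r(x) = x^2/2$, and the parameters `k` and `mu` are all illustrative assumptions, not the paper's actual settings (the paper ties $k$ and $\mu$ to the privacy budget and the norm's geometry).

```python
import numpy as np

# Illustrative 1-D sketch of the regularized exponential mechanism:
# sample from density proportional to exp(-k * (F(x) + mu * r(x))).
# F is a toy empirical loss (mean absolute deviation from a toy dataset,
# hence 1-Lipschitz); r(x) = x^2/2 is strongly convex. The parameters
# k and mu are hypothetical placeholders for this sketch.

rng = np.random.default_rng(0)
data = np.array([-0.5, 0.2, 0.9])                         # toy dataset

def F(x):
    # empirical loss: average absolute deviation from the data points
    return np.mean(np.abs(x - data[:, None]), axis=0)

def r(x):
    # strongly convex regularizer
    return 0.5 * x**2

k, mu = 4.0, 0.1

xs = np.linspace(-3.0, 3.0, 2001)                         # grid discretization
logp = -k * (F(xs) + mu * r(xs))
p = np.exp(logp - logp.max())                             # stabilize, then normalize
p /= p.sum()

sample = rng.choice(xs, p=p)                              # one draw from the mechanism
```

In higher dimensions this grid approach is infeasible; the paper instead bounds the query complexity of polynomial-time samplers for these log-concave densities.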