We propose a new framework for differentially private optimization of convex functions which are Lipschitz in an arbitrary norm $\normx{\cdot}$. Our algorithms are based on a regularized exponential mechanism which samples from the density $\propto \exp(-k(F+\mu r))$ where $F$ is the empirical loss and $r$ is a regularizer which is strongly convex with respect to $\normx{\cdot}$, generalizing a recent work of \cite{GLL22} to non-Euclidean settings. We show that this mechanism satisfies Gaussian differential privacy and solves both DP-ERM (empirical risk minimization) and DP-SCO (stochastic convex optimization), by using localization tools from convex geometry. Our framework is the first to apply to private convex optimization in general normed spaces, and directly recovers non-private SCO rates achieved by mirror descent, as the privacy parameter $\eps \to \infty$. As applications, for Lipschitz optimization in $\ell_p$ norms for all $p \in (1, 2)$, we obtain the first optimal privacy-utility tradeoffs; for $p = 1$, we improve tradeoffs obtained by the recent works \cite{AsiFKT21, BassilyGN21} by at least a logarithmic factor. Our $\ell_p$ norm and Schatten-$p$ norm optimization frameworks are complemented with polynomial-time samplers whose query complexity we explicitly bound.
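Below is a minimal one-dimensional sketch (not the paper's polynomial-time sampler) of drawing from the regularized exponential mechanism's density $\propto \exp(-k(F + \mu r))$ by discretizing the domain. The loss $F$ (mean absolute deviation), regularizer $r$ (squared Euclidean), and parameters $k$, $\mu$, and the synthetic data are illustrative placeholders, not quantities from the paper.

\begin{verbatim}
import numpy as np

# Illustrative 1-D sketch of the regularized exponential mechanism:
# sample theta with density proportional to exp(-k * (F(theta) + mu * r(theta))).
# F, r, k, mu, and the data below are placeholder choices, not the paper's.

rng = np.random.default_rng(0)
data = rng.normal(loc=0.5, scale=1.0, size=100)

def F(theta):
    # Empirical loss: mean absolute deviation, 1-Lipschitz in |.|.
    return np.mean(np.abs(data[:, None] - theta[None, :]), axis=0)

def r(theta):
    # Regularizer, strongly convex w.r.t. the chosen norm.
    return 0.5 * theta**2

k, mu = 50.0, 0.1
grid = np.linspace(-3.0, 3.0, 10001)
log_density = -k * (F(grid) + mu * r(grid))
weights = np.exp(log_density - log_density.max())  # stabilize before normalizing
weights /= weights.sum()

sample = rng.choice(grid, p=weights)  # one draw from the mechanism
print(sample)
\end{verbatim}

In this sketch the inverse-temperature $k$ controls how concentrated the draw is around the regularized minimizer, which is the knob governing the privacy-utility tradeoff described in the abstract; the paper's samplers avoid grid discretization and come with explicit query-complexity bounds.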