In this paper we study lower bounds for differentially private empirical risk minimization (DP-ERM) with general convex loss functions. For convex generalized linear models (GLMs), the tight bound for DP-ERM in the constrained case is well known to be $\tilde{\Theta}(\frac{\sqrt{p}}{\epsilon n})$, while recently \cite{sstt21} showed that the tight bound in the unconstrained case is $\tilde{\Theta}(\frac{\sqrt{\text{rank}}}{\epsilon n})$, where $p$ is the dimension, $n$ is the sample size, and $\text{rank}$ is the rank of the feature matrix of the GLM objective function. Since $\text{rank}\leq \min\{n,p\}$, a natural and important question arises: can we evade the curse of dimensionality for over-parameterized models where $n\ll p$ when the loss is a general convex function beyond GLMs? We answer this question negatively by giving the first tight lower bound for unconstrained private ERM with general convex functions, matching the current upper bound $\tilde{O}(\frac{\sqrt{p}}{n\epsilon})$ for unconstrained private ERM. We also give an $\Omega(\frac{p}{n\epsilon})$ lower bound for unconstrained pure-DP ERM, which recovers the result in the constrained case.
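To make the gap concrete, the following display (a simple consequence of $\text{rank}\leq \min\{n,p\}$, included here only as an illustration) shows why the GLM bound suggests dimension independence in the over-parameterized regime $n\ll p$:
\[
\frac{\sqrt{\text{rank}}}{\epsilon n} \;\le\; \frac{\sqrt{n}}{\epsilon n} \;=\; \frac{1}{\epsilon\sqrt{n}} \;\ll\; \frac{\sqrt{p}}{\epsilon n} \qquad \text{when } n\ll p,
\]
since $\text{rank}\leq \min\{n,p\}\leq n$. Our lower bound shows that, for general convex losses, this improvement is not attainable.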