We prove a tight lower bound (up to constant factors) on the sample complexity of any non-interactive local differentially private protocol for optimizing a linear function over the simplex. This lower bound also implies a tight lower bound (again, up to constant factors) on the sample complexity of any non-interactive local differentially private protocol implementing the exponential mechanism. These results reveal that any local protocol for these problems has exponentially worse dependence on the dimension than corresponding algorithms in the central model. Previously, Kasiviswanathan et al. (FOCS 2008) proved an exponential separation between local and central model algorithms for PAC learning the class of parity functions. In contrast, our lower bounds are quantitatively tight and apply to a simple and natural class of linear optimization problems, and our techniques are arguably simpler.
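For concreteness, the following is a minimal sketch of the optimization problem and of the exponential mechanism in their standard formulations (the mechanism is due to McSherry and Talwar); the notation $x^{(i)}$, $\Delta_d$, $q_j$, $n$, $d$, and $\varepsilon$ is ours and may differ from the paper's exact setup. Each of $n$ users holds a standard basis vector $x^{(i)} \in \{e_1, \dots, e_d\}$, and the goal is to output $\hat{p} \in \Delta_d$ (the probability simplex) approximately maximizing the linear objective
\[
  \max_{p \in \Delta_d} \langle p, \bar{x} \rangle,
  \qquad \bar{x} = \frac{1}{n}\sum_{i=1}^{n} x^{(i)}.
\]
Since the objective is linear, it is maximized at a vertex $e_{j^\star}$ with $j^\star = \arg\max_j \bar{x}_j$, so the problem reduces to privately selecting an approximately most frequent coordinate. In the central model, the exponential mechanism with scores $q_j(X) = \sum_{i=1}^{n} x^{(i)}_j$ (each of sensitivity $1$) outputs $e_j$ with probability
\[
  \Pr[\text{output } e_j]
  = \frac{\exp\!\big(\varepsilon\, q_j(X)/2\big)}
         {\sum_{k=1}^{d} \exp\!\big(\varepsilon\, q_k(X)/2\big)},
\]
is $\varepsilon$-differentially private, and by the standard utility analysis incurs objective error only $O\big(\log d/(\varepsilon n)\big)$; the lower bound shows that non-interactive local protocols cannot match this logarithmic dependence on $d$.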