Although robust learning and local differential privacy are both widely studied fields of research, the combination of the two settings has only recently begun to be explored. We consider the problem of estimating a discrete distribution in total variation distance from $n$ contaminated data batches under a local differential privacy constraint. A fraction $1-\epsilon$ of the batches contain $k$ i.i.d. samples drawn from a discrete distribution $p$ over $d$ elements. To protect the users' privacy, each of the samples is privatized using an $\alpha$-locally differentially private mechanism. The remaining $\epsilon n$ batches are an adversarial contamination. The minimax rate of estimation under contamination alone, with no privacy constraint, is known to be $\epsilon/\sqrt{k}+\sqrt{d/(kn)}$, up to a $\sqrt{\log(1/\epsilon)}$ factor. Under the privacy constraint alone, the minimax rate of estimation is $\sqrt{d^2/(\alpha^2 kn)}$. We show that combining the two constraints leads to a minimax estimation rate of $\epsilon\sqrt{d/(\alpha^2 k)}+\sqrt{d^2/(\alpha^2 kn)}$, up to a $\sqrt{\log(1/\epsilon)}$ factor, which is larger than the sum of the two separate rates. We provide a polynomial-time algorithm achieving this bound, together with a matching information-theoretic lower bound.
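As a rough numerical illustration of the rates quoted above, the following minimal Python sketch plugs hypothetical parameter values (chosen purely for illustration, not taken from the paper) into the three formulas. It only compares the stated rates; it is not an implementation of the estimation algorithm.

```python
import math

# Hypothetical parameters (illustrative only, not from the paper):
# n batches of k samples each, distribution over d elements,
# contamination fraction eps, local privacy level alpha.
n, k, d = 10_000, 50, 100
eps, alpha = 0.05, 1.0

# Minimax rate under contamination alone (no privacy), up to sqrt(log(1/eps)) factors.
rate_contamination = eps / math.sqrt(k) + math.sqrt(d / (k * n))

# Minimax rate under the privacy constraint alone (no contamination).
rate_privacy = math.sqrt(d**2 / (alpha**2 * k * n))

# Combined rate stated in the abstract, up to sqrt(log(1/eps)) factors.
rate_combined = eps * math.sqrt(d / (alpha**2 * k)) + math.sqrt(d**2 / (alpha**2 * k * n))

print(f"contamination only: {rate_contamination:.4f}")
print(f"privacy only      : {rate_privacy:.4f}")
print(f"combined          : {rate_combined:.4f}")
print(f"sum of separate   : {rate_contamination + rate_privacy:.4f}")
```

For these values the combined rate exceeds the sum of the two separate rates, matching the qualitative claim in the abstract.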