We study the problem of Differentially Private Stochastic Convex Optimization (DP-SCO) with heavy-tailed data. Specifically, we focus on $\ell_1$-norm linear regression in the $\epsilon$-DP model. While most previous work focuses on the case where the loss function is Lipschitz, here we only assume that the variates have bounded moments. First, we study the case where the $\ell_2$ norm of the data has bounded second-order moment. We propose an algorithm based on the exponential mechanism and show that it is possible to achieve an upper bound of $\tilde{O}(\sqrt{\frac{d}{n\epsilon}})$ (with high probability). Next, we relax the assumption to a bounded $\theta$-th order moment for some $\theta\in (1, 2)$ and show that it is possible to achieve an upper bound of $\tilde{O}\big((\frac{d}{n\epsilon})^\frac{\theta-1}{\theta}\big)$. Our algorithms can also be extended to the more relaxed case where only each coordinate of the data has bounded moments, yielding upper bounds of $\tilde{O}(\frac{d}{\sqrt{n\epsilon}})$ and $\tilde{O}(\frac{d}{(n\epsilon)^\frac{\theta-1}{\theta}})$ in the second-order and $\theta$-th order moment cases, respectively.
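To illustrate the kind of procedure referred to above, the following is a minimal sketch of the generic $\epsilon$-DP exponential mechanism applied to $\ell_1$-norm linear regression over a finite candidate set. The candidate grid, the score function, and the unit sensitivity bound are placeholder assumptions for illustration only; they are not the paper's construction, which must control sensitivity carefully in the heavy-tailed (non-Lipschitz) setting.

```python
import numpy as np

def exponential_mechanism(candidates, scores, epsilon, sensitivity):
    """Sample one candidate with probability proportional to
    exp(epsilon * score / (2 * sensitivity)); higher score = better."""
    scores = np.asarray(scores, dtype=float)
    logits = epsilon * scores / (2.0 * sensitivity)
    logits -= logits.max()          # subtract max for numerical stability
    probs = np.exp(logits)
    probs /= probs.sum()
    idx = np.random.default_rng().choice(len(candidates), p=probs)
    return candidates[idx]

# Illustrative use: pick a regression parameter from a coarse grid,
# scoring each candidate by the negative empirical L1 loss.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 2))
y = X @ np.array([1.0, -0.5]) + rng.standard_t(df=3, size=200)  # heavy-tailed noise
grid = [np.array([a, b]) for a in np.linspace(-2, 2, 9)
                          for b in np.linspace(-2, 2, 9)]
scores = [-np.abs(y - X @ w).mean() for w in grid]
# sensitivity=1.0 is an assumed bound here, not the paper's analysis.
theta_hat = exponential_mechanism(grid, scores, epsilon=1.0, sensitivity=1.0)
print(theta_hat)
```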