We study locally differentially private (LDP) bandits learning in this paper. First, we propose simple black-box reduction frameworks that can solve a large family of context-free bandits learning problems with LDP guarantees. Based on our frameworks, we can improve the previous best results for private bandits learning with one-point feedback, such as private Bandits Convex Optimization (BCO), and obtain the first result for BCO with multi-point feedback under LDP. The LDP guarantee and the black-box nature make our frameworks more attractive in real applications than previous specifically designed context-free bandits algorithms with the relatively weaker differential privacy (DP) guarantee. Further, we extend our $(\varepsilon, \delta)$-LDP algorithm to Generalized Linear Bandits, which enjoys a sublinear regret of $\tilde{O}(T^{3/4}/\varepsilon)$ and is conjectured to be nearly optimal. Note that, given the existing $\Omega(T)$ lower bound for DP contextual linear bandits (Shariff & Sheffet, 2018), our result shows a fundamental difference between LDP and DP contextual bandits learning.
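For concreteness, one standard way to realize the local privatization step in such a black-box reduction (a minimal sketch under the assumption of bounded feedback and the classic Gaussian-mechanism calibration, not necessarily the exact scheme of the paper) is: each player perturbs its one-point feedback $g_t$ with $\|g_t\|_2 \le B$ before releasing it,
$$
\tilde{g}_t \;=\; g_t + z_t, \qquad z_t \sim \mathcal{N}\!\left(0, \sigma^2 I\right), \qquad \sigma \;=\; \frac{2B\sqrt{2\ln(1.25/\delta)}}{\varepsilon},
$$
where $2B$ is the $\ell_2$-sensitivity of the report. By the standard Gaussian mechanism, the released $\tilde{g}_t$ satisfies $(\varepsilon,\delta)$-LDP (for $\varepsilon \le 1$), and any non-private bandit algorithm can then be run on the perturbed feedback, with its regret analysis only needing to absorb the extra noise variance.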