Differential privacy (DP) has recently been introduced to linear contextual bandits to formally address the privacy concerns of participating users in the associated personalized services (e.g., recommendations). Prior work largely focuses on two trust models of DP: the central model, where a central server is responsible for protecting users' sensitive data, and the (stronger) local model, where information needs to be protected directly on the user's side. However, there remains a fundamental gap in the utility achievable by learning algorithms under these two privacy models, e.g., $\tilde{O}(\sqrt{T})$ regret in the central model as compared to $\tilde{O}(T^{3/4})$ regret in the local model, if all users are unique within a learning horizon $T$. In this work, we aim to achieve a stronger model of trust than the central model, while suffering a smaller regret than the local model, by considering the recently popular shuffle model of privacy. We propose a general algorithmic framework for linear contextual bandits under the shuffle trust model, in which there exists a trusted shuffler, placed between the users and the central server, that randomly permutes a batch of users' data before sending it to the server. We then instantiate this framework with two specific shuffle protocols: one relying on privacy amplification of local mechanisms, and another incorporating a protocol for summing vectors and matrices of bounded norms. We prove that both instantiations lead to regret guarantees that significantly improve on those of the local model, and can potentially be of the order $\tilde{O}(T^{3/5})$ if all users are unique. We also verify this regret behavior with simulations on synthetic data. Finally, under the practical scenario of non-unique users, we show that the regret of our shuffle-private algorithm scales as $\tilde{O}(T^{2/3})$, which matches what the central model could achieve in this case.
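To make the shuffle trust model's data flow concrete, below is a minimal Python sketch of one batch round. It uses a Gaussian local randomizer and a plain vector sum purely as stand-ins; the function names (`local_randomizer`, `shuffler`, `server_aggregate`) and the noise scale are illustrative assumptions, not the paper's actual protocols:

```python
import numpy as np

rng = np.random.default_rng(0)

def local_randomizer(x, sigma):
    # Each user perturbs their bounded-norm vector on their own device.
    # Gaussian noise is a stand-in here; the paper's two instantiations use
    # privacy amplification of local mechanisms or a dedicated protocol for
    # summing vectors/matrices of bounded norms.
    return x + rng.normal(scale=sigma, size=x.shape)

def shuffler(messages):
    # Trusted shuffler between users and server: uniformly permute the
    # batch, severing the link between each message and the user who sent it.
    perm = rng.permutation(len(messages))
    return [messages[i] for i in perm]

def server_aggregate(messages):
    # The server only ever sees the anonymized batch; it aggregates the
    # messages to update the linear bandit statistics.
    return np.sum(messages, axis=0)

# One batch: n users, each holding a d-dimensional vector with norm <= 1.
d, n, sigma = 5, 100, 0.1
users = [v / max(1.0, np.linalg.norm(v)) for v in rng.normal(size=(n, d))]
noisy = [local_randomizer(x, sigma) for x in users]
estimate = server_aggregate(shuffler(noisy))
```

Note that in this sketch the shuffler adds no noise of its own: the intermediate trust level comes from anonymizing who sent which message within a batch, which is what allows weaker local randomization than the pure local model while the server never handles raw user data.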