Bayesian optimization (BO) has recently been extended to the federated learning (FL) setting by the federated Thompson sampling (FTS) algorithm, which has promising applications such as federated hyperparameter tuning. However, FTS is not equipped with a rigorous privacy guarantee, which is an important consideration in FL. Recent works have incorporated differential privacy (DP) into the training of deep neural networks through a general framework for adding DP to iterative algorithms. Following this general DP framework, our work integrates DP into FTS to preserve user-level privacy. We also leverage the ability of this general DP framework to handle different parameter vectors, as well as the technique of local modeling for BO, to further improve the utility of our algorithm through distributed exploration (DE). The resulting differentially private FTS with DE (DP-FTS-DE) algorithm is endowed with theoretical guarantees for both privacy and utility, and admits interesting theoretical insights about the privacy-utility trade-off. We also use real-world experiments to show that DP-FTS-DE achieves high utility (competitive performance) with a strong privacy guarantee (small privacy loss) and induces a trade-off between privacy and utility.
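To make the core idea concrete, below is a minimal sketch of one server-side aggregation round under the general DP framework referenced above: each agent submits a parameter vector (e.g., sampled surrogate-model weights), and the server subsamples agents, clips each contribution, averages, and adds Gaussian noise. The function name `dp_aggregate` and the parameters `clip_norm`, `noise_multiplier`, and `subsample_rate` are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def dp_aggregate(agent_vectors, clip_norm, noise_multiplier, subsample_rate, rng=None):
    """Hypothetical sketch of one DP aggregation round at the central server.

    Each agent submits a parameter vector (e.g., a sampled vector of weights
    from its local BO surrogate). The server subsamples participating agents,
    clips each vector to bound a single agent's influence, averages the
    clipped vectors, and perturbs the result with Gaussian noise, in the
    spirit of the general framework for adding DP to iterative algorithms.
    """
    rng = rng or np.random.default_rng()
    # Randomly subsample participating agents (privacy amplification by subsampling).
    mask = rng.random(len(agent_vectors)) < subsample_rate
    selected = [v for v, keep in zip(agent_vectors, mask) if keep]
    if not selected:
        return None  # no agent participated in this round
    # Clip each vector so a single agent's contribution has bounded norm.
    clipped = [v * min(1.0, clip_norm / (np.linalg.norm(v) + 1e-12)) for v in selected]
    # Average the clipped vectors and add Gaussian noise scaled to the sensitivity.
    mean = np.mean(clipped, axis=0)
    sigma = noise_multiplier * clip_norm / len(selected)
    return mean + rng.normal(scale=sigma, size=mean.shape)
```

In this sketch, larger `noise_multiplier` or smaller `clip_norm` strengthens the privacy guarantee but degrades the fidelity of the aggregated vector, which is one way to view the privacy-utility trade-off discussed in the abstract.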