In federated learning, a set of clients collaboratively trains a model while each client wants to remain in control of how its local training data is used; in particular, how can each client's local training data remain private? Differential privacy is one method for limiting privacy leakage. We provide a general overview of its framework and provable properties, adopt the more recent hypothesis-testing based definition called Gaussian DP or $f$-DP, and discuss Differentially Private Stochastic Gradient Descent (DP-SGD). We stay at a meta level and aim for intuitive explanations and insights \textit{in this book chapter}.
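To make the DP-SGD discussion concrete, the following is a minimal sketch of its core update rule (per-example gradient clipping followed by Gaussian noise), written with plain NumPy on a toy least-squares problem. The data, model, and hyperparameter names (\texttt{clip\_norm}, \texttt{noise\_multiplier}, etc.) are illustrative assumptions, not the chapter's own notation, and the uniform mini-batch sampling simplifies the Poisson subsampling typically assumed in the privacy analysis.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

# Toy data set: n examples, d features, for a least-squares objective.
n, d = 256, 5
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.1 * rng.normal(size=n)

def per_example_grads(w, Xb, yb):
    """Gradient of 0.5 * (x.w - y)^2 for each example in the batch."""
    residuals = Xb @ w - yb          # shape (batch,)
    return residuals[:, None] * Xb   # shape (batch, d)

def dp_sgd_step(w, Xb, yb, lr=0.1, clip_norm=1.0, noise_multiplier=1.0):
    grads = per_example_grads(w, Xb, yb)
    # Clip each per-example gradient to L2 norm at most clip_norm.
    norms = np.linalg.norm(grads, axis=1, keepdims=True)
    grads = grads * np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    # Sum, add Gaussian noise scaled to the clipping norm, then average.
    noisy_sum = grads.sum(axis=0) + rng.normal(
        scale=noise_multiplier * clip_norm, size=d)
    return w - lr * noisy_sum / len(Xb)

w = np.zeros(d)
for step in range(200):
    # Uniform subsampling here; DP accounting usually assumes Poisson sampling.
    idx = rng.choice(n, size=32, replace=False)
    w = dp_sgd_step(w, X[idx], y[idx])
print("final parameter error:", np.linalg.norm(w - w_true))
\end{verbatim}

The clipping step bounds each example's influence on the update, which is what lets the added Gaussian noise be calibrated to a fixed sensitivity; the noise multiplier then determines the privacy guarantee under Gaussian DP / $f$-DP composition.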