Recent studies of distributed computation with formal privacy guarantees, such as differentially private (DP) federated learning, leverage random sampling of clients in each round (privacy amplification by subsampling) to achieve satisfactory levels of privacy. Achieving this, however, requires strong assumptions that may not hold in practice, including precise and uniform subsampling of clients and a highly trusted aggregator to process clients' data. In this paper, we explore a more practical protocol, shuffled check-in, to resolve these issues. The protocol relies on each client making an independent and random decision to participate in the computation, removing the requirement of server-initiated subsampling and enabling robust modelling of client dropouts. Moreover, a weaker trust model known as the shuffle model is employed instead of a trusted aggregator. To this end, we introduce new tools to characterize the R\'enyi differential privacy (RDP) of shuffled check-in. We show that our new techniques improve the privacy guarantee by at least three times over those obtained via approximate DP's strong composition across various parameter regimes. Furthermore, we provide a numerical approach to track the privacy of generic shuffled check-in mechanisms, including distributed stochastic gradient descent (SGD) with the Gaussian mechanism. To the best of our knowledge, this is also the first evaluation of the Gaussian mechanism within the local/shuffle model in a distributed setting, which may be of independent interest.
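To make the protocol concrete, the following is a minimal illustrative sketch (not the paper's implementation) of one round of shuffled check-in: each client independently flips a coin with check-in probability p, participating clients apply a local randomizer (here, a toy Gaussian mechanism), and a shuffler permutes the reports before the untrusted aggregator sees them. All function and parameter names here are assumptions chosen for illustration.

```python
import random


def shuffled_check_in(clients, p, local_randomizer, rng=random):
    """Sketch of one round of shuffled check-in.

    Each client independently checks in with probability p (no
    server-initiated subsampling), locally randomizes its value,
    and a shuffler permutes the reports so the aggregator cannot
    link a report to the client that sent it.
    """
    # Independent, client-side participation decisions.
    reports = [local_randomizer(c) for c in clients if rng.random() < p]
    # Shuffler: uniformly permute reports before aggregation.
    rng.shuffle(reports)
    return reports


def gaussian_randomizer(x, sigma=1.0, rng=random):
    """Toy local Gaussian mechanism: add N(0, sigma^2) noise."""
    return x + rng.gauss(0.0, sigma)
```

For example, `shuffled_check_in(values, p=0.1, local_randomizer=gaussian_randomizer)` yields a shuffled batch of noised values whose size is itself random, which is precisely why the privacy analysis must account for the randomized check-in rather than a fixed subsample size.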