Federated learning (FL), where data remains at the federated clients and only gradient updates are shared with a central aggregator, was assumed to be private. Recent work demonstrates that adversaries with gradient-level access can mount successful inference and reconstruction attacks. In such settings, differentially private (DP) learning is known to provide resilience. However, the approaches used in the status quo (\ie central and local DP) introduce disparate privacy vs. utility trade-offs. In this work, we take the first step towards mitigating such trade-offs through {\em hierarchical FL (HFL)}. We demonstrate that introducing a new intermediary level where calibrated DP noise can be added yields better privacy vs. utility trade-offs; we term this {\em hierarchical DP (HDP)}. Our experiments with 3 different datasets (commonly used as benchmarks for FL) suggest that HDP produces models as accurate as those obtained using central DP, where noise is added at a central aggregator. HDP also provides resilience against inference adversaries comparable to that of local DP, where noise is added at the federated clients.
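As a minimal sketch of the idea (the notation below is illustrative and not taken from the abstract), one can think of each intermediary aggregator applying a Gaussian-mechanism-style noising step to the clipped client updates it collects, before forwarding the result to the central aggregator:
\[
\tilde{g}_j \;=\; \sum_{i \in B_j} \mathrm{clip}\!\left(g_i, C\right) \;+\; \mathcal{N}\!\left(0,\, \sigma^2 C^2 \mathbf{I}\right),
\qquad
g \;=\; \frac{1}{\sum_j |B_j|} \sum_j \tilde{g}_j,
\]
where $B_j$ denotes the clients assigned to intermediary aggregator $j$, $g_i$ a client's gradient update clipped to norm $C$, and $\sigma$ the noise multiplier; the placement of the noise at the intermediaries (rather than at the clients or the central aggregator) is what distinguishes this sketch from local and central DP.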