Federated Learning (FL) is a collaborative scheme for training a learning model across multiple participants without sharing their data. While FL is a clear step forward towards enforcing users' privacy, various inference attacks have been developed against it. In this paper, we quantify the utility and privacy trade-off of an FL scheme using private personalized layers. While this scheme was proposed as a local adaptation to improve the accuracy of the model through local personalization, it also has the advantage of minimizing the information about the model exchanged with the server. However, the privacy of such a scheme has never been quantified. Our evaluation on a motion sensor dataset shows that personalized layers speed up the convergence of the model and slightly improve accuracy for all users compared to a standard FL scheme, while better preventing both attribute and membership inference attacks than an FL scheme using local differential privacy.
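The personalized-layers idea can be sketched as follows: each client keeps a private "head" that never leaves the device, and only the shared body of the model is averaged by the server. This is a minimal illustrative sketch; the layer shapes, the fake gradient updates, and the FedAvg aggregation rule are assumptions for demonstration, not the paper's actual model or protocol.

```python
import numpy as np

rng = np.random.default_rng(0)

N_CLIENTS = 3
SHARED_DIM, HEAD_DIM = 8, 4  # sizes of shared body vs. personalized head

# Each client holds shared weights (synced with the server) and a private head.
clients = [
    {"shared": rng.normal(size=SHARED_DIM), "head": rng.normal(size=HEAD_DIM)}
    for _ in range(N_CLIENTS)
]

def local_update(client, lr=0.1):
    """Simulate one local training step: both parts are updated locally,
    but only the shared part will ever be sent to the server."""
    client["shared"] -= lr * rng.normal(size=SHARED_DIM)  # placeholder gradient
    client["head"] -= lr * rng.normal(size=HEAD_DIM)      # stays on-device
    return client["shared"]

def fedavg(shared_updates):
    """Server-side FedAvg over the shared layers only."""
    return np.mean(shared_updates, axis=0)

for _round in range(5):
    updates = [local_update(c) for c in clients]
    global_shared = fedavg(updates)
    for c in clients:
        c["shared"] = global_shared.copy()  # heads are never exchanged

# After each round, every client agrees on the shared body...
assert all(np.allclose(c["shared"], clients[0]["shared"]) for c in clients)
# ...while the personalized heads remain distinct and were never uploaded.
assert not np.allclose(clients[0]["head"], clients[1]["head"])
```

Because the head parameters never appear in any message to the server, a curious server only observes the averaged shared layers, which is the intuition behind the reduced attribute and membership inference risk studied in the paper.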