Federated Learning is a recent approach to training statistical models on distributed datasets without violating privacy constraints. Data locality is preserved by sharing the model, rather than the data, between the clients and the server. This brings many advantages but also poses new challenges. In this report, we explore this new research area and perform several experiments to deepen our understanding of what these challenges are and how different problem settings affect the performance of the final model. Finally, we present a novel approach to one of these challenges and compare it with other methods found in the literature.