Federated learning (FL) is an emerging machine learning paradigm involving multiple clients, e.g., mobile devices, with an incentive to collaborate in solving a machine learning problem coordinated by a central server. FL was proposed in 2016 by Kone\v{c}n\'{y} et al. and McMahan et al. as a viable privacy-preserving alternative to traditional centralized machine learning: by construction, the training data points remain decentralized and are never transferred by the clients to the central server. Therefore, to a certain degree, FL mitigates the privacy risks associated with centralized data collection. Unfortunately, optimization for FL faces several specific issues that centralized optimization usually does not need to handle. In this thesis, we identify several of these challenges and propose new methods and algorithms to address them, with the ultimate goal of enabling practical FL solutions supported by mathematically rigorous guarantees.
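The FL setup described above can be illustrated with a minimal sketch of one Federated Averaging (FedAvg) round, the baseline algorithm of McMahan et al.: each client runs local SGD on its own private data and sends only model parameters to the server, which averages them. This is an illustrative toy (names such as `local_sgd` and `fedavg_round`, and the 1-D least-squares model, are assumptions for the example), not an algorithm proposed in this thesis.

```python
import random

def local_sgd(w, data, lr=0.1, steps=10):
    """Run SGD for a 1-D least-squares model y ~ w*x on the client's local data."""
    for _ in range(steps):
        x, y = random.choice(data)
        grad = 2 * (w * x - y) * x  # gradient of the squared error (w*x - y)^2
        w -= lr * grad
    return w

def fedavg_round(w_server, client_datasets):
    """Each client trains locally; the server averages the returned weights.
    Only model parameters cross the network -- raw data never leaves a client."""
    local_weights = [local_sgd(w_server, d) for d in client_datasets]
    return sum(local_weights) / len(local_weights)

random.seed(0)
# Three clients, each holding private samples from y = 3*x plus small noise.
clients = [[(x, 3.0 * x + random.gauss(0, 0.1)) for x in (1.0, 2.0, 3.0)]
           for _ in range(3)]

w = 0.0
for _ in range(20):  # 20 communication rounds
    w = fedavg_round(w, clients)
# w approaches the true slope 3.0 without the server ever seeing client data
```

Even this toy exposes the optimization issues the thesis targets: the averaged model can drift when client data distributions differ, and each round costs a full communication exchange.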