Personalization methods in federated learning aim to balance the benefits of federated and local training for data availability, communication cost, and robustness to client heterogeneity. Approaches that require clients to communicate all model parameters can be undesirable due to privacy and communication constraints. Other approaches require always-available or stateful clients, impractical in large-scale cross-device settings. We introduce Federated Reconstruction, the first model-agnostic framework for partially local federated learning suitable for training and inference at scale. We motivate the framework via a connection to model-agnostic meta learning, empirically demonstrate its performance over existing approaches for collaborative filtering and next word prediction, and release an open-source library for evaluating approaches in this setting. We also describe the successful deployment of this approach at scale for federated collaborative filtering in a mobile keyboard application.
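To make the "partially local" idea concrete, below is a minimal sketch of a Federated Reconstruction-style client round for collaborative filtering via matrix factorization, one of the tasks evaluated in the paper. It is an illustration under simplifying assumptions, not the paper's exact algorithm or the released library's API; the function and variable names (e.g. `client_update`, `recon_steps`) are hypothetical. The key point it shows is that the local parameters (the user embedding) are reconstructed from scratch on-device each round and never communicated, while only an update to the global parameters (item embeddings) is sent back to the server.

```python
# Hedged sketch of a partially local client update (Federated Reconstruction style)
# for matrix factorization. All names are illustrative.
import numpy as np

def client_update(global_item_embs, ratings, item_ids,
                  recon_steps=5, update_steps=5, lr=0.1):
    """One client round: reconstruct local (user) parameters with the global
    parameters frozen, then update global (item) parameters with the local
    parameters frozen. Only the delta to the global parameters is returned;
    the user embedding never leaves the device."""
    dim = global_item_embs.shape[1]
    items = global_item_embs[item_ids]          # this client's item embeddings

    # Step 1: reconstruction -- fit the local user embedding from scratch,
    # keeping the global item embeddings fixed.
    user = np.zeros(dim)
    for _ in range(recon_steps):
        preds = items @ user
        user -= lr * (items.T @ (preds - ratings))

    # Step 2: global update -- keep the reconstructed user embedding fixed
    # and take gradient steps on the item embeddings; only this delta would
    # be aggregated by the server (e.g. with federated averaging).
    updated_items = items.copy()
    for _ in range(update_steps):
        preds = updated_items @ user
        updated_items -= lr * np.outer(preds - ratings, user)

    return item_ids, updated_items - items      # global update only
```

In this setup the reconstruction step also serves stateless clients: because local parameters are rebuilt each round rather than stored between rounds, a client that has never participated before can still personalize at inference time.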