Federated learning (FL) is a framework for machine learning across heterogeneous client devices in a privacy-preserving fashion. To date, most FL algorithms learn a "global" server model across multiple rounds. At each round, the same server model is broadcast to all participating clients, updated locally, and then aggregated across clients. In this work, we propose a more general procedure in which clients "select" what values are sent to them. Notably, this allows clients to operate on smaller, data-dependent slices. In order to make this practical, we outline a primitive, federated select, which enables client-specific selection in realistic FL systems. We discuss how to use federated select for model training and show that it can lead to drastic reductions in communication and client memory usage, potentially enabling the training of models too large to fit on-device. We also discuss the implications of federated select on privacy and trust, which in turn affect possible system constraints and design. Finally, we discuss open questions concerning model architectures, privacy-preserving technologies, and practical FL systems.
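The round structure described above can be sketched in a few lines of Python. This is a toy simulation under illustrative assumptions, not the paper's actual system: the model is a dict of scalar "slices", `select_slices` stands in for a client's data-dependent selection function, and `local_update` stands in for local training. The key point it shows is that each client receives and returns only its selected slices, and the server aggregates per slice.

```python
# Toy sketch of "federated select": clients request only the model
# slices relevant to their data, update them locally, and the server
# averages the returned updates per slice. All names are illustrative.

server_model = {i: 0.0 for i in range(10)}  # model as 10 scalar "slices"

def select_slices(client_data):
    """Client-side selection: pick slice keys relevant to local data."""
    return sorted({x % 10 for x in client_data})

def local_update(slices, client_data):
    """Stand-in for local training: nudge each selected slice."""
    return {k: v + 1.0 for k, v in slices.items()}

def federated_select_round(server_model, clients):
    sums, counts = {}, {}
    for data in clients:
        keys = select_slices(data)
        # The server sends ONLY the selected slices, so communication
        # and client memory scale with the selection, not the full model.
        sent = {k: server_model[k] for k in keys}
        updated = local_update(sent, data)
        for k, v in updated.items():
            sums[k] = sums.get(k, 0.0) + v
            counts[k] = counts.get(k, 0) + 1
    # Aggregate: average updates per slice; unselected slices are unchanged.
    new_model = dict(server_model)
    for k in sums:
        new_model[k] = sums[k] / counts[k]
    return new_model

clients = [[1, 2, 3], [2, 3, 4]]
new_model = federated_select_round(server_model, clients)
```

In a standard FL round every client would receive all 10 slices; here each client receives only 3, which is the communication/memory reduction the abstract refers to.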