In federated learning, a central server coordinates the training of a single model over a massively distributed network of devices. This setting extends naturally to a multi-task learning framework, which can handle the strong statistical heterogeneity across devices that real-world federated datasets typically exhibit. Although federated multi-task learning has been shown to be an effective paradigm for real-world datasets, it has so far been applied only to convex models. In this work, we introduce VIRTUAL, an algorithm for federated multi-task learning with general non-convex models. In VIRTUAL, the federated network of server and clients is treated as a star-shaped Bayesian network, and learning is performed on this network using approximate variational inference. We show that this method is effective on real-world federated datasets, outperforming the current state of the art for federated learning while allowing for sparser gradient updates.
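To make the star-shaped setup concrete, below is a minimal toy sketch of the general idea the abstract describes, not the VIRTUAL algorithm itself. It assumes a mean-field Gaussian posterior over a shared server weight vector and per-client weight vectors, analytic ELBO gradients for Bayesian linear regression with known noise, and FedAvg-style averaging of the server's variational parameters; all names (`elbo_grads`, `NOISE_VAR`, etc.) and these modeling choices are our assumptions, not the paper's API or its exact inference scheme.

```python
# Illustrative sketch (not the authors' algorithm): variational inference on a
# star-shaped model with a shared server weight vector w_s and per-client
# weight vectors w_i, for Bayesian linear regression with known noise.
import numpy as np

rng = np.random.default_rng(0)
D, M, N = 5, 10, 40          # feature dim, number of clients, samples per client
NOISE_VAR = 0.1              # assumed known observation noise variance
LR, LOCAL_STEPS, ROUNDS = 5e-4, 25, 30

# Heterogeneous synthetic data: one shared effect plus a client-specific shift.
w_shared = rng.normal(size=D)
clients = []
for _ in range(M):
    w_local = 0.3 * rng.normal(size=D)
    X = rng.normal(size=(N, D))
    y = X @ (w_shared + w_local) + np.sqrt(NOISE_VAR) * rng.normal(size=N)
    clients.append((X, y))

def elbo_grads(X, y, mu_s, rho_s, mu_i, rho_i):
    """Analytic ELBO gradients for mean-field Gaussians q(w) = N(mu, exp(2*rho))."""
    sig2_s, sig2_i = np.exp(2 * rho_s), np.exp(2 * rho_i)
    r = y - X @ (mu_s + mu_i)                # residuals at the posterior means
    sx2 = (X ** 2).sum(axis=0)               # per-dimension sum of squared inputs
    g_mu = (X.T @ r) / NOISE_VAR             # likelihood term, shared by mu_s and mu_i
    # d/d rho of E_q[log lik] minus d/d rho of KL(q || N(0, I));
    # the server KL is split evenly over the M clients.
    g_rho_s = -sx2 * sig2_s / NOISE_VAR - (sig2_s - 1) / M
    g_rho_i = -sx2 * sig2_i / NOISE_VAR - (sig2_i - 1)
    return g_mu - mu_s / M, g_rho_s, g_mu - mu_i, g_rho_i

# Variational parameters: one server posterior, one posterior per client.
mu_s, rho_s = np.zeros(D), np.zeros(D)
local = [(np.zeros(D), np.zeros(D)) for _ in range(M)]

for _ in range(ROUNDS):
    server_updates = []
    for i, (X, y) in enumerate(clients):
        ms, rs = mu_s.copy(), rho_s.copy()   # client's working copy of q(w_s)
        mi, ri = local[i]
        for _ in range(LOCAL_STEPS):         # local gradient ascent on the ELBO
            gms, grs, gmi, gri = elbo_grads(X, y, ms, rs, mi, ri)
            ms, rs = ms + LR * gms, rs + LR * grs
            mi, ri = mi + LR * gmi, ri + LR * gri
        local[i] = (mi, ri)
        server_updates.append((ms, rs))
    # FedAvg-style aggregation of the server posterior's variational parameters.
    mu_s = np.mean([u[0] for u in server_updates], axis=0)
    rho_s = np.mean([u[1] for u in server_updates], axis=0)

print("server posterior mean:", np.round(mu_s, 2))
print("true shared weights:  ", np.round(w_shared, 2))
```

In this toy factorization, each client's predictions depend on the sum of the server and client weights, so the server posterior captures structure common to all tasks while the per-client posteriors absorb device-specific deviations, which is the multi-task flavor of the star-shaped model. The averaging step stands in for the server-side update; the paper's actual inference procedure should be consulted for how VIRTUAL performs this step.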