As data are generated and stored almost everywhere, learning a model in a data-decentralized setting is a task of interest for many AI-driven service providers. Although federated learning has become established as the main solution in such settings, there is still room for improvement in terms of personalization. Training a federated learning system usually focuses on optimizing a single global model that is deployed identically to all client devices. However, a single global model cannot deliver personalized performance for every client, since local data are assumed to be non-identically distributed across clients. We propose a method that addresses this problem through the lens of ensemble learning, based on constructing a low-loss subspace continuum whose two endpoints (i.e., the global model and the local model) yield a high-accuracy ensemble. Through extensive experiments on several standard benchmark datasets, we demonstrate that our method achieves consistent gains in both the personalized and the unseen-client evaluation settings.
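To make the subspace idea concrete, below is a minimal sketch, assuming PyTorch, of evaluating points along the line segment between the two endpoint networks and ensembling their predictions. The names `global_model`, `local_model`, and the interpolation coefficients are illustrative placeholders; this sketches only the inference-time ensembling intuition, not the authors' full training procedure.

```python
import copy
import torch

def interpolate_models(global_model, local_model, lam):
    """Build a model whose weights lie on the line between the two endpoints:
    theta = (1 - lam) * theta_global + lam * theta_local.
    Assumes both models share the same architecture."""
    mixed = copy.deepcopy(global_model)
    with torch.no_grad():
        for p_m, p_g, p_l in zip(mixed.parameters(),
                                 global_model.parameters(),
                                 local_model.parameters()):
            p_m.copy_((1.0 - lam) * p_g + lam * p_l)
    return mixed

def ensemble_predict(global_model, local_model, x, lams=(0.0, 0.5, 1.0)):
    """Average the softmax outputs of several points sampled along the
    (assumed) low-loss subspace connecting the global and local models."""
    probs = []
    for lam in lams:
        model = interpolate_models(global_model, local_model, lam)
        model.eval()
        with torch.no_grad():
            probs.append(torch.softmax(model(x), dim=-1))
    return torch.stack(probs).mean(dim=0)
```

If the two endpoints are indeed connected by a low-loss path, intermediate models (e.g., `lam = 0.5`) remain accurate, so averaging predictions along the path behaves like an ensemble while requiring only the two endpoint weight vectors to be stored.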