Classical paradigms for distributed learning, such as federated or decentralized gradient descent, employ consensus mechanisms to enforce homogeneity among agents. While these strategies have proven effective in i.i.d. scenarios, they can result in significant performance degradation when agents pursue heterogeneous objectives or observe heterogeneous data. Distributed strategies for multitask learning, on the other hand, induce relationships between agents in a more nuanced manner, and encourage collaboration without enforcing consensus. We develop a generalization of the exact diffusion algorithm for subspace-constrained multitask learning over networks, and derive an accurate expression for its mean-squared deviation when employing noisy gradient approximations. We verify numerically the accuracy of the predicted performance expressions, as well as the improved performance of the proposed approach over alternatives based on approximate projections.
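For orientation, a minimal sketch of the standard exact-diffusion recursion with stochastic gradients, which the proposed method generalizes to subspace constraints, is given below. The step size $\mu$, combination weights $a_{\ell k}$, neighborhood $\mathcal{N}_k$, and gradient approximation $\widehat{\nabla J}_k$ are standard notation assumed here rather than taken from the abstract, and the precise form of the subspace-driven combination step in the paper may differ:
\begin{align}
\psi_{k,i} &= w_{k,i-1} - \mu\, \widehat{\nabla J}_k(w_{k,i-1}) && \text{(adapt, noisy gradient step)} \\
\phi_{k,i} &= \psi_{k,i} + w_{k,i-1} - \psi_{k,i-1} && \text{(correct)} \\
w_{k,i} &= \sum_{\ell \in \mathcal{N}_k} a_{\ell k}\, \phi_{\ell,i} && \text{(combine over neighbors)}
\end{align}
In the consensus setting, the combination matrix $A = [a_{\ell k}]$ drives all agents toward a common model; in the subspace-constrained multitask setting considered here, one would instead expect the combination step to be designed so that its limiting behavior implements the projection onto the constraint subspace, though this is an assumption about the construction rather than a statement of the paper's exact algorithm.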