We consider the "all-for-one" decentralized learning problem for generalized linear models. The features of each sample are partitioned among several collaborating agents in a connected network, but only one agent observes the response variables. To solve the regularized empirical risk minimization problem in this distributed setting, we apply the Chambolle--Pock primal--dual algorithm to an equivalent saddle-point formulation. The primal and dual iterations are either available in closed form or reduce to coordinate-wise minimization of scalar convex functions. We establish convergence rates for the empirical risk minimization under two different assumptions on the loss function (Lipschitz and square-root Lipschitz), and show how they depend on the characteristics of the design matrix and the Laplacian of the network.
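For reference, the generic Chambolle--Pock iteration for a saddle-point problem of the form $\min_{x} \max_{y} \, \langle Kx, y \rangle + g(x) - f^{*}(y)$ reads as follows; this is a standard statement of the algorithm in its primal-first ordering, with step sizes $\tau, \sigma > 0$ satisfying $\tau\sigma\|K\|^{2} < 1$. The particular operators $K$, $g$, and $f^{*}$ induced by the distributed formulation are defined in the body of the paper, not here.
\[
\begin{aligned}
x^{k+1} &= \operatorname{prox}_{\tau g}\!\left(x^{k} - \tau K^{\top} y^{k}\right),\\
\bar{x}^{k+1} &= x^{k+1} + \theta\!\left(x^{k+1} - x^{k}\right), \qquad \theta \in [0,1],\\
y^{k+1} &= \operatorname{prox}_{\sigma f^{*}}\!\left(y^{k} + \sigma K \bar{x}^{k+1}\right).
\end{aligned}
\]
The abstract's claim that the iterations are closed-form or coordinate-wise then amounts to the proximal maps of $g$ and $f^{*}$ being cheap to evaluate, which is the case for common regularizers and loss functions.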