We study whether iterated vector fields (vector fields composed with themselves) are conservative. We give explicit examples of vector fields for which this self-composition preserves conservatism. Notably, this includes gradient vector fields of loss functions associated with some generalized linear models. As we show, characterizing the set of vector fields satisfying this condition leads to non-trivial geometric questions. In the context of federated learning, we show that when clients have loss functions whose gradients satisfy this condition, federated averaging is equivalent to gradient descent on a surrogate loss function. We leverage this to derive novel convergence results for federated learning. By contrast, we demonstrate that when the client losses violate this property, federated averaging can yield behavior that is fundamentally distinct from centralized optimization. Finally, we discuss theoretical and practical questions our analytical framework raises for federated learning.
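As a minimal illustration of the central notion (not drawn from the abstract itself, and using a quadratic loss chosen here purely for concreteness): for a vector field $V:\mathbb{R}^n \to \mathbb{R}^n$, write the self-composition as $V \circ V$. If $V$ is the gradient field of $f(x) = \tfrac{1}{2}x^\top A x$ with $A$ symmetric, then
\[
V(x) = \nabla f(x) = Ax,
\qquad
(V \circ V)(x) = A^2 x = \nabla\!\left(\tfrac{1}{2}x^\top A^2 x\right),
\]
so the iterated vector field is again a gradient field, i.e., self-composition preserves conservatism in this simple case. The paper's contribution concerns which broader classes of vector fields (e.g., gradients of certain generalized linear model losses) retain this property, and which do not.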