Vertical federated learning (VFL) is an effective paradigm for the emerging privacy-preserving, cross-organizational (e.g., among different corporations and institutions) collaborative learning. Stochastic gradient descent (SGD) methods are popular choices for training VFL models because of their low per-iteration computation cost. However, existing SGD-based VFL algorithms are communication-expensive due to the large number of communication rounds they require. Meanwhile, most existing VFL algorithms use synchronous computation, which seriously hampers computation resource utilization in real-world applications. To address these challenges of communication and computation resource utilization, we propose an asynchronous stochastic quasi-Newton (AsySQN) framework for VFL, under which three algorithms, i.e., AsySQN-SGD, AsySQN-SVRG, and AsySQN-SAGA, are proposed. The proposed AsySQN-type algorithms take descent steps scaled by approximate Hessian information (without explicitly calculating the inverse Hessian matrix), and thus converge much faster than SGD-based methods in practice, dramatically reducing the number of communication rounds. Moreover, the adopted asynchronous computation makes better use of computation resources. We theoretically prove the convergence rates of our proposed algorithms for strongly convex problems. Extensive numerical experiments on real-world datasets demonstrate the lower communication costs and better computation resource utilization of our algorithms compared with state-of-the-art VFL algorithms.
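To make the quasi-Newton idea concrete, below is a minimal single-machine sketch of a descent step scaled by approximate Hessian information via the L-BFGS two-loop recursion, applied to SVRG-style variance-reduced gradients on a strongly convex objective (l2-regularized logistic regression). It is an illustration under stated assumptions, not the paper's AsySQN protocol: the vertically partitioned parties, their communication, and the asynchronous workers are omitted, and the function names (two_loop_recursion, sqn_svrg_sketch) are hypothetical.

```python
import numpy as np

def two_loop_recursion(grad, s_list, y_list):
    # Approximate H^{-1} @ grad with the L-BFGS two-loop recursion,
    # i.e., scale the descent direction by curvature information
    # without ever forming the inverse Hessian explicitly.
    q = grad.copy()
    rhos = [1.0 / (yk @ sk) for sk, yk in zip(s_list, y_list)]
    alphas = []
    for sk, yk, rho in reversed(list(zip(s_list, y_list, rhos))):
        a = rho * (sk @ q)
        alphas.append(a)
        q -= a * yk
    if s_list:  # initial scaling gamma * I from the most recent pair
        sk, yk = s_list[-1], y_list[-1]
        q *= (sk @ yk) / (yk @ yk)
    for (sk, yk, rho), a in zip(zip(s_list, y_list, rhos), reversed(alphas)):
        b = rho * (yk @ q)
        q += (a - b) * sk
    return q

def sqn_svrg_sketch(X, labels, w, epochs=5, inner=None, lr=0.5,
                    memory=5, lam=1e-3, seed=0):
    # Hypothetical single-machine sketch: SVRG-style variance-reduced
    # gradients for l2-regularized logistic regression (strongly convex),
    # scaled by the quasi-Newton direction above. The federated parties
    # and asynchronous workers of AsySQN are deliberately omitted.
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    inner = inner or n
    s_list, y_list = [], []

    def grad(v, idx):
        z = X[idx] @ v
        return X[idx].T @ (1.0 / (1.0 + np.exp(-z)) - labels[idx]) / len(idx) + lam * v

    for _ in range(epochs):
        full_g = grad(w, np.arange(n))      # full gradient at the snapshot
        w_snap = w.copy()
        for _ in range(inner):
            i = rng.integers(n, size=1)
            g = grad(w, i) - grad(w_snap, i) + full_g   # SVRG gradient
            w_new = w - lr * two_loop_recursion(g, s_list, y_list)
            s_vec = w_new - w
            y_vec = grad(w_new, i) - grad(w, i)
            if s_vec @ y_vec > 1e-10:        # keep only valid curvature pairs
                s_list.append(s_vec); y_list.append(y_vec)
                if len(s_list) > memory:
                    s_list.pop(0); y_list.pop(0)
            w = w_new
    return w
```

With memory size m and dimension d, each scaled step costs only O(md) extra work, versus O(d^2) or worse for explicit (inverse) Hessian computations; taking fewer but better-directed steps is what translates into fewer communication rounds in the federated setting.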