In this paper, we consider partitioned edge learning (PARTEL), which implements parameter-server training, a well-known distributed learning method, in a wireless network. To this end, PARTEL leverages distributed computation resources at edge devices to train a large-scale artificial intelligence (AI) model by dynamically partitioning the model into parametric blocks for separate updating at devices. Targeting broadband channels, we consider the joint control of parameter allocation, sub-channel allocation, and transmission power to improve the performance of PARTEL. Specifically, the policies for joint SUbcarrier, Parameter, and POweR allocaTion (SUPPORT) are optimized under the criterion of minimum learning latency. Two cases are considered. First, for the case of decomposable models (e.g., logistic regression), the latency-minimization problem is a non-convex mixed-integer program. Due to its intractability, we develop a practical solution by integer relaxation and by transforming the problem into an equivalent convex problem of model-size maximization under a latency constraint. Based on this transformation, a low-complexity algorithm is designed to compute the SUPPORT policy. Second, we consider the case of deep neural network (DNN) models, which can be trained using PARTEL by introducing auxiliary variables. This, however, places constraints on model partitioning that reduce the granularity of parameter allocation. The preceding policy is extended to DNN models by applying the proposed techniques of load rounding and proportional adjustment to rein in the latency expansion caused by the load-granularity constraints.
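To make the load-granularity issue concrete, the sketch below shows one way a rounding-plus-adjustment step could look: relaxed (continuous) parameter loads are rounded to a per-block granularity and the leftover parameters are redistributed so the loads still sum to the full model size. The function name, the largest-remainder redistribution rule, and all numbers are illustrative assumptions, not the paper's SUPPORT algorithm.

```python
# Illustrative sketch only: enforcing a load-granularity constraint on
# relaxed parameter loads, then adjusting so the total model size is kept.
import numpy as np

def round_and_adjust(relaxed_loads, granularity, model_size):
    """Round each device's continuous load down to a multiple of
    `granularity`, then hand back the rounded-off remainder in whole
    blocks so the loads again sum to `model_size` (assumed rule)."""
    relaxed_loads = np.asarray(relaxed_loads, dtype=float)
    # Step 1: load rounding -- satisfy the granularity constraint.
    rounded = granularity * np.floor(relaxed_loads / granularity)
    # Step 2: adjustment -- redistribute the deficit in whole blocks,
    # favoring the devices that lost the most in rounding.
    deficit_blocks = int(round((model_size - rounded.sum()) / granularity))
    order = np.argsort(-(relaxed_loads - rounded))  # largest remainder first
    for i in order[:deficit_blocks]:
        rounded[i] += granularity
    return rounded

# Example: 4 devices, blocks of 1000 parameters, a 10^4-parameter model.
loads = round_and_adjust([2700.0, 3100.0, 1800.0, 2400.0], 1000, 10000)
print(loads, loads.sum())  # [3000. 3000. 2000. 2000.] 10000.0
```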