To reduce uploading bandwidth and address privacy concerns, deep learning at the network edge has become an emerging topic. Typically, edge devices collaboratively train a shared model on real-time generated data through the Parameter Server framework. Although all the edge devices can share the computing workloads, distributed training over edge networks remains time-consuming due to the parameter and gradient transmission procedures between parameter servers and edge devices. Focusing on accelerating distributed Convolutional Neural Network (CNN) training at the network edge, we present DynaComm, a novel scheduler that dynamically decomposes each transmission procedure into several segments to achieve optimal overlapping of communications and computations at run-time. Through experiments, we verify that DynaComm achieves optimal scheduling in all cases compared to competing strategies while model accuracy remains unaffected.
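To illustrate the core idea of overlapping segmented transmissions with computation, the following is a minimal conceptual sketch, not the authors' implementation. All names, segment counts, and timings are hypothetical; it simply shows how splitting one transmission into segments lets computation on already-received segments proceed while the remaining segments are still in flight.

```python
# Conceptual sketch (hypothetical names and timings, not DynaComm itself):
# splitting one parameter transmission into segments so that computation
# on segments already received overlaps with transmission of the rest.
import queue
import threading
import time

NUM_SEGMENTS = 4          # hypothetical number of segments per transmission
SEG_COMM_TIME = 0.05      # simulated per-segment transmission latency (s)
SEG_COMP_TIME = 0.04      # simulated per-segment computation cost (s)


def transmit_segments(out_q: queue.Queue) -> None:
    """Simulate the parameter server sending one segment at a time."""
    for seg_id in range(NUM_SEGMENTS):
        time.sleep(SEG_COMM_TIME)   # network transfer of this segment
        out_q.put(seg_id)           # segment is now available locally
    out_q.put(None)                 # sentinel: transmission finished


def compute_on_segments(in_q: queue.Queue) -> None:
    """Start computing as soon as each segment arrives, overlapping with comm."""
    while True:
        seg_id = in_q.get()
        if seg_id is None:
            break
        time.sleep(SEG_COMP_TIME)   # layer computation that uses this segment
        print(f"computed on segment {seg_id}")


start = time.time()
q: queue.Queue = queue.Queue()
comm = threading.Thread(target=transmit_segments, args=(q,))
comp = threading.Thread(target=compute_on_segments, args=(q,))
comm.start(); comp.start()
comm.join(); comp.join()
sequential = (SEG_COMM_TIME + SEG_COMP_TIME) * NUM_SEGMENTS
print(f"overlapped time: {time.time() - start:.2f}s (vs. sequential {sequential:.2f}s)")
```

In this toy setting the overlapped schedule approaches the larger of the total communication and computation times, whereas a non-segmented schedule pays their sum; choosing how finely to segment each transmission is the scheduling decision DynaComm makes at run-time.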