Split learning is a promising privacy-preserving distributed learning scheme that has a low computation requirement at the edge device but suffers from high communication overhead between the edge device and the server. To reduce this communication overhead, this paper proposes a loss-based asynchronous training scheme that updates the client-side model less frequently and sends/receives activations/gradients only in selected epochs. To further reduce the communication overhead, the activations/gradients are quantized using 8-bit floating point prior to transmission. An added benefit of the proposed communication reduction method is that the client-side computation is also reduced, owing to the smaller number of client-side model updates. Furthermore, the proposed communication-reduction-based split learning method provides almost the same privacy as traditional split learning. Simulation results on VGG11, VGG13, and ResNet18 models on CIFAR-10 show that the communication cost is reduced by 1.64x-106.7x and the client-side computation is reduced by 2.86x-32.1x when the accuracy degradation is less than 0.5% for the single-client case. For the 5- and 10-client cases, the communication cost is reduced by 11.9x and 11.3x, respectively, on VGG11 for a 0.5% loss in accuracy.
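To make the two communication-reduction ingredients concrete, the following Python/NumPy sketch shows one plausible way to (i) gate client-server communication on a loss-improvement test and (ii) emulate 8-bit floating-point quantization of the cut-layer activations before transmission. The function names, the E4M3-style 8-bit format, and the relative-drop criterion are illustrative assumptions, not the paper's exact algorithm.

import numpy as np

def quantize_fp8(x, exp_bits=4, man_bits=3):
    """Round-to-nearest simulation of an 8-bit float (1 sign, 4 exponent,
    3 mantissa bits). Illustrative only; the paper's exact FP8 format and
    rounding rule may differ."""
    x = np.asarray(x, dtype=np.float32)
    man, exp = np.frexp(x)                 # x = man * 2**exp, |man| in [0.5, 1)
    steps = 1 << (man_bits + 1)            # effective mantissa resolution
    man_q = np.round(man * steps) / steps  # drop mantissa bits beyond man_bits
    bias = (1 << (exp_bits - 1)) - 1
    exp_q = np.clip(exp, -bias + 1, bias)  # saturate out-of-range exponents
    return np.ldexp(man_q, exp_q).astype(np.float32)

def sync_this_epoch(loss_history, rel_drop=0.05):
    """Hypothetical loss-based trigger: communicate (and update the
    client-side model) only when the loss has dropped enough since the
    previous epoch."""
    if len(loss_history) < 2:
        return True                        # always sync at the start
    prev, curr = loss_history[-2], loss_history[-1]
    return (prev - curr) > rel_drop * abs(prev)

# Example: quantize a batch of cut-layer activations before transmission.
acts = np.random.randn(4, 16).astype(np.float32)
if sync_this_epoch([2.31, 2.05]):
    payload = quantize_fp8(acts)           # 8-bit-precision values sent to the server

In epochs where the trigger returns False, the client would freeze its model and skip sending activations, which is where both the communication and client-side computation savings come from under this reading of the scheme.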