Distributed asynchronous offline training has received widespread attention in recent years because of its high performance on large-scale data and complex models. As data shift from cloud-centric storage to edge nodes, a major challenge for distributed machine learning systems is how to train on natively non-independent and identically distributed (non-IID) data. Previous asynchronous training methods do not perform satisfactorily on non-IID data because such data cause the training process to fluctuate greatly, leading to abnormal convergence. We propose a gradient scheduling algorithm with partly averaged gradients and global momentum (GSGM) for distributed asynchronous training on non-IID data. Our key idea is to apply global momentum and a local average to the biased gradients after scheduling, in order to make the training process steady. Experimental results show that, for non-IID data training under the same experimental conditions, GSGM applied to popular optimization algorithms achieves a 20% increase in training stability with a slight improvement in accuracy on the Fashion-MNIST and CIFAR-10 datasets. When the distributed scale is expanded on the CIFAR-100 dataset, which results in a sparse data distribution, GSGM achieves a 37% improvement in training stability. Moreover, only GSGM converges well when the number of computing nodes grows to 30, compared to state-of-the-art distributed asynchronous algorithms. At the same time, GSGM is robust to different degrees of non-IID data.
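The core mechanism described above, combining a partly averaged gradient with a globally maintained momentum term on the server side, can be illustrated with a minimal sketch. This assumes a parameter-server setup; the class name `GSGMServer`, the group size `k`, the momentum coefficient `mu`, and the learning rate `lr` are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

# Minimal sketch (not the authors' implementation) of a server-side GSGM-style
# update: worker gradients arriving asynchronously are scheduled into small
# groups, partly averaged, and then smoothed by a single global momentum buffer.

class GSGMServer:
    def __init__(self, params, lr=0.1, mu=0.9, k=4):
        self.params = params                      # global model parameters (1-D array)
        self.velocity = np.zeros_like(params)     # global momentum buffer shared by all workers
        self.lr = lr                               # learning rate (assumed value)
        self.mu = mu                               # momentum coefficient (assumed value)
        self.k = k                                 # number of scheduled gradients averaged per step
        self.buffer = []                           # gradients waiting to be scheduled

    def receive(self, worker_grad):
        """Collect an asynchronously arriving worker gradient and update when a group is full."""
        self.buffer.append(worker_grad)
        if len(self.buffer) >= self.k:
            self._apply_update()

    def _apply_update(self):
        # Partly averaged gradient: average only the k gradients scheduled so far,
        # which damps the bias introduced by any single non-IID worker.
        avg_grad = np.mean(self.buffer[:self.k], axis=0)
        self.buffer = self.buffer[self.k:]
        # Global momentum: accumulated across all scheduled updates on the server,
        # so every worker's contribution is smoothed by the same velocity term.
        self.velocity = self.mu * self.velocity + avg_grad
        self.params -= self.lr * self.velocity


# Example usage: four workers send biased gradients for a 3-parameter model.
if __name__ == "__main__":
    server = GSGMServer(params=np.zeros(3), k=4)
    for grad in [np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0]),
                 np.array([0.0, 0.0, 1.0]), np.array([1.0, 1.0, 1.0])]:
        server.receive(grad)
    print(server.params)
```

The sketch only conveys the interplay between the partial average (which reduces per-worker bias) and the global momentum (which stabilizes the update direction across asynchronous steps); the paper's gradient scheduling policy itself is not reproduced here.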