Federated Learning (FL) has emerged as a new paradigm for training machine learning models in a distributed manner without sacrificing data security and privacy. Learning models on edge devices such as mobile phones is one of the most common use cases for FL. However, non-independent and identically distributed~(non-IID) data on edge devices easily leads to training failures. In particular, over-parameterized machine learning models can easily be over-fitted on such data, resulting in inefficient federated learning and poor model performance. To overcome the over-fitting issue, we propose an adaptive dynamic pruning approach for FL, which dynamically slims the model by dropping out unimportant parameters, hence preventing over-fitting. Since a machine learning model's parameters react differently to different training samples, adaptive dynamic pruning evaluates the salience of the model's parameters according to the input training sample and retains only the salient parameters' gradients during back-propagation. We performed comprehensive experiments to evaluate our approach. The results show that, by removing redundant parameters in neural networks, our approach significantly reduces the over-fitting issue and greatly improves training efficiency. In particular, when training ResNet-32 on CIFAR-10, our approach reduces the communication cost by 57\%. We further demonstrate the inference acceleration capability of the proposed algorithm. Our approach reduces the inference FLOPs of DNNs on edge devices by up to 50\% while maintaining the model's quality.
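To make the idea concrete, the following is a minimal sketch of a client-side training step with per-sample dynamic gradient pruning, written in PyTorch. The salience criterion ($|w \cdot \partial L / \partial w|$), the per-parameter-tensor top-$k$ selection, and the \texttt{keep\_ratio} value are illustrative assumptions for exposition, not the exact formulation used in the paper.

\begin{verbatim}
import torch
import torch.nn as nn

def local_step_with_dynamic_pruning(model: nn.Module,
                                    optimizer: torch.optim.Optimizer,
                                    loss_fn,
                                    inputs: torch.Tensor,
                                    targets: torch.Tensor,
                                    keep_ratio: float = 0.5) -> float:
    """One local FL training step that updates only salient parameters.

    Hypothetical sketch: salience is estimated from the current batch as
    |weight * grad|, and only the top `keep_ratio` fraction of each
    parameter tensor's gradients is kept before the optimizer step.
    """
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), targets)
    loss.backward()

    for param in model.parameters():
        if param.grad is None:
            continue
        # Batch-dependent salience estimate: |w * dL/dw| per element.
        salience = (param.detach() * param.grad).abs().flatten()
        k = max(1, int(keep_ratio * salience.numel()))
        threshold = torch.topk(salience, k, largest=True).values.min()
        # Zero the gradients of non-salient parameters so they are not updated.
        mask = (salience >= threshold).reshape(param.grad.shape)
        param.grad.mul_(mask.to(param.grad.dtype))

    optimizer.step()
    return loss.item()
\end{verbatim}

In an FL setting, a step like this would run on each client before model updates are sent to the server; since non-salient gradients are zeroed, the corresponding updates can be compressed or skipped, which is one way the communication savings reported above could arise.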