With the rapid development of artificial intelligence technology, engineering applications built on it have been deployed one after another. The gradient descent method plays an important role in solving optimization problems thanks to its simple structure, good stability, and ease of implementation. In multi-node machine learning systems, gradients usually need to be shared, yet shared gradients are generally unsafe: an attacker can recover the training data from the gradient information alone. In this paper, to prevent gradient leakage while preserving model accuracy, we propose the super stochastic gradient descent approach, which updates parameters by concealing the modulus length of each gradient vector and converting it into a unit vector. Furthermore, we analyze the security of the super stochastic gradient descent approach and show that our algorithm can defend against attacks on the gradient. Experimental results show that our approach is clearly superior to prevalent gradient descent approaches in terms of accuracy, robustness, and adaptability to large batch sizes.
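To make the core idea concrete, below is a minimal sketch of the mechanism the abstract describes: each worker rescales its gradient to unit L2 norm before sharing, so the modulus length is concealed and only the direction is revealed. The helper `unit_gradient` and the toy aggregation loop are illustrative assumptions, not the paper's exact update rule.

```python
import numpy as np

def unit_gradient(grad, eps=1e-12):
    """Conceal the modulus (L2 norm) of a gradient by rescaling it to a
    unit vector. Hypothetical helper illustrating the abstract's idea;
    the paper's actual update rule may differ."""
    norm = np.linalg.norm(grad.ravel())
    if norm < eps:  # avoid division by zero for (near-)zero gradients
        return grad
    return grad / norm

# Toy multi-node step: each worker shares only its gradient's direction.
rng = np.random.default_rng(0)
worker_grads = [rng.normal(size=(4,)) for _ in range(3)]
shared = [unit_gradient(g) for g in worker_grads]  # norms concealed
update = np.mean(shared, axis=0)                   # server-side aggregation
theta = np.zeros(4)
lr = 0.1
theta -= lr * update  # parameter update uses only gradient directions
```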