In recent years, machine learning techniques trained on large-scale datasets have achieved remarkable performance. Differential privacy provides strong privacy guarantees for such learning algorithms by adding noise, but this often comes at the cost of reduced model accuracy and slower convergence. This paper investigates the impact of differential privacy on learning algorithms in terms of their carbon footprint, which grows due to longer run-times or failed experiments. Through extensive experiments, guidance is provided on choosing noise levels that strike a balance between the desired privacy level and reduced carbon emissions.
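As an illustration of the noise-addition mechanism the abstract refers to (a generic sketch, not the paper's own experimental setup), the classical Gaussian mechanism releases a statistic with (ε, δ)-differential privacy by adding noise calibrated to the query's sensitivity:

```python
import numpy as np

def gaussian_mechanism(value, sensitivity, epsilon, delta, rng=None):
    """Release `value` with (epsilon, delta)-differential privacy by adding
    Gaussian noise scaled to the query's L2 sensitivity."""
    rng = np.random.default_rng() if rng is None else rng
    # Standard calibration: sigma >= sensitivity * sqrt(2 ln(1.25/delta)) / epsilon
    sigma = sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    return value + rng.normal(0.0, sigma, size=np.shape(value))

# Example: privately release the mean of 1,000 values in [0, 1].
data = np.random.default_rng(0).uniform(0.0, 1.0, 1000)
sensitivity = 1.0 / len(data)  # one record shifts the mean by at most 1/n
private_mean = gaussian_mechanism(data.mean(), sensitivity,
                                  epsilon=1.0, delta=1e-5,
                                  rng=np.random.default_rng(1))
```

Smaller ε (stronger privacy) inflates σ, which in training-based settings translates into the accuracy loss and slower convergence the abstract describes.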