Federated learning (FL) has recently gained significant attention as a privacy-enhancing tool that enables multiple participants to jointly train a machine learning model. Prior work on FL has mostly studied how to protect label privacy during model training. However, model evaluation in FL may also leak private label information. In this work, we propose an evaluation algorithm that accurately computes the widely used AUC (area under the curve) metric under label differential privacy (DP) in FL. Through extensive experiments, we show that our algorithm computes AUCs close to the ground truth. The code is available at {\url{https://github.com/bytedance/fedlearner/tree/master/example/privacy/DPAUC}}.
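As an illustration of the setting only (not the paper's DPAUC algorithm), the sketch below privatizes binary labels with randomized response, a standard mechanism satisfying $\varepsilon$-label-DP, and then debiases the AUC computed on the noisy labels via the identity $\mathrm{AUC}_{\text{noisy}} = 0.5 + (a-b)(\mathrm{AUC}_{\text{true}} - 0.5)$, where $a = P(y{=}1 \mid \tilde y{=}1)$ and $b = P(y{=}1 \mid \tilde y{=}0)$. The helper names \texttt{flip\_labels} and \texttt{debiased\_auc}, and the toy data, are hypothetical.

\begin{verbatim}
import numpy as np
from sklearn.metrics import roc_auc_score

def flip_labels(y, eps, rng):
    # Randomized response: flip each binary label with
    # probability rho = 1 / (1 + e^eps); this satisfies eps-label-DP.
    rho = 1.0 / (1.0 + np.exp(eps))
    flips = rng.random(len(y)) < rho
    return np.where(flips, 1 - y, y), rho

def debiased_auc(scores, y_noisy, rho):
    # In expectation, AUC on noisy labels shrinks linearly toward 0.5:
    #   AUC_noisy = 0.5 + (a - b) * (AUC_true - 0.5),
    # with a = P(y=1 | noisy=1) and b = P(y=1 | noisy=0).
    auc_noisy = roc_auc_score(y_noisy, scores)
    pi_noisy = y_noisy.mean()
    pi = (pi_noisy - rho) / (1.0 - 2.0 * rho)  # estimated true base rate
    a = pi * (1.0 - rho) / pi_noisy
    b = pi * rho / (1.0 - pi_noisy)
    return 0.5 + (auc_noisy - 0.5) / (a - b)

rng = np.random.default_rng(0)
n = 200_000
y = (rng.random(n) < 0.3).astype(int)   # ground-truth labels
scores = y + rng.normal(0.0, 1.0, n)    # toy model scores
y_noisy, rho = flip_labels(y, eps=1.0, rng=rng)
print("true AUC:    ", roc_auc_score(y, scores))
print("noisy AUC:   ", roc_auc_score(y_noisy, scores))
print("debiased AUC:", debiased_auc(scores, y_noisy, rho))
\end{verbatim}

With $\varepsilon = 1$ the AUC computed directly on the noisy labels is visibly shrunk toward 0.5, while the debiased estimate should recover the true AUC up to sampling error.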