To enable the ethical and legal use of machine learning algorithms, they must both be fair and protect the privacy of those whose data are being used. However, enforcing privacy and fairness constraints may come at the cost of utility (Jayaraman & Evans, 2019; Gong et al., 2020). This paper investigates the privacy-utility-fairness trade-off in neural networks by comparing a Simple (S-NN), a Fair (F-NN), a Differentially Private (DP-NN), and a Differentially Private and Fair Neural Network (DPF-NN), evaluating their performance on metrics for privacy (epsilon, delta), fairness (risk difference), and utility (accuracy). In the scenario with the strongest privacy guarantees considered (epsilon = 0.1, delta = 0.00001), the DPF-NN achieved a lower risk difference than all the other neural networks, with only marginally lower accuracy than the S-NN and DP-NN. This model is considered fair, as its risk difference fell below both the strict (0.05) and lenient (0.1) thresholds. However, while the accuracy of the proposed model improved on the previous work of Xu, Yuan and Wu (2019), its risk difference was found to be worse.
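Risk difference here refers to the standard group-fairness gap; a common formulation, to which thresholds such as 0.05 (strict) and 0.1 (lenient) are applied, is

\[ \mathrm{RD} = \bigl|\, \Pr(\hat{Y}=1 \mid Z=1) - \Pr(\hat{Y}=1 \mid Z=0) \,\bigr|, \]

where \(\hat{Y}\) denotes the predicted label and \(Z\) the protected attribute.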