As machine learning algorithms are increasingly deployed for high-impact automated decision making, ethical and increasingly also legal standards demand that they treat all individuals fairly, without discrimination based on their age, gender, race or other sensitive traits. In recent years much progress has been made on ensuring fairness and reducing bias in standard machine learning settings. Yet, for network embedding, with applications in vulnerable domains ranging from social network analysis to recommender systems, current options remain limited both in number and performance. We thus propose DeBayes: a conceptually elegant Bayesian method that is capable of learning debiased embeddings by using a biased prior. Our experiments show that these representations can then be used to perform link prediction that is significantly more fair in terms of popular metrics such as demographic parity and equalized opportunity.
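To make the fairness criterion concrete: demographic parity asks that the rate of positive predictions (here, predicted links) be the same regardless of the sensitive attribute. A minimal sketch of the metric, with hypothetical toy predictions and group labels (not from the paper's experiments):

```python
import numpy as np

def demographic_parity_difference(preds, sensitive):
    """Absolute gap in positive-prediction rate between two sensitive groups.

    preds     : binary link predictions (1 = link predicted)
    sensitive : binary sensitive attribute (0/1) for each prediction
    A value of 0 means perfect demographic parity.
    """
    preds = np.asarray(preds, dtype=float)
    sensitive = np.asarray(sensitive)
    rate_group0 = preds[sensitive == 0].mean()
    rate_group1 = preds[sensitive == 1].mean()
    return abs(rate_group0 - rate_group1)

# Hypothetical example: group 0 receives links at rate 0.75,
# group 1 at rate 0.25, so the parity gap is 0.5.
preds = [1, 0, 1, 1, 0, 1, 0, 0]
groups = [0, 0, 0, 0, 1, 1, 1, 1]
print(demographic_parity_difference(preds, groups))  # 0.5
```

Equalized opportunity is measured analogously, but the rates are computed only over the examples whose true label is positive (true positive rates per group).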