Submodular maximization has become the method of choice for selecting representative and diverse summaries of data. However, when datapoints carry sensitive attributes such as gender or age, such machine learning algorithms, left unchecked, are known to exhibit bias: the under- or over-representation of particular groups. This makes the design of fair machine learning algorithms increasingly important. In this work we address the question: is it possible to create fair summaries of massive datasets? To this end, we develop the first streaming approximation algorithms for submodular maximization under fairness constraints, for both monotone and non-monotone functions. We validate our results empirically on exemplar-based clustering, movie recommendation, DPP-based summarization, and maximum coverage in social networks, showing that fairness constraints do not significantly degrade utility.
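To make the setting concrete, the following is a minimal sketch, not the paper's algorithm, of what fairness-constrained streaming submodular maximization can look like for a monotone coverage objective. The per-group capacities (`group_caps`), the fixed acceptance threshold, and all function names are illustrative assumptions introduced here for exposition only.

```python
# Illustrative sketch (not the paper's algorithm): a single-pass streaming
# selector for a monotone submodular coverage function, where fairness is
# modeled as a simple per-group capacity on the summary. `group_caps`,
# `budget`, and `threshold` are hypothetical parameters for this example.

def coverage_gain(covered, item_cover):
    """Marginal coverage gain of adding an item (monotone submodular)."""
    return len(item_cover - covered)

def fair_stream_select(stream, group_caps, budget, threshold):
    """One pass over (group, covered_elements) pairs; keep an item if it
    clears the gain threshold, respects its group's cap, and fits the budget."""
    selected = []                       # chosen (group, cover) items
    covered = set()                     # elements covered so far
    used = {g: 0 for g in group_caps}   # per-group counts in the summary
    for group, cover in stream:
        if len(selected) >= budget or used[group] >= group_caps[group]:
            continue
        if coverage_gain(covered, cover) >= threshold:
            selected.append((group, cover))
            covered |= cover
            used[group] += 1
    return selected, covered

# Toy usage: items tagged with a sensitive attribute in {"A", "B"}.
stream = [
    ("A", {1, 2, 3}),
    ("A", {2, 3}),
    ("B", {4, 5}),
    ("A", {6}),
    ("B", {1, 6, 7}),
]
picked, covered = fair_stream_select(stream, group_caps={"A": 2, "B": 2},
                                     budget=3, threshold=2)
print(picked)   # e.g. [('A', {1, 2, 3}), ('B', {4, 5}), ('B', {1, 6, 7})]
print(covered)  # {1, 2, 3, 4, 5, 6, 7}
```

The sketch accepts each arriving item greedily under the group caps; the paper's algorithms additionally provide approximation guarantees and handle non-monotone objectives, which this toy rule does not.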