Machine learning decision systems are becoming omnipresent in our lives. From dating apps to rating loan seekers, algorithms affect both our well-being and our future. However, these systems are not infallible. Moreover, complex predictive models readily learn social biases present in historical data, which can lead to increased discrimination. If we want to create models responsibly, we need tools for in-depth validation of models, also from the perspective of potential discrimination. This article introduces the R package fairmodels, which helps to validate fairness and eliminate bias in classification models in an easy and flexible fashion. The fairmodels package offers a model-agnostic approach to bias detection, visualization, and mitigation. The implemented set of functions and fairness metrics enables model fairness validation from different perspectives. The package also includes a series of bias-mitigation methods that aim to reduce discrimination in a model. It is designed not only to examine a single model, but also to facilitate comparisons between multiple models.
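To make the workflow summarized above concrete, the sketch below shows one way such a fairness audit could look in practice. It is a minimal illustration, not an excerpt from the article: it assumes the German Credit data (`german`) bundled with fairmodels, the DALEX `explain()` wrapper, and the `fairness_check()` interface with `protected` and `privileged` arguments as documented in the package.

```r
# Minimal, illustrative sketch (assumptions noted above): train a simple
# classifier, wrap it in a model-agnostic DALEX explainer, and audit it
# with fairmodels.
library(DALEX)
library(fairmodels)

data("german")                             # German Credit data shipped with fairmodels
y_numeric <- as.numeric(german$Risk) - 1   # 0 = "bad" risk, 1 = "good" risk

# Logistic regression as a baseline classification model
glm_model <- glm(Risk ~ ., data = german, family = binomial(link = "logit"))

# fairmodels operates on DALEX explainers, so any model with a wrapper works
explainer_glm <- DALEX::explain(glm_model,
                                data  = german[, -1],
                                y     = y_numeric,
                                label = "glm")

# Fairness audit: compare metric ratios between protected-group levels
fobject <- fairness_check(explainer_glm,
                          protected  = german$Sex,
                          privileged = "male")

print(fobject)   # textual summary of which fairness metrics pass or fail
plot(fobject)    # Fairness Check plot across the implemented metrics
```

Because `fairness_check()` accepts several explainers (or a previously created fairness object together with a new explainer), the same call pattern can be used to compare multiple models side by side; mitigation helpers such as `reweight()` or `resample()` can then be applied before refitting a model and repeating the audit.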