The deployment of biased machine learning (ML) models has caused adverse effects in critical sectors such as criminal justice and healthcare. To address these challenges, a diverse range of ML fairness interventions has been developed to mitigate bias and promote the creation of more equitable models. Despite the growing availability of these interventions, their adoption in real-world applications remains limited, and many practitioners are unaware of their existence. To address this gap, we systematically identified and compiled a dataset of 62 open-source fairness interventions. We conducted an in-depth analysis of their specifications and features to uncover considerations that may drive practitioner preference and to determine which interventions are actively maintained in the open-source ecosystem. Our findings indicate that 32% of these interventions have been actively maintained within the past year, and that 50% offer both bias detection and mitigation capabilities, mostly during in-processing.