Bias originates from both data and algorithmic design, and it is often exacerbated by traditional fairness methods that fail to address the subtle influence of protected attributes. This study introduces an approach to mitigating bias in machine learning by leveraging model uncertainty. Our approach combines a multi-task learning (MTL) framework with Monte Carlo (MC) Dropout to assess and mitigate uncertainty in predictions related to protected labels. By incorporating MC Dropout, the framework quantifies prediction uncertainty, which is crucial in regions with vague decision boundaries, thereby enhancing model fairness. Our methodology further integrates multi-objective learning through Pareto optimality to balance fairness and performance across various applications. We demonstrate the effectiveness and transferability of our approach across multiple datasets, and we use saliency maps to interpret how input features influence predictions, improving the explainability of machine learning models in practical applications.
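The abstract does not include implementation details, but the core mechanism it names, MC Dropout over a shared trunk with a protected-label head, is easy to sketch. The following is a minimal, hypothetical PyTorch illustration, not the authors' code: the model `MTLClassifier`, the helper `mc_dropout_uncertainty`, and all hyperparameters (hidden size, dropout rate, number of stochastic passes) are assumptions chosen for clarity.

```python
import torch
import torch.nn as nn

class MTLClassifier(nn.Module):
    """Hypothetical two-head MTL model: one head for the main task,
    one for the protected attribute, sharing a dropout-bearing trunk."""
    def __init__(self, in_dim, hidden=64, p_drop=0.3):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(), nn.Dropout(p_drop)
        )
        self.task_head = nn.Linear(hidden, 2)       # main prediction
        self.protected_head = nn.Linear(hidden, 2)  # protected label

    def forward(self, x):
        h = self.trunk(x)
        return self.task_head(h), self.protected_head(h)

@torch.no_grad()
def mc_dropout_uncertainty(model, x, n_samples=50):
    """Keep dropout active at inference, average n_samples stochastic
    passes, and use predictive entropy as the uncertainty score."""
    model.train()  # enables dropout; safe here since no gradients flow
    probs = torch.stack([
        torch.softmax(model(x)[1], dim=-1) for _ in range(n_samples)
    ])                      # (n_samples, batch, classes), protected head
    mean_p = probs.mean(0)  # predictive distribution per example
    entropy = -(mean_p * mean_p.clamp_min(1e-9).log()).sum(-1)
    return mean_p, entropy  # high entropy ~ vague decision boundary
```

Under this sketch, examples whose protected-label predictions carry high entropy are the ones near ambiguous decision boundaries, which is where the abstract argues uncertainty-aware mitigation matters most.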