Deployed machine learning models are evaluated by multiple metrics beyond accuracy, such as fairness and robustness. However, such models are typically trained to minimize the average loss for a single metric, usually a proxy for accuracy. Training to optimize a single metric leaves these models prone to fairness violations, especially when the populations of sub-groups in the training data are imbalanced. This work addresses the challenge of jointly optimizing fairness and predictive performance in the multi-class classification setting by introducing Fairness Optimized Reweighting via Meta-Learning (FORML), a training algorithm that balances fairness constraints and accuracy by jointly optimizing training sample weights and a neural network's parameters. The approach increases fairness by learning to weight each training datum's contribution to the loss according to its impact on reducing fairness violations, balancing the contributions of both over- and under-represented sub-groups. We empirically validate FORML on a range of benchmark and real-world classification datasets and show that our approach improves the equality-of-opportunity fairness criterion over existing state-of-the-art reweighting methods by approximately 1% on image classification tasks and by approximately 5% on a face attribute prediction task. This improvement is achieved without pre-processing data or post-processing model outputs, without learning an additional weighting function, and while maintaining accuracy on the original predictive metric.
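To make the joint optimization of sample weights and model parameters concrete, the following is a minimal PyTorch sketch of one meta-learned reweighting step in the spirit of FORML. It is not the authors' implementation: the linear model, the differentiable fairness surrogate `fairness_gap` (an absolute gap in average group loss standing in for an equal-opportunity term), the held-out meta batch, and hyper-parameters such as `inner_lr` are all assumptions made for illustration.

```python
# Illustrative sketch (not the authors' code): meta-learned example reweighting
# that moves per-example weights in the direction that reduces a fairness surrogate.
import torch
import torch.nn.functional as F

def forward(params, x):
    # Simple linear classifier with explicit parameters so that a "virtual"
    # forward pass with hypothetically updated weights is easy to express.
    W, b = params
    return x @ W + b

def fairness_gap(logits, y, groups):
    # Hypothetical surrogate for a fairness violation: absolute gap in average
    # loss between two sub-groups (a stand-in for the paper's fairness term).
    loss = F.cross_entropy(logits, y, reduction="none")
    return (loss[groups == 0].mean() - loss[groups == 1].mean()).abs()

def forml_step(params, opt, x, y, x_meta, y_meta, g_meta, inner_lr=0.1):
    # 1) Per-example weights, initialised to zero so their gradients reveal
    #    each example's influence on the fairness-aware meta objective.
    eps = torch.zeros(x.size(0), requires_grad=True)
    per_example = F.cross_entropy(forward(params, x), y, reduction="none")
    weighted = (eps * per_example).sum()

    # 2) One virtual SGD step on the weighted loss, keeping the graph for
    #    second-order gradients.
    grads = torch.autograd.grad(weighted, params, create_graph=True)
    fast = [p - inner_lr * g for p, g in zip(params, grads)]

    # 3) Accuracy plus fairness surrogate on held-out data with the virtual model.
    meta_logits = forward(fast, x_meta)
    meta_loss = F.cross_entropy(meta_logits, y_meta) + fairness_gap(meta_logits, y_meta, g_meta)

    # 4) Examples whose up-weighting lowers the meta loss get larger weights.
    grad_eps = torch.autograd.grad(meta_loss, eps)[0]
    w = torch.clamp(-grad_eps, min=0.0)
    w = w / (w.sum() + 1e-8)

    # 5) Real parameter update with the learned weights.
    opt.zero_grad()
    (w.detach() * F.cross_entropy(forward(params, x), y, reduction="none")).sum().backward()
    opt.step()

# Usage on synthetic data: 2-class problem with a binary group attribute.
torch.manual_seed(0)
W = torch.randn(5, 2, requires_grad=True); b = torch.zeros(2, requires_grad=True)
params, opt = [W, b], torch.optim.SGD([W, b], lr=0.05)
x, y = torch.randn(64, 5), torch.randint(0, 2, (64,))
x_m, y_m, g_m = torch.randn(32, 5), torch.randint(0, 2, (32,)), torch.randint(0, 2, (32,))
for _ in range(10):
    forml_step(params, opt, x, y, x_m, y_m, g_m)
```

Initialising the weights at zero and taking the sign-clipped negative gradient of the meta objective is the standard learning-to-reweight construction; here it simply adds a fairness term to the held-out objective so that weights shift toward examples whose up-weighting reduces the group gap.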