Across machine learning (ML) sub-disciplines, researchers make explicit mathematical assumptions in order to facilitate proof-writing. We note that, specifically in the area of fairness-accuracy trade-off optimization scholarship, similar attention is not paid to the normative assumptions that ground this approach. Such assumptions presume that 1) accuracy and fairness are in inherent opposition to one another, 2) strict notions of mathematical equality can adequately model fairness, 3) it is possible to measure the accuracy and fairness of decisions independently of historical context, and 4) collecting more data on marginalized individuals is a reasonable solution to mitigate the effects of the trade-off. We argue that such assumptions, which are often left implicit and unexamined, lead to inconsistent conclusions: while the intended goal of this work may be to improve the fairness of machine learning models, these unexamined, implicit assumptions can in fact result in emergent unfairness. We conclude by suggesting a concrete path forward toward a potential resolution.
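To make the critiqued framing concrete, a schematic (and deliberately generic) instance of fairness-accuracy trade-off optimization, not drawn from any specific paper discussed here, constrains a standard risk-minimization objective with a strict-equality fairness criterion such as demographic parity, relaxed by a tolerance \(\epsilon\):

\[
\min_{\theta} \; \mathbb{E}\big[\ell(f_{\theta}(X), Y)\big]
\quad \text{subject to} \quad
\big| \Pr(f_{\theta}(X) = 1 \mid A = a) - \Pr(f_{\theta}(X) = 1 \mid A = b) \big| \le \epsilon ,
\]

where \(f_{\theta}\) is the model, \(\ell\) a loss function, and \(A\) a protected attribute. Tightening \(\epsilon\) toward exact parity typically increases the achievable loss, which is the formal sense in which assumptions 1) and 2) above cast fairness and accuracy as opposed quantities measured outside of historical context.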