Automated decision systems (ADS) are increasingly used for consequential decision-making. These systems often rely on sophisticated yet opaque machine learning models, which do not allow for understanding how a given decision was reached. This is not only problematic from a legal perspective, but non-transparent systems are also prone to yield unfair outcomes because their soundness is challenging to assess and calibrate in the first place -- which is particularly worrisome for human decision-subjects. Based on this observation and building upon existing work, I aim to make the following three main contributions through my doctoral thesis: (a) understand how (potential) decision-subjects perceive algorithmic decisions (with varying degrees of transparency of the underlying ADS), as compared to similar decisions made by humans; (b) evaluate different tools for transparent decision-making with respect to their effectiveness in enabling people to appropriately assess the quality and fairness of ADS; and (c) develop human-understandable technical artifacts for fair automated decision-making. Over the course of the first half of my PhD program, I have already addressed substantial pieces of (a) and (c), whereas (b) will be the major focus of the second half.