Automated decision systems are increasingly used for consequential decision making, for a variety of reasons. These systems often rely on sophisticated yet opaque models, which allow little or no insight into how or why a given decision was reached. This is not only problematic from a legal perspective; non-transparent systems are also prone to yield undesirable (e.g., unfair) outcomes because their soundness is difficult to assess and calibrate in the first place. In this work, we conduct a study to evaluate different approaches to explaining such systems with respect to their effect on people's perceptions of the fairness and trustworthiness of the underlying mechanisms. A pilot study revealed surprising qualitative insights as well as preliminary significant effects, which will have to be verified, extended, and thoroughly discussed in the larger main study.