Algorithmic transparency entails exposing system properties to various stakeholders for purposes that include understanding, improving, and contesting predictions. To date, research on algorithmic transparency has focused predominantly on explainability: attempts to provide stakeholders with reasons for a machine learning model's behavior. However, understanding a model's specific behavior alone may not be enough for stakeholders to gauge whether the model is wrong or lacks sufficient knowledge to solve the task at hand. In this paper, we argue for considering a complementary form of transparency: estimating and communicating the uncertainty associated with model predictions. First, we discuss methods for assessing uncertainty. Then, we characterize how uncertainty can be used to mitigate model unfairness, augment decision-making, and build trustworthy systems. Finally, we outline methods for displaying uncertainty to stakeholders and recommend how to collect the information required to incorporate uncertainty into existing ML pipelines. This work is an interdisciplinary review drawing on literature from machine learning, visualization/HCI, design, decision-making, and fairness. We aim to encourage researchers and practitioners to measure, communicate, and use uncertainty as a form of transparency.
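For concreteness, the sketch below illustrates one common way predictive uncertainty can be estimated and surfaced alongside a prediction: using the disagreement among members of an ensemble as the uncertainty signal. This is a minimal illustration, not the method of this paper; the dataset, the choice of a random forest, and the binary-entropy measure are all assumptions made for the example.

```python
# Minimal sketch: estimating predictive uncertainty from ensemble
# disagreement. Each tree in a random forest votes on the label; the
# spread of votes is a simple proxy for the model's confidence.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic binary classification data, purely for illustration.
X, y = make_classification(n_samples=500, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Per-tree predictions for a single test point.
votes = np.array([tree.predict(X_test[:1])[0] for tree in model.estimators_])
p = votes.mean()  # fraction of trees voting for class 1

# Binary predictive entropy: 0 when all trees agree,
# 1 bit when the ensemble splits 50/50.
eps = 1e-12
entropy = -(p * np.log2(p + eps) + (1 - p) * np.log2(1 - p + eps))
print(f"prediction: {int(p > 0.5)}, vote share: {p:.2f}, uncertainty: {entropy:.2f} bits")
```

Reporting the entropy (or the raw vote share) next to the predicted label is one lightweight way a pipeline could communicate uncertainty to stakeholders rather than returning a bare prediction.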