Given the variety of stakeholders involved in, and affected by, decisions made by machine learning (ML) models, it is important to recognize that different stakeholders have different transparency needs. Prior work has found that most deployed transparency mechanisms primarily serve technical stakeholders. In our work, we investigate how well transparency mechanisms work in practice for a more diverse set of stakeholders by conducting a large-scale, mixed-methods user study across a range of organizations within a particular industry, such as health care, criminal justice, or content moderation. In this paper, we outline the setup for our study.