Effective human-AI collaboration requires a system design that provides humans with meaningful ways to make sense of and critically evaluate algorithmic recommendations. In this paper, we propose a way to augment human-AI collaboration by building on a common organizational practice: identifying experts who are likely to provide complementary opinions. When machine learning algorithms are trained to predict human-generated assessments, the rich diversity of experts' perspectives is frequently lost in monolithic algorithmic recommendations. The proposed approach aims to leverage productive disagreement by (1) identifying whether some experts are likely to disagree with an algorithmic assessment and, if so, (2) recommending an expert from whom to request a second opinion.
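To make the two-step idea concrete, the sketch below shows one hypothetical way it could be instantiated: a per-expert classifier predicts the probability that each expert would disagree with the algorithmic assessment of a case, and the expert with the highest predicted disagreement (above a threshold) is recommended for a second opinion. The data, feature construction, use of logistic regression, and the `recommend_second_opinion` helper are all illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of the two-step approach described above:
# (1) predict whether any expert is likely to disagree with the algorithmic
#     assessment, and (2) if so, recommend the most likely disagreeing expert.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy data: case features, an algorithmic assessment, and historical
# assessments from several experts (all synthetic for illustration).
n_cases, n_features, n_experts = 500, 10, 4
X = rng.normal(size=(n_cases, n_features))
algo_label = (X[:, 0] > 0).astype(int)                      # algorithmic assessment
expert_labels = np.stack(
    [((X[:, 0] + rng.normal(scale=0.5 + e, size=n_cases)) > 0).astype(int)
     for e in range(n_experts)],
    axis=1,
)
# Disagreement target: did each expert's past label differ from the algorithm's?
disagree = (expert_labels != algo_label[:, None]).astype(int)

# Step 1: fit one disagreement model per expert on historical cases.
models = [LogisticRegression().fit(X, disagree[:, e]) for e in range(n_experts)]

def recommend_second_opinion(x, threshold=0.5):
    """Return (expert_index, disagreement_probability) for the expert most
    likely to disagree with the algorithm on case x, or None if no expert's
    predicted disagreement exceeds the threshold."""
    probs = np.array([m.predict_proba(x.reshape(1, -1))[0, 1] for m in models])
    best = int(np.argmax(probs))
    return (best, float(probs[best])) if probs[best] >= threshold else None

# Step 2: for a new case, either surface a recommended second opinion or not.
print(recommend_second_opinion(X[0]))
```

In this toy setup, the threshold controls how often a second opinion is requested; raising it trades coverage for a higher expected rate of genuine disagreement among the recommended experts.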