For effective collaboration between humans and intelligent agents that employ machine learning for decision-making, humans must understand what agents can and cannot do so as to avoid over- or under-reliance. One solution is to adjust human reliance through communication, using reliance calibration cues (RCCs) that help humans assess an agent's capabilities. Previous studies typically attempted to calibrate reliance by presenting RCCs continuously; when an agent should provide an RCC remains an open question. To answer this, we propose Pred-RC, a method for selectively providing RCCs. Pred-RC uses a cognitive reliance model to predict whether a human will assign a task to an agent. By comparing the predictions for the cases with and without an RCC, Pred-RC evaluates the influence of the RCC on human reliance. We tested Pred-RC in a human-AI collaboration task and found that it successfully calibrates human reliance while providing fewer RCCs.
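The selection rule described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the reliance model here is a hypothetical stand-in, and the names (`predict_reliance`, `should_show_rcc`, the cue values) are invented for the example. The core idea it demonstrates is the comparison step: query the reliance model twice, once with the RCC and once without, and provide the cue only when it is predicted to move human reliance closer to the agent's actual capability.

```python
# Hypothetical sketch of a selective-RCC rule in the spirit of Pred-RC.
# All names and numbers are illustrative assumptions, not from the paper.

def predict_reliance(task, rcc):
    """Stand-in for the cognitive reliance model: returns the predicted
    probability that the human assigns `task` to the agent, given the cue
    shown (`rcc` is None when no cue is presented)."""
    base = 0.5  # toy prior: human is undecided without a cue
    if rcc == "capable":
        base += 0.3  # a positive cue raises predicted reliance
    elif rcc == "incapable":
        base -= 0.3  # a negative cue lowers it
    return max(0.0, min(1.0, base))

def should_show_rcc(task, rcc, agent_success_prob):
    """Provide the RCC only if it is predicted to improve calibration,
    i.e. move predicted reliance closer to the agent's true success
    probability on this task."""
    without_cue = predict_reliance(task, None)
    with_cue = predict_reliance(task, rcc)
    return abs(with_cue - agent_success_prob) < abs(without_cue - agent_success_prob)
```

Under this toy model, a "capable" cue is shown for a task the agent is likely to succeed at (`should_show_rcc("t", "capable", 0.9)` is true), but suppressed when the human's default reliance is already well calibrated (`should_show_rcc("t", "capable", 0.5)` is false), which is how the number of presented cues is reduced.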