In AI-assisted decision-making, it is critical for human decision-makers to know when to trust the AI and when to trust themselves. However, prior studies calibrated human trust based only on AI confidence, which indicates the AI's correctness likelihood (CL), while ignoring the human's own CL, hindering optimal team decision-making. To fill this gap, we proposed promoting appropriate human trust based on the CL of both sides at the task-instance level. We first modeled humans' CL by approximating their decision-making models and computing their potential performance on similar instances. We demonstrated the feasibility and effectiveness of our model via two preliminary studies. Then, we proposed three CL exploitation strategies to calibrate users' trust explicitly or implicitly during the AI-assisted decision-making process. Results from a between-subjects experiment (N=293) showed that our CL exploitation strategies promoted more appropriate human trust in AI compared with using AI confidence alone. We further provided practical implications for more human-compatible AI-assisted decision-making.
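To make the idea of modeling a human's CL from performance on similar instances concrete, the following is a minimal, hypothetical sketch (not the authors' implementation): it assumes each task instance is described by a feature vector, that the human's past decisions and their correctness are recorded, and it approximates the human's CL on a new instance as the human's empirical accuracy over the k most similar past instances. The function name `estimate_human_cl`, the choice of `NearestNeighbors`, and the value of k are illustrative assumptions.

```python
# Hypothetical sketch: estimating a human decision-maker's correctness
# likelihood (CL) on a new task instance from their accuracy on similar
# past instances. Not the paper's actual model; for illustration only.
import numpy as np
from sklearn.neighbors import NearestNeighbors


def estimate_human_cl(past_features, past_correct, new_instance, k=5):
    """Estimate human CL as empirical accuracy on the k nearest past instances.

    past_features : (n, d) array of feature vectors for instances the human decided
    past_correct  : (n,) array of 0/1 flags marking whether each decision was correct
    new_instance  : (d,) feature vector of the current task instance
    """
    knn = NearestNeighbors(n_neighbors=k).fit(past_features)
    _, idx = knn.kneighbors(new_instance.reshape(1, -1))
    # Human CL ~ fraction of correct decisions among the most similar past instances.
    return float(np.mean(past_correct[idx[0]]))


# Usage example with synthetic data: compare the estimated human CL with the
# AI's stated confidence to decide whose judgment to weight on this instance.
rng = np.random.default_rng(0)
features = rng.normal(size=(200, 4))            # past task instances
correct = (rng.random(200) > 0.3).astype(int)   # whether the human was right on each
human_cl = estimate_human_cl(features, correct, rng.normal(size=4))
ai_confidence = 0.62                            # AI's reported correctness likelihood
print(f"human CL ~ {human_cl:.2f}, AI confidence = {ai_confidence:.2f}")
```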