Machine learning is becoming a commonplace part of our technological experience. The notion of explainable AI (XAI) is attractive when regulatory or usability considerations necessitate the ability to back decisions with a coherent explanation. A large body of research has addressed algorithmic methods of XAI, but it remains unclear which forms of explanation best foster human cooperation with, and adoption of, automated systems. Here we develop an experimental methodology in which participants play a web-based game and receive advice from either a human or an algorithmic advisor, accompanied by explanations whose nature varies across experimental conditions. Using a reference-dependent decision-making framework, we evaluate game outcomes over time and in key situations to determine whether different types of explanations affect readiness to adopt, willingness to pay for, and trust in a financial AI consultant. We find that the types of explanations that promote adoption during a first encounter differ from those that are most successful following failure or when cost is involved. Furthermore, participants are willing to pay more for AI advice that includes explanations. These results add to the literature on the importance of XAI for algorithmic adoption and trust.