We study whether receiving advice from a human or an algorithmic advisor, accompanied by one of five types of Local and Global explanation labelings, affects readiness to adopt, willingness to pay for, and trust in a financial AI consultant. We compare differences over time and across key situations using a unique experimental framework in which participants play a web-based game with real monetary consequences. We find that accuracy-based explanations of the model in the initial phases lead to higher adoption rates. When the model's performance is flawless, the type of explanation matters less for adoption. More elaborate feature-based or accuracy-based explanations substantially reduce the drop in adoption after a model failure. Furthermore, offering an autopilot option significantly increases adoption. Participants who received AI-labeled advice with explanations were willing to pay more for the advice than those who received AI-labeled advice without explanations. These results add to the literature on the importance of XAI for algorithmic adoption and trust.