Recent work has recognized the need for human-centered perspectives when designing and evaluating human-AI interactions and explainable AI methods. Yet, current approaches fall short of anticipating and managing the unexpected user behavior that emerges when different stakeholder groups interact with AI systems and explainability methods. In this work, we explore the use of AI and explainability methods in the insurance domain. In a qualitative case study with participants of different roles and professional backgrounds, we show that AI and explainability methods are used in creative ways in daily workflows, resulting in a divergence between their intended and actual use. Finally, we discuss recommendations for the design of human-AI interactions and explainable AI methods that manage the risks and harness the potential of unexpected user behavior.