Artificial intelligence (AI) is gaining momentum, and its importance for the future of work in many domains, such as medicine and banking, continues to rise. However, insights into effective collaboration between humans and AI remain rare. Typically, AI supports humans in decision-making by addressing human limitations. However, it may also evoke human bias, especially in the form of automation bias as an over-reliance on AI advice. We aim to shed light on the potential of explainable AI (XAI) to influence automation bias. In this pre-test, we derive a research model and describe our study design. Subsequently, we conduct an online experiment on hotel review classifications and discuss first results. We expect our research to contribute to the design and development of safe hybrid intelligence systems.