Artificial Intelligence (AI) systems are increasingly used for decision-making across domains, raising debates over the information and explanations they should provide. Most research on Explainable AI (XAI) has focused on feature-based explanations, with less attention to alternative explanation styles. Personality traits such as the Need for Cognition (NFC) can also lead to different decision-making outcomes between low- and high-NFC individuals. We investigated how presenting AI information (prediction, confidence, and accuracy) and different explanation styles (example-based, feature-based, rule-based, and counterfactual) affect accuracy, reliance on AI, and cognitive load in a loan application scenario. We also examined how low- and high-NFC individuals differ in prioritizing XAI interface elements (loan attributes, AI information, and explanations), in accuracy, and in cognitive load. Our findings show that high AI confidence significantly increases reliance on AI while reducing cognitive load. Feature-based explanations did not enhance accuracy compared to the other conditions. Although counterfactual explanations were less understandable, they enhanced overall accuracy, increasing reliance on AI and reducing cognitive load when AI predictions were correct. Both low- and high-NFC individuals prioritized explanations after loan attributes, leaving AI information as the least important element. However, we found no significant differences between the low- and high-NFC groups in accuracy or cognitive load, raising questions about the role of personality traits in AI-assisted decision-making. These findings highlight the need for user-centric personalization of XAI interfaces, incorporating diverse explanation styles and exploring multiple personality traits and other user characteristics to optimize human-AI collaboration.