Explanatory information helps users evaluate the suggestions offered by AI-driven decision support systems. With large language models, adjusting the expression of explanations has become much easier. However, how these expressions influence human decision-making remains largely unexplored. This study investigated the effect of explanation tone (e.g., formal or humorous) on decision-making, focusing on AI roles and user attributes. We conducted user experiments across three scenarios corresponding to different AI roles (assistant, second-opinion provider, and expert), using datasets designed with varying tones. The results revealed that tone significantly influenced decision-making regardless of user attributes in the second-opinion scenario, whereas its impact varied by user attributes in the assistant and expert scenarios. In addition, older users were more influenced by tone, and highly extroverted users exhibited discrepancies between their perceptions and decisions. Furthermore, responses to open-ended questionnaires highlighted that users expect tone adjustments to enhance their experience while emphasizing the importance of tone consistency and ethical considerations. Our findings provide crucial insights into the design of explanation expressions.