While recent advances in AI-based automated decision-making have shown many benefits for businesses and society, they also come at a cost. It has long been known that a high level of decision automation can lead to various drawbacks, such as automation bias and deskilling. In particular, the deskilling of knowledge workers is a major issue, as these are the very people who should also train, challenge, and evolve AI. To address this issue, we conceptualize a new class of DSS, namely Intelligent Decision Assistance (IDA), based on a literature review of two different research streams -- DSS and automation. IDA supports knowledge workers without influencing them through automated decision-making. Specifically, we propose to use techniques of Explainable AI (XAI) while withholding concrete AI recommendations. To test this conceptualization, we develop hypotheses on the impacts of IDA and provide initial evidence for their validity based on empirical studies in the literature.