Over the past decade, explainable artificial intelligence has evolved from a predominantly technical discipline into a field deeply intertwined with the social sciences. Insights such as the human preference for contrastive (more precisely, counterfactual) explanations have played a major role in this transition, inspiring and guiding research in computer science. Other observations, while equally important, have received far less attention. The desire of human explainees to communicate with artificial intelligence explainers through dialogue-like interaction has been largely neglected by the community. This neglect poses challenges for the effectiveness and widespread adoption of such technologies: given the diversity of human knowledge and intentions, delivering a single explanation optimised for predefined objectives may fail to engender understanding in its recipients or to satisfy their unique needs. Using insights elaborated by Niklas Luhmann and, more recently, Elena Esposito, we apply social systems theory to highlight challenges in explainable artificial intelligence and to offer a path forward, striving to reinvigorate technical research in this direction. This paper aims to demonstrate the potential of systems-theoretic approaches to communication for understanding the problems and limitations of explainable artificial intelligence.