After the tremendous advances of deep learning and other AI methods, more attention is flowing to other properties of modern approaches, such as interpretability and fairness, combined in frameworks like Responsible AI. Two research directions, namely Explainable AI and Uncertainty Quantification, are becoming more and more important, but have so far never been combined and jointly explored. In this paper, I show how both research areas provide potential for combination, why more research should be done in this direction, and how this would lead to an increase in the trustworthiness of AI systems.