With the rise of AI systems in real-world applications comes the need for reliable and trustworthy AI. Explainable AI systems are an essential aspect of this. However, there is no agreed-upon standard for assessing explainable AI systems. Inspired by the Turing test, we introduce a human-centric assessment framework in which a leading domain expert accepts or rejects the solutions of both an AI system and another domain expert. By comparing the acceptance rates of the provided solutions, we can assess how the AI system performs relative to the domain expert, and whether the AI system's explanations (if provided) are human-understandable. This setup -- comparable to the Turing test -- can serve as a framework for a wide range of human-centric AI system assessments. We demonstrate this by presenting two instantiations: (1) an assessment that measures the classification accuracy of a system with the option to incorporate label uncertainties; (2) an assessment in which the usefulness of provided explanations is determined in a human-centric manner.
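To make the core comparison concrete, the short sketch below illustrates how acceptance rates from such an assessment could be computed and compared. It is a minimal, illustrative example only and is not code from the paper; the variable names and the accept/reject decisions are hypothetical placeholders for the leading expert's judgments on blinded solutions from the AI system and from another domain expert.

```python
def acceptance_rate(decisions):
    """Fraction of solutions the leading expert accepted (True = accepted)."""
    return sum(decisions) / len(decisions)

# Hypothetical accept/reject decisions by the leading domain expert,
# given blinded solutions from the AI system and from another domain expert.
ai_decisions = [True, True, False, True, True, False, True, True]
expert_decisions = [True, False, True, True, True, True, False, True]

ai_rate = acceptance_rate(ai_decisions)
expert_rate = acceptance_rate(expert_decisions)

print(f"AI acceptance rate:     {ai_rate:.2f}")
print(f"Expert acceptance rate: {expert_rate:.2f}")
print("AI on par with the expert" if ai_rate >= expert_rate else "AI below the expert")
```

In practice, the same comparison would be run over many cases and evaluators, and the resulting rates would feed into whichever statistical comparison the assessment design calls for; the sketch only shows the basic bookkeeping.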