As machine learning algorithms are adopted in an ever-increasing number of applications, interpretability has emerged as a crucial desideratum. In this paper, we propose a mathematical definition of a human-interpretable model. In particular, we define interpretability between two information processing systems. If a prediction model is interpretable by a human recognition system under this definition, we call it a completely human-interpretable model. We further design a practical framework that trains a completely human-interpretable model through user interactions. Experiments on image datasets demonstrate the advantages of our proposed model in two respects: 1) the completely human-interpretable model provides an entire decision-making process that is understandable to humans; 2) the completely human-interpretable model is more robust against adversarial attacks.