Machine learning solutions for pattern classification problems are nowadays widely deployed in society and industry. However, the lack of transparency and accountability of the most accurate models often hinders their safe use. Hence, there is a clear need to develop explainable artificial intelligence mechanisms. There exist model-agnostic methods that summarize feature contributions, but their interpretability is limited to post-hoc explanations of predictions made by black-box models. An open challenge is to develop models that have intrinsic interpretability and produce their own explanations, even for classes of models that are traditionally considered black boxes, such as (recurrent) neural networks. In this paper, we propose a Long-Term Cognitive Network for interpretable pattern classification of structured data. Our method provides its own explanation mechanism by quantifying the relevance of each feature in the decision process. To support interpretability without sacrificing performance, the model gains flexibility through a quasi-nonlinear reasoning rule that allows the degree of nonlinearity to be controlled. In addition, we propose a recurrence-aware decision model that avoids the issues posed by unique fixed points, together with a deterministic learning method to compute the tunable parameters. Simulations show that our interpretable model obtains competitive results compared with state-of-the-art white-box and black-box models.
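As a rough illustration of what a quasi-nonlinear reasoning rule might look like, the minimal sketch below blends a nonlinear transfer of the weighted inputs with the previous activation state through a factor phi, so that phi interpolates between fully nonlinear reasoning and keeping the activations unchanged. The function name, the tanh transfer function, and the convex-combination form are assumptions for illustration only, not the paper's exact formulation.

```python
import numpy as np

def quasi_nonlinear_step(a, W, phi, f=np.tanh):
    """One quasi-nonlinear reasoning step (illustrative sketch).

    a   : current activation vector of the neurons, shape (n,)
    W   : weight matrix connecting the neurons, shape (n, n)
    phi : nonlinearity factor in [0, 1]; phi = 1 applies the
          transfer function fully, phi = 0 leaves activations fixed
    f   : transfer function (tanh here, as an assumption)
    """
    return phi * f(W @ a) + (1.0 - phi) * a

# Usage: iterate the reasoning rule for a few steps on toy values.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4))        # hypothetical weights
a = rng.uniform(size=4)            # hypothetical initial activations
for _ in range(5):
    a = quasi_nonlinear_step(a, W, phi=0.8)
print(a)
```

Under this reading, tuning phi trades off expressive (nonlinear) reasoning against behavior that stays closer to the input activations, which is one way a model could remain flexible while keeping its decision process easier to interpret.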