The need for fully human-understandable models is increasingly recognised as a central theme in AI research. The acceptance of AI models for decision support in sensitive domains will grow when these models are interpretable, a trend that upcoming regulations will further amplify. One of the killer applications of interpretable AI is medical practice, which can benefit from accurate decision-support methodologies that inherently generate trust. In this work, we propose FPT (MedFP), a novel method that combines probabilistic trees and fuzzy logic to assist clinical practice. This approach is fully interpretable, as it allows clinicians to generate, control, and verify the entire diagnostic procedure; one of the methodology's strengths is its capability to reduce the frequency of misdiagnoses by providing estimates of uncertainty and counterfactuals. Our approach is applied as a proof of concept to two real medical scenarios: classifying malignant thyroid nodules and predicting the risk of progression in chronic kidney disease patients. Our results show that probabilistic fuzzy decision trees can provide interpretable support to clinicians; furthermore, introducing fuzzy variables into the probabilistic model captures significant nuances that are lost when using the crisp thresholds set by traditional probabilistic decision trees. We show that FPT and its predictions can assist clinical practice in an intuitive manner, through a user-friendly interface specifically designed for this purpose. Moreover, we discuss the interpretability of the FPT model.
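To illustrate the contrast between crisp and fuzzy splits mentioned above, the following minimal sketch (not the authors' FPT implementation; the threshold, ramp width, and function names are illustrative assumptions) shows how a fuzzy membership function grades cases near a decision boundary instead of flipping abruptly between branches:

```python
def crisp_split(x, threshold=10.0):
    """Crisp decision-tree split: a hard 0/1 branch assignment at the threshold."""
    return 1.0 if x >= threshold else 0.0

def fuzzy_split(x, threshold=10.0, width=2.0):
    """Fuzzy split: a linear membership ramp of the given width around the
    threshold. Returns the degree of membership in the 'high' branch, in [0, 1]."""
    lo, hi = threshold - width, threshold + width
    if x <= lo:
        return 0.0
    if x >= hi:
        return 1.0
    return (x - lo) / (hi - lo)

# Two patients with nearly identical measurements straddling the threshold:
print(crisp_split(9.9), crisp_split(10.1))   # 0.0 1.0  -- abrupt flip
print(fuzzy_split(9.9), fuzzy_split(10.1))   # 0.475 0.525 -- graded, nearly equal
```

Under a crisp split the two near-identical cases receive opposite labels, whereas the fuzzy split assigns them almost the same membership degree, which is the kind of nuance a probabilistic fuzzy tree can propagate down its branches.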