Explainable Artificial Intelligence (XAI) is a rising field within AI. It aims to establish trust in a model's output, which for human subjects is achieved through communicative means that Machine Learning (ML) algorithms alone cannot provide, illustrating the need for an additional layer that supports and justifies the model's predictions. In the medical field, challenges arise from the involvement of human subjects: entrusting a machine with decisions affecting a human life poses an ethical conundrum, leaving trust as the basis on which the human expert accepts the machine's decision. The aim of this paper is to apply XAI methods to demonstrate the usability of explainable architectures as a tertiary layer for the medical domain, supporting ML predictions and human-expert opinion. XAI methods visualise the contribution of each feature to a given model's output at both a local and a global level. This work uses XAI to determine feature importance for high-dimensional, data-driven questions, informing domain experts of identifiable trends, and compares model-agnostic methods applied to ML algorithms. Performance metrics for a glass-box method are also provided as a comparison against black-box capability for tabular data. Future work will aim to produce a user study with metrics to evaluate human-expert usability and opinion of the given models.
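To make the "tertiary layer" concrete, the following is a minimal sketch of a model-agnostic, post-hoc explanation pipeline on tabular data, producing both local (per-record) and global feature-contribution views. The choice of SHAP, the random forest model, and the breast-cancer dataset are illustrative assumptions, not the specific methods, models, or data compared in the paper.

```python
# Minimal sketch of model-agnostic explanation for a black-box tabular
# classifier. Library (SHAP), model, and dataset are assumptions made
# for illustration only.
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Tabular, medical-style data and a black-box model (random forest).
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Model-agnostic explainer built on the prediction function only,
# so it applies regardless of the underlying ML algorithm.
explainer = shap.Explainer(model.predict_proba, X_train)
shap_values = explainer(X_test.iloc[:100])

# Local view: feature contributions for a single record's positive-class score.
shap.plots.waterfall(shap_values[0, :, 1])

# Global view: feature importance aggregated across the explained records.
shap.plots.beeswarm(shap_values[:, :, 1])
```

In this framing, the local plot is what a human expert would inspect alongside an individual prediction, while the global plot summarises identifiable trends across the dataset; a glass-box model could be substituted for the random forest to compare performance against the black-box pipeline.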