This paper applies eXplainable Artificial Intelligence (XAI) methods to investigate socioeconomic disparities in COVID-19 patient mortality. An Extreme Gradient Boosting (XGBoost) model is trained on a de-identified Austin-area hospital dataset to predict the mortality of COVID-19 patients. We apply two XAI methods, SHapley Additive exPlanations (SHAP) and Local Interpretable Model-agnostic Explanations (LIME), to compare global and local interpretations of feature importance. This comparison demonstrates the advantages of XAI in exposing both feature importance and the model's decision process. Furthermore, we use the two XAI methods to cross-validate their interpretations for individual patients. The XAI analyses reveal that Medicare financial class, older age, and gender have a high impact on the mortality prediction. We find that LIME's local interpretations do not differ significantly in feature importance from SHAP's, which suggests the identified patterns are consistent. This paper demonstrates the importance of XAI methods in cross-validating feature attributions.