We developed an inherently interpretable multilevel Bayesian framework for representing variation in regression coefficients that mimics the piecewise linearity of ReLU-activated deep neural networks. We used the framework to formulate a survival model that uses medical claims to predict hospital readmission and death, focusing on discharge placement and adjusting for confounding to estimate causal local average treatment effects. We trained the model on a 5% sample of Medicare beneficiaries from 2008 and 2011, based on their 2009--2011 inpatient episodes, and then tested the model on 2012 episodes. The model scored an AUROC of approximately 0.76 at predicting all-cause readmission -- defined using official Centers for Medicare and Medicaid Services (CMS) methodology -- or death within 30 days of discharge, competitive with XGBoost and a Bayesian deep neural network, demonstrating that one need not sacrifice interpretability for accuracy. Crucially, as a regression model, we provide what black boxes cannot: the exact, gold-standard global interpretation of the model, identifying relative risk factors and quantifying the effect of discharge placement. We also show that the post hoc explainer SHAP fails to provide accurate explanations.