Most machine learning (ML) models are developed for prediction only, offering no option for causal interpretation of their predictions or of their parameters and properties. This can hamper health systems' ability to employ ML models in clinical decision-making, where the need to predict outcomes under hypothetical interventions (i.e., counterfactual reasoning and explanation) is high. In this research, we introduce a new representation learning framework (a partial concept bottleneck) that treats the provision of counterfactual explanations as an embedded property of the risk model. Despite the architectural changes required to jointly optimise for prediction accuracy and counterfactual reasoning, the accuracy of our approach is comparable to that of prediction-only models. Our results suggest that the proposed framework can help researchers and clinicians improve personalised care (e.g., by investigating the hypothetical differential effects of interventions).
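To make the architectural idea concrete, below is a minimal sketch of a partial concept bottleneck, assuming a setup in the spirit of concept bottleneck models: part of the learned representation is supervised to match clinically meaningful concepts, while a residual ("partial") embedding is left unconstrained to preserve predictive accuracy. All module names, layer sizes, and the `lam` weight are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a partial concept bottleneck (not the paper's code).
import torch
import torch.nn as nn

class PartialConceptBottleneck(nn.Module):
    def __init__(self, n_features, n_concepts, n_residual, n_outcomes):
        super().__init__()
        # Supervised concept head: the interpretable part of the bottleneck.
        self.concept_head = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(), nn.Linear(64, n_concepts)
        )
        # Unconstrained residual embedding: the "partial" bypass that helps
        # keep prediction accuracy comparable to prediction-only models.
        self.residual_head = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(), nn.Linear(64, n_residual)
        )
        # Risk predictor consumes concepts and residual jointly.
        self.predictor = nn.Linear(n_concepts + n_residual, n_outcomes)

    def forward(self, x, concepts_override=None):
        c = torch.sigmoid(self.concept_head(x))
        # Counterfactual query: replace the predicted concepts with
        # hypothetical values (e.g., "what if this biomarker were normal?").
        if concepts_override is not None:
            c = concepts_override
        r = self.residual_head(x)
        y_logit = self.predictor(torch.cat([c, r], dim=-1))
        return y_logit, c

def joint_loss(y_logit, y, c_hat, c_true, lam=0.5):
    # Joint objective: outcome prediction plus concept supervision.
    pred = nn.functional.binary_cross_entropy_with_logits(y_logit, y)
    concept = nn.functional.binary_cross_entropy(c_hat, c_true)
    return pred + lam * concept
```

Under these assumptions, passing `concepts_override` at inference time answers "what if" queries by editing concept values while holding the residual embedding fixed, and `lam` trades off prediction accuracy against concept fidelity during the joint optimisation.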