Machine learning has shown much promise in helping improve the quality of medical, legal, and financial decision-making. In these applications, machine learning models must satisfy two important criteria: (i) they must be causal, since the goal is typically to predict individual treatment effects (ITEs), and (ii) they must be interpretable, so that human decision makers can validate and trust the model predictions. There has recently been much progress along each direction independently, yet the state-of-the-art approaches are fundamentally incompatible. We propose a framework for learning interpretable models from observational data that can be used to predict ITEs. In particular, our framework converts any supervised learning algorithm into an algorithm for estimating ITEs. Furthermore, we prove an error bound on the treatment effects predicted by our model. Finally, in an experiment on real-world data, we show that models trained using our framework significantly outperform a number of baselines.
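The abstract does not specify how the reduction from supervised learning to ITE estimation is constructed. As a point of reference only, the sketch below shows one common meta-learner reduction (a T-learner fitting one outcome model per treatment arm) with an interpretable base learner; the function `fit_ite_estimator` and all data here are hypothetical illustrations, not the paper's method.

```python
# Minimal sketch, assuming a T-learner-style reduction (not necessarily the
# paper's construction): wrap any supervised learner into an ITE estimator.
import numpy as np
from sklearn.base import clone
from sklearn.tree import DecisionTreeRegressor  # shallow tree keeps the model interpretable


def fit_ite_estimator(base_learner, X, treatment, y):
    """Fit one outcome model per treatment arm; predicted ITE is the difference."""
    model_treated = clone(base_learner).fit(X[treatment == 1], y[treatment == 1])
    model_control = clone(base_learner).fit(X[treatment == 0], y[treatment == 0])

    def predict_ite(X_new):
        return model_treated.predict(X_new) - model_control.predict(X_new)

    return predict_ite


# Usage on synthetic observational-style data (illustrative only).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
treatment = rng.integers(0, 2, size=500)
y = X[:, 0] + treatment * (1.0 + X[:, 1]) + rng.normal(scale=0.1, size=500)

predict_ite = fit_ite_estimator(DecisionTreeRegressor(max_depth=3), X, treatment, y)
print(predict_ite(X[:5]))  # estimated individual treatment effects for 5 units
```

Because the base learner is passed in unchanged, any supervised regressor could be substituted; choosing an interpretable one (here a depth-3 decision tree) is what keeps the resulting ITE model inspectable by a human decision maker.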