Internet companies increasingly use machine learning models to create personalized policies that assign each individual the treatment predicted to be best for them. These policies are frequently derived from black-box heterogeneous treatment effect (HTE) models that predict individual-level treatment effects. In this paper, we focus on (1) learning explanations for HTE models and (2) learning interpretable policies that prescribe treatment assignments. We also propose guidance trees, an approach to ensembling multiple interpretable policies without losing interpretability. These rule-based interpretable policies are easy to deploy and remove the need to maintain an HTE model in a production environment.