As predictive models -- e.g., from machine learning -- give likely outcomes, they may be used to reason about the effect of an intervention, a causal-inference task. The increasing complexity of health data has opened the door to a plethora of models, but also the Pandora's box of model selection: which of these models yields the most valid causal estimates? Here we highlight that classic machine-learning model selection does not select the best outcome models for causal inference. Indeed, causal model selection must control the error on both potential outcomes for each individual -- under treatment and under no treatment -- whereas only one of the two is ever observed. Theoretically, the simple risks used in machine learning do not control causal effects when the treated and non-treated populations differ too much. More elaborate risks build proxies of the causal error that can be computed on the observed data, using ``nuisance'' models for re-weighting. But does estimating these nuisances add noise to model selection? Drawing on an extensive empirical study, we outline a good causal model-selection procedure: using the so-called $R\text{-risk}$; using flexible estimators to compute the nuisance models on the train set; and splitting out 10\% of the data to compute risks.
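For concreteness, the recommended procedure could look roughly like the sketch below (a minimal illustration under stated assumptions, not the paper's implementation; the gradient-boosting nuisance learners and the `select_model` / `candidate_models` names are assumptions). It uses the usual definition of the $R\text{-risk}$: a candidate outcome model $\mu(x, a)$ is scored through its implied effect $\tau(x) = \mu(x,1) - \mu(x,0)$ against the outcome residuals $(y - m(x))$ re-weighted by the treatment residuals $(a - e(x))$, where $m$ (mean outcome) and $e$ (propensity score) are nuisance models fitted on the train set, and the risk is computed on a 10\% held-out split.

```python
# Minimal sketch of R-risk model selection (illustrative; assumes
# scikit-learn-style candidate estimators and a binary treatment `a`).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, GradientBoostingRegressor
from sklearn.model_selection import train_test_split

def r_risk(tau_hat, y, a, m_hat, e_hat):
    """R-risk: squared residual-on-residual error of a candidate effect tau_hat."""
    return np.mean(((y - m_hat) - (a - e_hat) * tau_hat) ** 2)

def select_model(candidate_models, X, a, y, test_size=0.1, seed=0):
    # Split out ~10% of the data to compute the selection risk.
    X_tr, X_te, a_tr, a_te, y_tr, y_te = train_test_split(
        X, a, y, test_size=test_size, random_state=seed)

    # Flexible nuisance models, fitted on the train set only:
    # m(x) ~ E[Y | X=x] and e(x) ~ P(A=1 | X=x).
    m_hat = GradientBoostingRegressor().fit(X_tr, y_tr).predict(X_te)
    e_hat = GradientBoostingClassifier().fit(X_tr, a_tr).predict_proba(X_te)[:, 1]

    risks = []
    for model in candidate_models:
        # Candidate outcome model mu(x, a), fitted on covariates plus treatment.
        model.fit(np.column_stack([X_tr, a_tr]), y_tr)
        # Implied treatment effect on the held-out data: mu(x, 1) - mu(x, 0).
        tau_hat = (model.predict(np.column_stack([X_te, np.ones(len(X_te))]))
                   - model.predict(np.column_stack([X_te, np.zeros(len(X_te))])))
        risks.append(r_risk(tau_hat, y_te, a_te, m_hat, e_hat))
    # Select the candidate with the smallest R-risk.
    return candidate_models[int(np.argmin(risks))], risks
```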