How can we learn a good predictor on data with missing values? Most efforts focus on first imputing as well as possible, then learning on the completed data to predict the outcome. Yet this widespread practice has no theoretical grounding. Here we show that for almost all imputation functions, an impute-then-regress procedure with a powerful learner is Bayes optimal. This result holds for all missing-value mechanisms, in contrast with classic statistical results that require missing-at-random settings to use imputation in probabilistic modeling. Moreover, it implies that perfect conditional imputation is not needed for good prediction asymptotically. In fact, we show that on perfectly imputed data the best regression function will generally be discontinuous, which makes it hard to learn. Crafting the imputation instead so as to leave the regression function unchanged simply shifts the problem to learning discontinuous imputations. Rather, we suggest that it is easier to learn imputation and regression jointly. We propose such a procedure, adapting NeuMiss, a neural network that captures the conditional links between observed and unobserved variables whatever the missing-value pattern. Our experiments confirm that joint imputation and regression through NeuMiss outperforms various two-step procedures with a finite number of samples.
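To make the generic impute-then-regress procedure discussed above concrete, here is a minimal sketch using scikit-learn. The choice of IterativeImputer as the conditional imputer and HistGradientBoostingRegressor as the powerful learner is our illustrative assumption, not the paper's pipeline; the missingness is also simulated as MCAR for simplicity.

```python
# A minimal impute-then-regress sketch (illustrative choices, not the
# authors' code): a conditional imputer chained with a flexible learner.
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.ensemble import HistGradientBoostingRegressor
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.RandomState(0)
X = rng.normal(size=(1000, 5))
# Nonlinear outcome, generated before masking any entries.
y = X[:, 0] + X[:, 1] ** 2 + rng.normal(scale=0.1, size=1000)
X[rng.uniform(size=X.shape) < 0.3] = np.nan  # 30% MCAR missingness

impute_then_regress = make_pipeline(
    IterativeImputer(random_state=0),
    HistGradientBoostingRegressor(random_state=0),
)
print(cross_val_score(impute_then_regress, X, y, cv=5).mean())
```

HistGradientBoostingRegressor also handles NaN natively, which gives a simple baseline for comparing impute-then-regress against learning directly on incomplete data.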
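For the joint imputation-and-regression alternative, the following is a simplified NeuMiss-style block sketched in PyTorch under our own assumptions; it is not the authors' released architecture. The defining idea is that multiplication by the mask of observed entries acts as the nonlinearity, while a shared weight matrix applied repeatedly mimics a truncated Neumann series for the inverse covariance of the observed variables.

```python
# A simplified NeuMiss-like block (an assumption-laden sketch, not the
# paper's implementation). Mask multiplication is the nonlinearity; the
# skip connection corresponds to the zeroth Neumann term.
import torch
import torch.nn as nn

class NeuMissBlock(nn.Module):
    def __init__(self, n_features: int, depth: int):
        super().__init__()
        self.depth = depth
        self.linear = nn.Linear(n_features, n_features, bias=False)
        self.mu = nn.Parameter(torch.zeros(n_features))  # learned mean

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        mask = (~torch.isnan(x)).float()            # 1 where observed
        x = torch.nan_to_num(x) - self.mu * mask    # center observed entries
        h = x
        for _ in range(self.depth):
            h = mask * self.linear(h) + x           # masked Neumann iteration
        return h

# Joint learning: stack the block with a regression head and train end to
# end, so the imputation is shaped by the prediction loss.
model = nn.Sequential(NeuMissBlock(n_features=5, depth=3), nn.Linear(5, 1))
```

Training the whole stack on the prediction loss is what distinguishes this joint procedure from the two-step pipelines above: no intermediate imputation target is ever fit separately.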