By filling in missing values in datasets, imputation allows these datasets to be used with algorithms that cannot handle missing values on their own. However, missing values may in principle carry useful information that is lost through imputation. The missing-indicator approach can be combined with imputation to instead represent this information as part of the dataset. There are several theoretical reasons why missing-indicators may or may not be beneficial, but there has not previously been any large-scale practical experiment on real-life datasets to test this question for machine learning predictions. We perform this experiment for three imputation strategies and a range of classification algorithms, on the basis of twenty real-life datasets. We find that on these datasets, missing-indicators generally increase classification performance. In addition, for most algorithms we find no evidence that nearest neighbour and iterative imputation lead to better performance than simple mean/mode imputation. Therefore, we recommend the use of missing-indicators with mean/mode imputation as a safe default, with the caveat that for decision trees, pruning is necessary to prevent overfitting. In a follow-up experiment, we determine attribute-specific missingness thresholds for each classifier above which missing-indicators are more likely than not to increase classification performance, and observe that these thresholds are much lower for categorical than for numerical attributes. Finally, we argue that mean imputation of numerical attributes may preserve some of the information from missing values, and we show that in the absence of missing-indicators, it can similarly be useful to apply mean imputation to one-hot encoded categorical attributes instead of mode imputation.
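As a minimal sketch of the recommended default (not the exact experimental pipeline of the paper), mean imputation combined with missing-indicators can be implemented for numerical attributes as follows: each NaN is replaced by its column mean, and one binary indicator column is appended for every attribute that contains missing values.

```python
import numpy as np

def impute_with_indicators(X):
    """Mean-impute NaNs and append one missing-indicator column
    per attribute that actually contains missing values."""
    X = np.asarray(X, dtype=float)
    mask = np.isnan(X)
    col_means = np.nanmean(X, axis=0)          # per-column mean over observed values
    X_imp = np.where(mask, col_means, X)       # fill NaNs with the column mean
    has_missing = mask.any(axis=0)             # indicators only where needed
    indicators = mask[:, has_missing].astype(float)
    return np.hstack([X_imp, indicators])

X = np.array([[1.0, np.nan],
              [3.0, 4.0],
              [np.nan, 8.0]])
print(impute_with_indicators(X))
# Two indicator columns are appended, since both attributes have a missing value.
```

In practice, scikit-learn's `SimpleImputer` provides the same behaviour via its `add_indicator=True` parameter (with `strategy="mean"` for numerical and `strategy="most_frequent"` for categorical attributes).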