Fitting models with high predictive accuracy that include all relevant but no irrelevant or redundant features is challenging on data sets with similar (e.g. highly correlated) features. We propose tuning the hyperparameters of a predictive model in a multi-criteria fashion with respect to both predictive accuracy and feature selection stability. We evaluate this approach on simulated as well as real data sets and compare it to the standard approach of single-criteria hyperparameter tuning and to the state-of-the-art technique "stability selection". Our approach achieves the same or better predictive performance than the two established approaches: considering stability during tuning does not decrease the predictive accuracy of the resulting models. Our approach succeeds at selecting the relevant features while avoiding irrelevant or redundant features. The single-criteria approach fails to avoid irrelevant or redundant features, and stability selection fails to select enough relevant features to achieve acceptable predictive accuracy. For our approach, on data sets with many similar features, the feature selection stability must be evaluated with an adjusted stability measure, that is, a measure that accounts for similarities between features. On data sets with only few similar features, an unadjusted stability measure suffices and is faster to compute.
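To make the notion of feature selection stability concrete, the following is a minimal sketch of an *unadjusted* stability measure: the mean pairwise Jaccard similarity of the feature sets selected across resampling repetitions. This is an illustrative example only, not the specific measure used in the paper; the function name and the toy feature sets are hypothetical. An adjusted measure would additionally credit agreement between distinct but highly correlated features, which this sketch does not do.

```python
from itertools import combinations

def jaccard_stability(selected_sets):
    """Unadjusted feature selection stability: mean pairwise Jaccard
    similarity of the feature sets chosen across resampling repetitions.
    Returns a value in [0, 1]; 1 means identical selections every time."""
    pairs = list(combinations(selected_sets, 2))
    if not pairs:
        return 1.0  # fewer than two repetitions: trivially stable
    scores = []
    for a, b in pairs:
        sa, sb = set(a), set(b)
        union = sa | sb
        # Two empty selections are treated as perfectly agreeing.
        scores.append(len(sa & sb) / len(union) if union else 1.0)
    return sum(scores) / len(scores)

# Hypothetical selections from three resampling repetitions:
sets = [{"x1", "x2", "x3"}, {"x1", "x2", "x4"}, {"x1", "x2", "x3"}]
print(jaccard_stability(sets))  # pairwise scores 0.5, 1.0, 0.5 -> mean 2/3
```

Note that if `x3` and `x4` were highly correlated, the second repetition would be penalized here even though it selected essentially the same information; an adjusted measure avoids exactly this penalty, at the cost of computing feature similarities.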