Along with the desire to address more complex problems, feature selection methods have grown in importance. Feature selection methods can be classified into wrapper, filter, and embedded methods. As a powerful embedded feature selection method, Lasso has attracted the attention of many researchers. However, being a linear approach, Lasso has limited applicability. In this work, we propose LassoLayer, a one-to-one connected layer trained by L1 optimization, which works to drop out units that are unnecessary for prediction. For nonlinear feature selection, we build LassoMLP: a network equipped with LassoLayer as its first layer. Because LassoLayer can be inserted into any network structure, it can harness the strength of neural networks for tasks where feature selection is needed. We evaluate LassoMLP on feature selection with regression and classification tasks. LassoMLP receives features that include a considerable number of noisy factors, which can cause overfitting. In experiments using the MNIST dataset, we confirm that LassoMLP outperforms the state-of-the-art method.
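As a minimal sketch of the idea described above: assuming the one-to-one connection is realized as an element-wise scaling of the input features, with the L1 penalty on those scaling weights added to the task loss, a LassoLayer and a LassoMLP could be written roughly as follows (the class names, the hidden-layer sizes, and the penalty strength 1e-3 are illustrative choices, not specified by the abstract).

```python
import torch
import torch.nn as nn

class LassoLayer(nn.Module):
    """One-to-one connected layer: each input feature is scaled by its own
    learnable weight. An L1 penalty on these weights pushes the weights of
    uninformative features toward zero, effectively dropping them."""
    def __init__(self, n_features):
        super().__init__()
        self.weight = nn.Parameter(torch.ones(n_features))

    def forward(self, x):
        return x * self.weight          # element-wise (one-to-one) connection

    def l1_penalty(self):
        return self.weight.abs().sum()  # term added to the task loss

class LassoMLP(nn.Module):
    """LassoLayer placed in front of an ordinary MLP (illustrative structure)."""
    def __init__(self, n_features, n_hidden, n_out):
        super().__init__()
        self.lasso = LassoLayer(n_features)
        self.mlp = nn.Sequential(
            nn.Linear(n_features, n_hidden), nn.ReLU(),
            nn.Linear(n_hidden, n_out),
        )

    def forward(self, x):
        return self.mlp(self.lasso(x))

# Usage: add the L1 term to the ordinary task loss before backpropagation.
model = LassoMLP(n_features=784, n_hidden=128, n_out=10)
x = torch.randn(32, 784)
logits = model(x)
loss = nn.functional.cross_entropy(logits, torch.randint(0, 10, (32,)))
loss = loss + 1e-3 * model.lasso.l1_penalty()
loss.backward()
```

Because the LassoLayer only rescales its input, it can be prepended to any network; features whose scaling weights are driven to zero by the L1 term are effectively removed from the downstream model.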