Tabular datasets are the last "unconquered castle" for deep learning: traditional ML methods such as Gradient-Boosted Decision Trees still perform strongly, even against recent specialized neural architectures. In this paper, we hypothesize that the key to boosting the performance of neural networks lies in rethinking the joint and simultaneous application of a large set of modern regularization techniques. Accordingly, we propose regularizing plain Multilayer Perceptron (MLP) networks by searching for the optimal combination, or cocktail, of 13 regularization techniques for each dataset, using a joint optimization over which regularizers to apply and over their subsidiary hyperparameters. We assess the impact of these regularization cocktails for MLPs in a large-scale empirical study comprising 40 tabular datasets and demonstrate that (i) well-regularized plain MLPs significantly outperform recent state-of-the-art specialized neural network architectures, and (ii) they even outperform strong traditional ML methods such as XGBoost.
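To make the phrase "a joint optimization over which regularizers to apply and over their subsidiary hyperparameters" concrete, the sketch below illustrates one possible conditional search space in Python. The technique names, hyperparameter ranges, and the plain random-search loop are illustrative assumptions only, not the paper's actual search space or optimizer.

    import random

    # Minimal sketch (assumed names and ranges, not the paper's exact space) of a
    # conditional "regularization cocktail" search space: each technique gets a
    # binary on/off decision plus subsidiary hyperparameters that are only
    # sampled when the technique is switched on.
    COCKTAIL_SPACE = {
        "weight_decay": {"lambda": (1e-5, 1e-1)},
        "dropout":      {"rate": (0.0, 0.8)},
        "batch_norm":   {},                    # no subsidiary hyperparameters
        "mixup":        {"alpha": (0.0, 1.0)},
        "cutmix":       {"alpha": (0.0, 1.0)},
        "cutout":       {"fraction": (0.0, 0.5)},
        # ... remaining techniques of the 13 omitted for brevity
    }

    def sample_cocktail(rng):
        """Sample one cocktail: per-technique on/off choice plus its hyperparameters."""
        config = {}
        for name, hps in COCKTAIL_SPACE.items():
            active = rng.random() < 0.5        # decision: apply this regularizer at all?
            config[name] = {"active": active}
            if active:
                for hp, (lo, hi) in hps.items():
                    config[name][hp] = rng.uniform(lo, hi)
        return config

    def random_search(evaluate, n_trials=20, seed=0):
        """Naive random search over cocktails; the paper's joint optimization relies
        on a dedicated HPO method rather than this simple loop."""
        rng = random.Random(seed)
        best_cfg, best_score = None, float("-inf")
        for _ in range(n_trials):
            cfg = sample_cocktail(rng)
            score = evaluate(cfg)              # e.g. validation accuracy of the regularized MLP
            if score > best_score:
                best_cfg, best_score = cfg, score
        return best_cfg, best_score

    # Usage with a dummy objective standing in for training and validating an MLP:
    if __name__ == "__main__":
        dummy = lambda cfg: sum(v["active"] for v in cfg.values()) + random.random()
        best, score = random_search(dummy, n_trials=10)
        print(score, best)

The key design point the sketch tries to capture is that the space is conditional: a regularizer's hyperparameters are irrelevant when the corresponding on/off decision is "off", so the optimizer effectively selects a per-dataset subset of techniques and tunes only that subset.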