Dedicated neural network (NN) architectures have been designed to handle specific data types (such as CNNs for images or RNNs for text), and they rank among the state-of-the-art methods for these data. Unfortunately, no such architecture has yet been found for tabular data, for which tree ensemble methods (tree boosting, random forests) usually show the best predictive performance. In this work, we propose a new sparse initialization technique for (potentially deep) multilayer perceptrons (MLPs): we first train a tree-based procedure to detect feature interactions and use the resulting information to initialize the network, which is subsequently trained via standard stochastic gradient strategies. Numerical experiments on several tabular data sets show that this new, simple, and easy-to-use method is a solid competitor, both in terms of generalization capacity and computation time, to default MLP initialization and even to existing complex deep learning solutions. In fact, this informed MLP initialization raises the resulting NN methods to the level of a valid competitor to gradient boosting on tabular data. Moreover, such initializations preserve the sparsity of the weights introduced in the first layers of the network throughout training. This suggests that the new initializer performs an implicit regularization during NN training, and highlights that the first layers act as a sparse feature extractor (much like convolutional layers in CNNs).
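As a rough illustration of the pipeline described above, the following sketch (a hypothetical simplification, not the paper's exact procedure) uses a small random forest to detect which features co-occur along decision paths, then builds a sparse mask that zeroes out first-layer MLP weights connecting hidden units to features outside the detected groups:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))
# Synthetic target in which features 0, 1, and 4 interact.
y = (X[:, 0] * X[:, 1] + X[:, 4] > 0).astype(int)

# Tree-based procedure: each shallow tree captures a small feature interaction.
forest = RandomForestClassifier(
    n_estimators=10, max_depth=3, random_state=0
).fit(X, y)

n_features, n_hidden = X.shape[1], 16
mask = np.zeros((n_hidden, n_features), dtype=bool)
for j, tree in enumerate(forest.estimators_):
    # Internal nodes store the feature index split on; leaves store -2.
    split_features = tree.tree_.feature
    used = np.unique(split_features[split_features >= 0])
    # Assign each tree's feature group to one hidden unit (hypothetical rule).
    mask[j % n_hidden, used] = True

# Standard Gaussian (He-style) initialization, sparsified by the mask;
# training would then proceed with ordinary stochastic gradient descent.
W1 = rng.normal(scale=np.sqrt(2.0 / n_features), size=(n_hidden, n_features)) * mask
print(f"first-layer sparsity: {1 - mask.mean():.2f}")
```

The mapping of trees to hidden units and the masking rule are assumptions for illustration; the key point is that the first layer starts sparse, with nonzero weights only where the trees found interacting features.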