This paper presents a novel technique based on gradient boosting to train the final layers of a neural network (NN). Gradient boosting is an additive expansion algorithm in which a series of models are trained sequentially to approximate a given function. A neural network can also be seen as an additive expansion in which the scalar product of the responses of the last hidden layer and its weights provides the final output of the network. Instead of training the network as a whole, the proposed algorithm trains the network sequentially in $T$ steps. First, the bias term of the network is initialized with a constant approximation that minimizes the average loss over the training data. Then, at each step, a portion of the network, composed of $J$ neurons, is trained to approximate the pseudo-residuals on the training data computed from the previous iterations. Finally, the $T$ partial models and the bias are integrated as a single NN with $T \times J$ neurons in the hidden layer. Extensive experiments on classification and regression tasks, as well as in combination with deep neural networks, show a generalization performance that is competitive with that of neural networks trained with standard solvers such as Adam, L-BFGS and SGD, and with deep models. Furthermore, we show that the design of the proposed method makes it possible to switch off a number of hidden units at test time (the units that were trained last) without a significant reduction in generalization ability. This allows the model to be adapted on the fly to different classification speed requirements.
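To make the procedure concrete, the following is a minimal NumPy sketch of the idea for regression with squared loss (so the pseudo-residuals reduce to plain residuals). It is an illustrative assumption of one possible implementation, not the authors' code: the function names `fit_block` and `boost_network`, the tanh activation, the full-batch gradient descent used to fit each block, and all hyperparameter values are hypothetical choices. It shows how $T$ small blocks of $J$ hidden units, each fitted to the current residuals, can afterwards be merged into a single network with $T \times J$ hidden units.

```python
import numpy as np

def fit_block(X, r, J, lr=0.05, epochs=300, rng=None):
    """Fit a small one-hidden-layer block (J tanh units) to the residuals r
    by full-batch gradient descent on squared error (illustrative choice)."""
    rng = rng if rng is not None else np.random.default_rng(0)
    d = X.shape[1]
    W = rng.normal(scale=0.1, size=(d, J))   # input-to-hidden weights
    b = np.zeros(J)                          # hidden biases
    v = np.zeros(J)                          # hidden-to-output weights
    n = len(r)
    for _ in range(epochs):
        H = np.tanh(X @ W + b)               # (n, J) hidden activations
        err = H @ v - r                      # prediction error on the residuals
        gv = H.T @ err / n                   # gradient w.r.t. output weights
        gH = np.outer(err, v) * (1 - H**2)   # back-prop through tanh
        W -= lr * (X.T @ gH / n)
        b -= lr * gH.mean(axis=0)
        v -= lr * gv
    return W, b, v

def boost_network(X, y, T=10, J=5):
    """Build one network with T*J hidden units by gradient boosting."""
    bias = y.mean()                          # constant minimizer of the average squared loss
    F = np.full(len(y), bias)                # current additive prediction
    Ws, bs, vs = [], [], []
    for t in range(T):
        r = y - F                            # pseudo-residuals (squared loss)
        W, b, v = fit_block(X, r, J, rng=np.random.default_rng(t))
        Ws.append(W); bs.append(b); vs.append(v)
        F += np.tanh(X @ W + b) @ v          # additive update of the ensemble
    # merge the T partial models into a single hidden layer of T*J units
    W_all, b_all, v_all = np.hstack(Ws), np.hstack(bs), np.hstack(vs)
    return lambda Xn: np.tanh(Xn @ W_all + b_all) @ v_all + bias

# toy usage on synthetic data
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 3))
y = np.sin(X).sum(axis=1) + 0.05 * rng.normal(size=200)
model = boost_network(X, y, T=8, J=4)
print("train MSE:", np.mean((model(X) - y) ** 2))
```

Because later blocks only refine the residual error left by earlier ones, dropping the last few groups of $J$ units from `W_all`, `b_all` and `v_all` at test time degrades the prediction gracefully, which is the mechanism behind the on-the-fly speed/accuracy trade-off described above.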