Deep learning has revolutionized computer vision and image classification. In this context, architectures based on Convolutional Neural Networks (CNNs) are the most widely applied models. In this article, we introduce two procedures, GB-CNN and GB-DNN, for training Convolutional Neural Networks and Deep Neural Networks with Gradient Boosting (GB). These models are trained to fit the gradient of the loss function, that is, the pseudo-residuals of the previous models. At each iteration, the proposed method adds one dense layer to an exact copy of the previous deep NN model. The weights of the dense layers trained in previous iterations are frozen to prevent over-fitting, allowing the model to fit the new dense layer, and to fine-tune the convolutional layers (for GB-CNN), while still exploiting the information already learned. Extensive experiments on several 2D-image classification and tabular datasets show that the proposed models achieve superior classification accuracy with respect to standard CNNs and deep NNs with the same architectures.
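The boosting loop described above can be sketched in a minimal form. The example below is an illustrative assumption, not the authors' implementation: it uses plain NumPy with a squared loss, stands in for each trainable dense layer with a closed-form least-squares fit, and the toy data and the names `fit_dense` and `frozen` are invented for the sketch. At each stage the earlier weights are kept fixed ("frozen") and one new dense layer is fit to the current pseudo-residuals.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = np.sin(X @ rng.normal(size=5))   # toy regression target

def fit_dense(H, r):
    """Fit one dense (linear) layer to the pseudo-residuals r.

    A least-squares solve stands in for gradient training of the
    new layer; earlier layers are not touched (they stay frozen).
    """
    W, *_ = np.linalg.lstsq(H, r, rcond=None)
    return W

pred = np.zeros_like(y)   # ensemble prediction so far
frozen = []               # weights of earlier, frozen dense layers
H = X                     # representation fed to the next dense layer
for t in range(3):
    r = y - pred                 # pseudo-residuals of the squared loss
    W = fit_dense(H, r)          # train only the newly added layer
    frozen.append(W)             # freeze it for later iterations
    pred = pred + H @ W          # additive (boosted) update
    # a new random hidden representation for the next stage,
    # mimicking the extra depth added at each iteration
    H = np.tanh(H @ rng.normal(size=(H.shape[1], H.shape[1])))

print(float(np.mean((y - pred) ** 2)))  # training MSE after 3 stages
```

Because each stage fits the residuals of the frozen ensemble, the training loss is non-increasing across iterations, which is the core property the GB-CNN/GB-DNN procedure relies on.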