We introduce two-scale loss functions for use in various gradient descent algorithms applied to classification problems via deep neural networks. This new method is generic in the sense that it can be applied to a wide range of machine learning architectures, from deep neural networks to support vector machines. These two-scale loss functions allow the training to focus on objects in the training set that are not well classified. For appropriately defined two-scale loss functions, this leads to an improvement in several measures of performance, compared with the more classical cross-entropy, when tested with traditional deep neural networks on the MNIST, CIFAR10, and CIFAR100 datasets.
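The abstract does not give the exact form of the two-scale loss. Below is a minimal illustrative sketch, assuming one plausible construction: the per-sample cross-entropy is amplified by a second scale factor whenever the predicted probability of the true class falls below a threshold, so that poorly classified samples dominate the gradient. The names `two_scale_loss`, `threshold`, and `scale` are hypothetical, not the paper's notation.

```python
import numpy as np

def cross_entropy(probs, labels):
    # Standard per-sample cross-entropy: -log p_y for the true class y.
    return -np.log(probs[np.arange(len(labels)), labels])

def two_scale_loss(probs, labels, threshold=0.5, scale=10.0):
    """Hypothetical two-scale loss (an assumption, not the paper's definition).

    Samples whose predicted probability for the true class is below
    `threshold` are treated as poorly classified; their cross-entropy
    is amplified by `scale`, focusing training on them.
    """
    ce = cross_entropy(probs, labels)
    p_true = probs[np.arange(len(labels)), labels]
    weights = np.where(p_true < threshold, scale, 1.0)
    return np.mean(weights * ce)
```

Under this sketch, a well-classified sample contributes its usual cross-entropy, while a misclassified one contributes `scale` times as much, which is one simple way to realize the "two scales" described above.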