The weight decay regularization term is widely used during training to constrain expressivity, avoid overfitting, and improve generalization. Historically, this concept was borrowed from the SVM maximum margin principle and extended to multi-class deep networks. Carefully inspecting this principle reveals that it is not optimal for multi-class classification in general, and in particular when using deep neural networks. In this paper, we explain why this commonly used principle is not optimal and propose a new regularization scheme, called {\em Pairwise Margin Maximization} (PMM), which measures the minimal amount of displacement an instance should take until its predicted classification is switched. In deep neural networks, PMM can be implemented in the vector space before the network's output layer, i.e., in the deep feature space, where we add an additional normalization term to avoid convergence to a trivial solution. We demonstrate empirically a substantial improvement when training a deep neural network with PMM compared to the standard regularization terms.
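For intuition, a minimal sketch of the quantity PMM targets, assuming a linear output layer with per-class weights $w_c$ and biases $b_c$ acting on the deep feature vector $\phi(x)$ (notation introduced here only for illustration): the smallest displacement of $\phi(x)$ that flips the prediction from the predicted class $\hat{y}$ to another class $j$ is the distance to the pairwise decision boundary,
\[
d_j(x) \;=\; \frac{(w_{\hat{y}} - w_j)^\top \phi(x) + (b_{\hat{y}} - b_j)}{\lVert w_{\hat{y}} - w_j \rVert_2}, \qquad j \neq \hat{y},
\]
so the pairwise margin of $x$ is $\min_{j \neq \hat{y}} d_j(x)$. Note that this quantity grows if the deep features are simply rescaled, which is consistent with the need, stated above, for an additional normalization term to avoid a trivial solution.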