In probabilistic classification, a discriminative model based on the softmax function has a potential limitation: it assumes unimodality for each class in the feature space. A mixture model can address this issue, although it increases the number of parameters. We propose a sparse classifier based on a discriminative Gaussian mixture model (GMM), referred to as a sparse discriminative Gaussian mixture (SDGM). In the SDGM, a GMM-based discriminative model is trained via sparse Bayesian learning. Using this sparse learning framework, we can simultaneously remove redundant Gaussian components and reduce the number of parameters used in the remaining components during learning; this reduces the model complexity, thereby improving the generalization capability. Furthermore, the SDGM can be embedded into neural networks (NNs), such as convolutional NNs, and trained in an end-to-end manner. Experimental results demonstrated that the proposed method outperformed existing softmax-based discriminative models.
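To make the idea concrete, the following is a minimal sketch, not the authors' implementation, of a discriminative Gaussian mixture head in PyTorch. The class name `DiscriminativeGMM`, the diagonal-covariance parameterization, and the hyperparameters are illustrative assumptions. Each class score is the log-likelihood of a per-class Gaussian mixture; a softmax over these scores (applied implicitly by the cross-entropy loss) yields the class posterior. The sparse Bayesian learning step that prunes redundant components in the SDGM is omitted.

```python
# Minimal sketch of a discriminative Gaussian mixture classification head.
# Assumptions (not from the paper): diagonal covariances, the class name
# DiscriminativeGMM, and the parameterization below. The sparse Bayesian
# pruning of redundant components described in the abstract is not shown.
import math

import torch
import torch.nn as nn
import torch.nn.functional as F


class DiscriminativeGMM(nn.Module):
    def __init__(self, in_dim: int, n_classes: int, n_components: int):
        super().__init__()
        # Component means: one set of K Gaussians per class.
        self.mu = nn.Parameter(torch.randn(n_classes, n_components, in_dim))
        # Log-variances of diagonal covariances (log space for stability).
        self.log_var = nn.Parameter(torch.zeros(n_classes, n_components, in_dim))
        # Unnormalized log mixture weights per class.
        self.logit_pi = nn.Parameter(torch.zeros(n_classes, n_components))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, in_dim) -> broadcast against (n_classes, n_components, in_dim).
        x = x[:, None, None, :]
        # Log density of each diagonal Gaussian component: (batch, C, K).
        log_n = -0.5 * (
            (x - self.mu) ** 2 / self.log_var.exp()
            + self.log_var
            + math.log(2 * math.pi)
        ).sum(dim=-1)
        log_pi = F.log_softmax(self.logit_pi, dim=-1)
        # Class score = log mixture likelihood: logsumexp over components.
        return torch.logsumexp(log_pi + log_n, dim=-1)  # (batch, n_classes)


# End-to-end use as the head of a (hypothetical) feature extractor:
model = nn.Sequential(nn.Flatten(), nn.Linear(784, 32), nn.ReLU(),
                      DiscriminativeGMM(32, n_classes=10, n_components=3))
logits = model(torch.randn(8, 1, 28, 28))
loss = F.cross_entropy(logits, torch.randint(0, 10, (8,)))  # softmax posterior
```

Because the mixture parameters are ordinary `nn.Parameter`s, gradients flow through the Gaussian log-likelihoods, which is what allows a head of this form to be trained jointly with a convolutional feature extractor as the abstract describes.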