To encourage intra-class compactness and inter-class separability among trainable feature vectors, large-margin softmax methods have been developed and widely applied in the face recognition community. Introducing the large-margin concept into the softmax is reported to bring desirable properties such as enhanced discriminative power, reduced overfitting, and a well-defined geometric intuition. Nowadays, language modeling is commonly approached with neural networks using a softmax output layer and cross-entropy training. In this work, we investigate whether introducing large margins into neural language models improves perplexity and, consequently, word error rate in automatic speech recognition. Specifically, we first implement and test various types of conventional margins, following previous work in face recognition. To address the distribution of natural language data, we then compare different strategies for word vector norm-scaling. After that, we apply the best norm-scaling setup in combination with various margins and conduct neural language model rescoring experiments in automatic speech recognition. We find that although perplexity degrades slightly, neural language models with large-margin softmax can yield word error rates similar to those of the standard softmax baseline. Finally, the expected margins are analyzed through visualization of the word vectors, showing that syntactic and semantic relationships are also preserved.
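For concreteness, the following is a minimal sketch of one common large-margin variant, an additive-margin (AM) softmax cross entropy applied to a language model's output layer. The function name, the `scale` and `margin` parameters, and the use of cosine similarity between normalized context and word vectors are illustrative assumptions; the paper's specific margin types and norm-scaling strategies may differ.

```python
import numpy as np

def am_softmax_cross_entropy(features, word_vectors, targets, scale=16.0, margin=0.2):
    """Additive-margin softmax cross entropy (AM-softmax-style sketch).

    features:     (batch, dim)   context vectors from the language model
    word_vectors: (vocab, dim)   trainable output word embeddings
    targets:      (batch,)       indices of the target words
    """
    # Cosine similarity between L2-normalized context and word vectors.
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    w = word_vectors / np.linalg.norm(word_vectors, axis=1, keepdims=True)
    cos = f @ w.T                                   # (batch, vocab)

    # Subtract the additive margin from the target-class logit only,
    # which forces the target word to win by at least `margin`.
    logits = cos.copy()
    logits[np.arange(len(targets)), targets] -= margin

    # Scaling compensates for the bounded range of cosine similarities.
    logits *= scale

    # Standard softmax cross entropy on the margin-modified logits.
    logits -= logits.max(axis=1, keepdims=True)     # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(targets)), targets].mean()
```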