An important research direction in automatic speech recognition (ASR) centers on developing effective methods to rerank the output hypotheses of an ASR system with more sophisticated language models (LMs) for further gains. A current mainstream school of thought for ASR N-best hypothesis reranking is to employ a recurrent neural network (RNN)-based LM or its variants, which outperform conventional n-gram LMs across a range of ASR tasks. In real scenarios such as a long conversation, a sequence of consecutive sentences may jointly contain ample cues of conversation-level information, such as topical coherence, lexical entrainment, and adjacency pairs, which nevertheless remain underexplored. In view of this, we first formulate ASR N-best reranking as a prediction problem, putting forward an effective cross-sentence neural LM approach that reranks the ASR N-best hypotheses of an upcoming sentence by taking into account the word usage in its preceding sentences. Furthermore, we also explore extracting task-specific global topical information from the cross-sentence history in an unsupervised manner for better ASR performance. Extensive experiments conducted on the AMI conversational benchmark corpus demonstrate the effectiveness and feasibility of our methods in comparison to several state-of-the-art reranking methods.
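To make the reranking setup concrete, the following is a minimal sketch, not the paper's actual model: it interpolates each hypothesis's ASR score with an LM score conditioned on the cross-sentence history. The function names (`rerank_nbest`, `toy_lm_score`), the interpolation weight, and the toy word-overlap "LM" are all illustrative assumptions; the paper itself uses neural LMs with unsupervised topical features.

```python
def rerank_nbest(hypotheses, lm_score, history, lm_weight=0.5):
    """Rerank (text, asr_score) pairs by linearly interpolating the ASR
    score with an LM score conditioned on the cross-sentence history.
    (Illustrative sketch; not the paper's neural LM.)"""
    rescored = [
        (text, (1 - lm_weight) * asr_score + lm_weight * lm_score(history, text))
        for text, asr_score in hypotheses
    ]
    # Best (highest combined score) hypothesis first.
    return sorted(rescored, key=lambda pair: pair[1], reverse=True)


def toy_lm_score(history, text):
    """Toy stand-in for a cross-sentence LM: rewards word overlap with
    preceding sentences, loosely mimicking topical-coherence and
    lexical-entrainment cues. Purely illustrative."""
    hist_words = set(" ".join(history).split())
    words = text.split()
    return sum(w in hist_words for w in words) / max(len(words), 1)


history = ["we discussed the remote control design"]
nbest = [("the remote goes on sale", 0.6),
         ("the remote control design", 0.5)]
reranked = rerank_nbest(nbest, toy_lm_score, history)
# Cross-sentence evidence promotes the history-consistent hypothesis
# even though its first-pass ASR score was lower.
```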