Disfluency detection has mainly been tackled in a pipeline approach, as a post-processing step after speech recognition. In this study, we propose Transformer-based encoder-decoder models that jointly solve speech recognition and disfluency detection and work in a streaming manner. Compared with pipeline approaches, the joint models can leverage acoustic information, which makes disfluency detection robust to recognition errors and provides non-verbal clues. Moreover, joint modeling enables low-latency and lightweight inference. We investigate two joint model variants for streaming disfluency detection: a transcript-enriched model and a multi-task model. The transcript-enriched model is trained on text with special tags indicating the starting and ending points of the disfluent part. However, the additional disfluency tags cause problems with latency and with standard language model adaptation. To solve these problems, we propose a multi-task model that has two output layers on the Transformer decoder: one for speech recognition and the other for disfluency detection. The disfluency detection layer is conditioned on the currently recognized token via an additional token-dependency mechanism. We show that the proposed joint models outperform a BERT-based pipeline approach in both accuracy and latency, on both the Switchboard corpus and the Corpus of Spontaneous Japanese.
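To make the transcript-enriched formulation concrete, the sketch below converts per-token disfluency labels into a tagged transcript. This is a minimal illustration only: the tag names `<dis>`/`</dis>` and the binary labeling are assumptions for exposition, not the paper's actual tag symbols.

```python
def enrich_transcript(tokens, labels):
    """Insert span tags around disfluent regions of a transcript.

    tokens: list of recognized word tokens.
    labels: parallel list of booleans, True where a token is disfluent.
    The <dis>/</dis> tag names are hypothetical placeholders.
    """
    out = []
    in_disfluency = False
    for tok, is_dis in zip(tokens, labels):
        if is_dis and not in_disfluency:
            out.append("<dis>")   # open a disfluent span
        if not is_dis and in_disfluency:
            out.append("</dis>")  # close the span before a fluent token
        out.append(tok)
        in_disfluency = is_dis
    if in_disfluency:
        out.append("</dis>")      # close a span that runs to the end
    return " ".join(out)


# Example: a repeated "i" plus a filled pause marked as disfluent.
print(enrich_transcript(
    ["i", "uh", "i", "want", "coffee"],
    [True, True, False, False, False],
))
# → <dis> i uh </dis> i want coffee
```

Training an encoder-decoder on such tagged text is what lengthens output sequences and complicates adaptation with standard (untagged) language model text, which motivates the multi-task variant with separate recognition and detection output layers.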