Multi-task learning (MTL) and the attention technique have been proven to effectively extract robust acoustic features for various speech-related applications in noisy environments. In this study, we integrate MTL and the attention-weighting mechanism and propose an attention-based MTL (ATM) approach to realize a multi-model learning structure and to improve the speech enhancement (SE) and speaker identification (SI) systems simultaneously. The proposed ATM consists of three subsystems: SE, SI, and attention-Net (AttNet). In the proposed system, a long short-term memory (LSTM) model is used to perform SE, while a deep neural network (DNN) model is applied to construct SI and AttNet. The overall ATM system first extracts the representative features and then enhances the speech spectra in LSTM-SE and classifies speaker identity in DNN-SI. We conducted our experiments on the Taiwan Mandarin Hearing in Noise Test database. The evaluation results indicate that the proposed ATM system not only increases the quality and intelligibility of noisy speech input but also improves the accuracy of the SI system when compared to conventional MTL approaches.
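The abstract only names the three subsystems (LSTM-SE, DNN-SI, AttNet) without implementation details, so the following is a minimal, hypothetical PyTorch sketch of how such an attention-weighted multi-task structure could be wired together. The feature dimension, layer sizes, number of speakers, and the element-wise way the AttNet weights are applied to the acoustic features are all assumptions for illustration, not the authors' specification.

```python
# Hypothetical sketch of the ATM structure: AttNet weights the acoustic features,
# which are then shared by an LSTM-based SE branch and a DNN-based SI branch.
import torch
import torch.nn as nn

class AttNet(nn.Module):
    """DNN that predicts frame-wise attention weights over acoustic features (assumed form)."""
    def __init__(self, feat_dim=257, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, feat_dim), nn.Sigmoid(),  # weights in [0, 1]
        )

    def forward(self, x):  # x: (batch, time, feat_dim)
        return self.net(x)

class ATM(nn.Module):
    """Attention-based multi-task model: LSTM-SE and DNN-SI on attention-weighted features."""
    def __init__(self, feat_dim=257, num_speakers=20, hidden=256):
        super().__init__()
        self.att = AttNet(feat_dim, hidden)
        self.se_lstm = nn.LSTM(feat_dim, hidden, num_layers=2, batch_first=True)
        self.se_out = nn.Linear(hidden, feat_dim)            # enhanced spectrum per frame
        self.si_dnn = nn.Sequential(
            nn.Linear(feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, num_speakers),                  # speaker logits
        )

    def forward(self, noisy_spec):  # noisy_spec: (batch, time, feat_dim)
        w = self.att(noisy_spec)                              # attention weights
        attended = noisy_spec * w                             # representative (weighted) features
        h, _ = self.se_lstm(attended)
        enhanced = self.se_out(h)                             # SE branch output
        speaker_logits = self.si_dnn(attended.mean(dim=1))    # SI branch on utterance average
        return enhanced, speaker_logits

# Joint (multi-task) training would combine an SE loss (e.g., MSE on spectra)
# with an SI loss (cross-entropy on speaker labels).
model = ATM()
spec = torch.randn(4, 100, 257)
enhanced, logits = model(spec)
```

In this sketch the two task losses would be summed (possibly with a weighting factor) so that the shared AttNet learns feature weights useful to both SE and SI, which is the general idea behind attention-based multi-task learning described above.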