Recently, Transformer-based architectures have been explored for speaker embedding extraction. Although the Transformer employs the self-attention mechanism to efficiently model global interactions between token embeddings, it is inadequate for capturing short-range local context, which is essential for the accurate extraction of speaker information. In this study, we enhance the Transformer with locality modeling in two directions. First, we propose the Locality-Enhanced Conformer (LE-Conformer) by introducing depth-wise convolution and channel-wise attention into the Conformer blocks. Second, we present the Speaker Swin Transformer (SST) by adapting the Swin Transformer, originally proposed for vision tasks, into a speaker embedding network. We evaluate the proposed approaches on the VoxCeleb datasets and a large-scale Microsoft internal multilingual (MS-internal) dataset. The proposed models achieve 0.75% EER on the VoxCeleb1 test set, outperforming previously proposed Transformer-based models and CNN-based models such as ResNet34 and ECAPA-TDNN. When trained on the MS-internal dataset, the proposed models achieve promising results with a 14.6% relative reduction in EER over the Res2Net50 model.
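To make the locality-modeling idea concrete, the following is a minimal PyTorch sketch of a convolution module combining depth-wise convolution with channel-wise (squeeze-excitation style) attention, of the kind that could be inserted into a Conformer block. The module and parameter names (LocalityEnhancedConvModule, se_reduction) and the exact layer ordering are assumptions for illustration, not the paper's actual LE-Conformer implementation.

```python
# Hypothetical sketch: depth-wise convolution + channel-wise attention,
# as a locality-enhancing sub-module for a Conformer block.
import torch
import torch.nn as nn


class LocalityEnhancedConvModule(nn.Module):
    """Depth-wise convolution captures short-range local context;
    channel-wise (squeeze-excitation) attention re-weights channels
    using a global summary of each utterance."""

    def __init__(self, dim: int, kernel_size: int = 15, se_reduction: int = 8):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        # Depth-wise convolution: one filter per channel (groups=dim).
        self.dw_conv = nn.Conv1d(
            dim, dim, kernel_size, padding=kernel_size // 2, groups=dim
        )
        self.activation = nn.SiLU()
        # Channel-wise attention: squeeze (mean over time) then excite.
        self.se = nn.Sequential(
            nn.Linear(dim, dim // se_reduction),
            nn.ReLU(),
            nn.Linear(dim // se_reduction, dim),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, dim)
        residual = x
        y = self.norm(x).transpose(1, 2)      # (batch, dim, time)
        y = self.activation(self.dw_conv(y))  # local context per channel
        scale = self.se(y.mean(dim=2))        # (batch, dim) channel weights
        y = y * scale.unsqueeze(-1)           # channel-wise re-weighting
        return residual + y.transpose(1, 2)


if __name__ == "__main__":
    module = LocalityEnhancedConvModule(dim=256)
    frames = torch.randn(4, 200, 256)  # 4 utterances, 200 frames, 256 dims
    print(module(frames).shape)        # torch.Size([4, 200, 256])
```

The residual connection and pre-norm layout follow common Conformer conventions; the self-attention and feed-forward sub-modules of the full block are omitted here for brevity.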