Recently, Transformer-based architectures have been explored for speaker embedding extraction. Although the Transformer employs the self-attention mechanism to efficiently model the global interaction between token embeddings, it is inadequate for capturing short-range local context, which is essential for the accurate extraction of speaker information. In this study, we enhance the Transformer with locality modeling in two directions. First, we propose the Locality-Enhanced Conformer (LE-Conformer) by introducing depth-wise convolution and channel-wise attention into the Conformer blocks. Second, we present the Speaker Swin Transformer (SST) by adapting the Swin Transformer, originally proposed for vision tasks, into a speaker embedding network. We evaluate the proposed approaches on the VoxCeleb datasets and a large-scale Microsoft internal multilingual (MS-internal) dataset. The proposed models achieve 0.75% EER on the VoxCeleb1 test set, outperforming previously proposed Transformer-based models and CNN-based models such as ResNet34 and ECAPA-TDNN. When trained on the MS-internal dataset, the proposed models achieve promising results with a 14.6% relative reduction in EER over the Res2Net50 model.
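The locality-enhancement idea described above can be sketched as follows. This is a minimal, hypothetical illustration of combining a depth-wise convolution (short-range context) with squeeze-and-excitation-style channel-wise attention; the actual LE-Conformer block layout, kernel sizes, and reduction ratio are not specified in the abstract, so all names and hyperparameters here are illustrative assumptions.

```python
import torch
import torch.nn as nn

class LocalityBlock(nn.Module):
    """Hypothetical sketch of locality modeling: a depth-wise convolution
    for short-range temporal context, followed by channel-wise attention
    (squeeze-and-excitation style). Not the paper's exact block design."""

    def __init__(self, channels: int, kernel_size: int = 15, reduction: int = 8):
        super().__init__()
        # Depth-wise convolution: one filter per channel (groups=channels),
        # "same" padding so the time dimension is preserved.
        self.dw_conv = nn.Conv1d(channels, channels, kernel_size,
                                 padding=kernel_size // 2, groups=channels)
        # Channel-wise attention: squeeze (global average pool), then
        # a bottleneck that produces per-channel gating weights in (0, 1).
        self.se = nn.Sequential(
            nn.AdaptiveAvgPool1d(1),
            nn.Conv1d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(),
            nn.Conv1d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, time)
        y = self.dw_conv(x)
        return y * self.se(y)  # re-weight each channel by its attention gate

# Shape check on a dummy batch of 80-dim frame-level features
x = torch.randn(4, 80, 200)
out = LocalityBlock(80)(x)
print(out.shape)  # torch.Size([4, 80, 200])
```

In a Conformer-style block, a module of this kind would sit alongside the self-attention sub-layer, so that self-attention handles global interactions while the convolution supplies the short-range local context the abstract argues is missing.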