Automatic speaker verification (ASV) with ad-hoc microphone arrays has received increasing attention. Unlike traditional microphone arrays, the number of microphones and their spatial arrangement in an ad-hoc microphone array are unknown in advance, which makes conventional multi-channel ASV techniques ineffective in ad-hoc settings. Recently, an utterance-level ASV method for ad-hoc microphone arrays was proposed, which first extracts an utterance-level speaker embedding from each channel of the array, and then fuses the embeddings for the final verification. However, this method cannot make full use of cross-channel information. In this paper, we present a novel multi-channel ASV model that operates at the frame level. Specifically, we add spatio-temporal processing blocks (STB) before the pooling layer, which model the contextual dependencies across time within each channel and across channels, respectively. The channel-attended outputs of the STB are sent to the pooling layer to obtain an utterance-level speaker representation. Experimental results demonstrate the effectiveness of the proposed method.
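To make the data flow concrete, the following is a minimal NumPy sketch of the described pipeline, not the paper's actual architecture: a spatio-temporal processing block applies self-attention across frames within each channel, then across channels at each frame, and the result is mean-pooled into an utterance-level embedding. All function names, the use of plain scaled dot-product attention without learned projections, and the pooling choice are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x):
    # Scaled dot-product self-attention over the first axis of x: (n, d) -> (n, d).
    # No learned query/key/value projections: a simplifying assumption for the sketch.
    d = x.shape[-1]
    scores = softmax(x @ x.T / np.sqrt(d), axis=-1)
    return scores @ x

def stb(features):
    # Hypothetical spatio-temporal processing block.
    # features: (channels, frames, dim)
    C, T, D = features.shape
    # Temporal step: attend across frames within each channel.
    out = np.stack([self_attention(features[c]) for c in range(C)])
    # Spatial step: attend across channels at each frame (cross-channel context).
    out = np.stack([self_attention(out[:, t]) for t in range(T)], axis=1)
    return out

def utterance_embedding(features):
    # Pooling layer stand-in: mean over channels and frames
    # yields a single utterance-level speaker representation.
    return stb(features).mean(axis=(0, 1))

# Example: 4 ad-hoc channels, 50 frames, 32-dimensional frame features.
x = np.random.randn(4, 50, 32)
emb = utterance_embedding(x)
print(emb.shape)  # (32,)
```

In a trained model the attention would use learned projections and the pooling would typically be attentive or statistics pooling, but the sketch shows why frame-level cross-channel processing can exploit information that per-channel utterance-level fusion discards.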