We present a system for the Zero Resource Speech Challenge 2021 that combines Contrastive Predictive Coding (CPC) with deep clustering. In deep clustering, we first obtain pseudo-labels by clustering the outputs of a CPC network with k-means. We then train an additional autoregressive model to classify these pseudo-labels in a supervised manner. A phoneme-discriminative representation is obtained by running a second round of clustering on the outputs of the final layer of the autoregressive model. We further show that replacing a Transformer layer with a Conformer layer yields an additional gain on a lexical metric. Experimental results show relative improvements of 35% on a phonetic metric, 1.5% on the lexical metric, and 2.3% on a syntactic metric over a CPC-small baseline trained on the LibriSpeech 460h data. On the syntactic metric, our system achieves the top result in the challenge.
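The three-stage pipeline described above (cluster CPC features into pseudo-labels, train a classifier on them, then re-cluster the classifier's outputs) can be sketched as follows. This is a minimal illustration, not the paper's implementation: random features stand in for CPC outputs, and scikit-learn's k-means and a logistic-regression head stand in for the neural clustering and autoregressive models.

```python
# Minimal sketch of the deep-clustering pipeline (toy stand-ins throughout).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stage 0: stand-in for frame-level CPC features (n_frames x feature_dim).
features = rng.normal(size=(500, 16))

# Stage 1: cluster the CPC outputs with k-means to obtain pseudo-labels.
kmeans = KMeans(n_clusters=8, n_init=10, random_state=0)
pseudo_labels = kmeans.fit_predict(features)

# Stage 2: train a model to classify the pseudo-labels in a supervised
# manner (the paper uses an autoregressive network; a linear head here).
clf = LogisticRegression(max_iter=1000).fit(features, pseudo_labels)

# Stage 3: second-round clustering on the classifier's final-layer
# outputs (here, class probabilities) for a more phoneme-discriminative
# discrete representation.
second_feats = clf.predict_proba(features)
second_labels = KMeans(n_clusters=8, n_init=10,
                       random_state=0).fit_predict(second_feats)
print(second_labels.shape)  # one discrete unit per input frame
```

In the actual system the second-stage model is trained from scratch on the pseudo-labels and its hidden activations, not its class probabilities, would typically feed the second clustering; the sketch only fixes the order of operations.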