Acoustic word embeddings (AWEs) are discriminative representations of speech segments, and the learned embedding space reflects the phonetic similarity between words. With multi-view learning, where text labels are treated as a supplementary input, AWEs are jointly trained with acoustically grounded word embeddings (AGWEs). In this paper, we extend the multi-view approach into a proxy-based framework for deep metric learning by equating AGWEs with proxies. A simple modification in computing the similarity matrix allows general pair weighting to formulate the data-to-proxy relationship. Within this systematized framework, we propose an asymmetric-proxy loss that combines different parts of loss functions asymmetrically while retaining their merits. It rests on the assumptions that the optimal loss function for anchor-positive pairs may differ from that for anchor-negative pairs, and that a proxy may have a different impact when it substitutes for different positions in a triplet. We present comparative experiments with various proxy-based losses, including our asymmetric-proxy loss, and evaluate AWEs and AGWEs on word discrimination tasks on the WSJ corpus. The results demonstrate the effectiveness of the proposed method.
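To make the data-to-proxy formulation concrete, the following is a minimal PyTorch sketch under stated assumptions, not the paper's exact objective: it computes a similarity matrix between a batch of AWEs and the AGWE proxies, then applies different functional forms, with separate scale and margin hyperparameters (the names `alpha_pos`, `alpha_neg`, `margin_pos`, and `margin_neg` are illustrative), to the anchor-positive and anchor-negative entries, in the spirit of an asymmetric-proxy loss.

```python
import torch
import torch.nn.functional as F

def asymmetric_proxy_loss(embeddings, proxies, labels,
                          alpha_pos=32.0, alpha_neg=2.0,
                          margin_pos=0.1, margin_neg=0.5):
    """Illustrative sketch (not the paper's exact loss): data-to-proxy
    similarities with different loss shapes for positive and negative pairs."""
    # Cosine similarity matrix between L2-normalized AWEs and AGWE proxies:
    # shape (batch_size, num_classes).
    sims = F.normalize(embeddings, dim=1) @ F.normalize(proxies, dim=1).T

    # Each example's own-class proxy is its positive; all others are negatives.
    pos_mask = F.one_hot(labels, num_classes=proxies.size(0)).bool()
    neg_mask = ~pos_mask

    # Positive term: a Proxy-Anchor-style log-sum-exp that pulls each
    # embedding toward its own proxy (loss shrinks as similarity grows).
    pos_term = F.softplus(torch.logsumexp(
        torch.where(pos_mask, -alpha_pos * (sims - margin_pos),
                    torch.full_like(sims, float('-inf'))), dim=1)).mean()

    # Negative term: a deliberately different functional form (here a
    # multi-similarity-style push) that penalizes similarity to wrong proxies.
    neg_term = (1.0 / alpha_neg) * F.softplus(torch.logsumexp(
        torch.where(neg_mask, alpha_neg * (sims - margin_neg),
                    torch.full_like(sims, float('-inf'))), dim=1)).mean()

    return pos_term + neg_term
```

Keeping separate scales, margins, and functional forms for the two pair types is what allows the anchor-positive and anchor-negative sides to follow different optimal shapes, reflecting the asymmetry assumption stated above.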