In this paper, we address the problem of pitch estimation using Self-Supervised Learning (SSL). The SSL paradigm we use is equivariance to pitch transposition, which enables our model to accurately perform pitch estimation on monophonic audio after being trained only on a small unlabeled dataset. We use a lightweight ($<$ 30k parameters) Siamese neural network that takes as inputs two differently pitch-shifted versions of the same audio, represented by its Constant-Q Transform. To prevent the model from collapsing in an encoder-only setting, we propose a novel class-based transposition-equivariant objective which captures pitch information. Furthermore, we design the architecture of our network to be transposition-preserving by introducing learnable Toeplitz matrices. We evaluate our model on the two tasks of singing voice and musical instrument pitch estimation, and show that it generalizes across tasks and datasets while remaining lightweight, hence compatible with low-resource devices and suitable for real-time applications. In particular, our results surpass self-supervised baselines and narrow the performance gap between self-supervised and supervised methods for pitch estimation.
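To make the transposition-preserving idea concrete, the following is a minimal sketch (not the paper's actual code; the class name, dimensions, and PyTorch framing are our own assumptions) of a linear layer whose weight matrix is constrained to be Toeplitz, so that shifting the input along the CQT pitch axis shifts the output by the same amount, up to boundary effects.

```python
import torch
import torch.nn as nn


class ToeplitzLinear(nn.Module):
    """Linear layer with a learnable Toeplitz weight matrix.

    The weight is constant along diagonals, so a transposition (shift) of the
    input along the pitch axis yields the same shift of the output, which is
    the transposition-preserving property described in the abstract.
    Hypothetical sketch, not the authors' implementation.
    """

    def __init__(self, dim: int):
        super().__init__()
        self.dim = dim
        # One learnable coefficient per diagonal: 2*dim - 1 parameters
        # instead of dim**2 for an unconstrained linear layer.
        self.coeffs = nn.Parameter(torch.randn(2 * dim - 1) / dim ** 0.5)

    def weight(self) -> torch.Tensor:
        # Build the Toeplitz matrix: W[i, j] = coeffs[i - j + dim - 1].
        idx = torch.arange(self.dim)
        return self.coeffs[idx.unsqueeze(1) - idx.unsqueeze(0) + self.dim - 1]

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x @ self.weight().T


if __name__ == "__main__":
    layer = ToeplitzLinear(dim=8)
    x = torch.zeros(8)
    x[2] = 1.0
    y = layer(x)
    y_shifted = layer(torch.roll(x, 1))
    # Away from the boundary, shifting the input shifts the output identically.
    print(torch.allclose(y_shifted[1:], torch.roll(y, 1)[1:], atol=1e-6))
```

The parameter sharing across diagonals is also what keeps the layer small, consistent with the $<$ 30k-parameter budget mentioned above.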