Self-supervised methods have emerged as a promising avenue for representation learning in recent years, since they alleviate the need for labelled datasets, which are scarce and expensive to acquire. Contrastive methods are a popular choice for self-supervision in the audio domain, and typically provide a learning signal by forcing the model to be invariant to certain transformations of the input. These methods, however, require measures such as negative sampling or some form of regularisation to prevent the model from collapsing to trivial solutions. In this work, instead of invariance, we propose to use equivariance as a self-supervision signal to learn audio tempo representations from unlabelled data. We derive a simple loss function that prevents the network from collapsing to a trivial solution during training, without requiring any form of regularisation or negative sampling. Our experiments show that it is possible to learn meaningful representations for tempo estimation by relying solely on equivariant self-supervision, achieving performance comparable with supervised methods on several benchmarks. As an added benefit, our method only requires moderate compute resources and therefore remains accessible to a wide research community.
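To make the equivariance idea concrete, the sketch below shows one way such a loss could look; it is an illustrative assumption, not the paper's exact formulation. Two views of the same audio are time-stretched by known factors r1 and r2, and a network predicts a scalar tempo for each view. Since stretching audio by a factor r divides its tempo by r, equivariance requires the ratio of the two predictions to match r2/r1. Because that target ratio changes with the randomly sampled stretch factors, a constant (collapsed) output cannot minimise the loss.

```python
# Illustrative sketch of an equivariance loss for tempo (assumed formulation,
# not the authors' exact loss). t1, t2 are predicted tempi for two views of
# the same clip, time-stretched by known factors r1 and r2 respectively.
import torch


def equivariance_tempo_loss(t1: torch.Tensor, t2: torch.Tensor,
                            r1: torch.Tensor, r2: torch.Tensor) -> torch.Tensor:
    """t1, t2: predicted tempi (e.g. in BPM) for the two stretched views.
    r1, r2: the known time-stretch factors applied to each view."""
    # Stretching by r divides the tempo by r, so t1 / t2 should equal r2 / r1.
    target_ratio = r2 / r1
    predicted_ratio = t1.clamp(min=1e-6) / t2.clamp(min=1e-6)
    # Compare ratios in log space so over- and under-estimation are
    # penalised symmetrically.
    return (torch.log(predicted_ratio) - torch.log(target_ratio)).pow(2).mean()


# Minimal usage example with dummy predictions for a batch of two clips.
if __name__ == "__main__":
    r1 = torch.tensor([1.0, 0.8])
    r2 = torch.tensor([1.25, 1.0])
    t1 = torch.tensor([120.0, 150.0])   # tempo predicted for view 1
    t2 = torch.tensor([96.0, 118.0])    # tempo predicted for view 2
    print(equivariance_tempo_loss(t1, t2, r1, r2))
```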