Segmenting audio into homogeneous sections such as music and speech helps us understand the content of audio. It is useful as a pre-processing step to index, store, and modify audio recordings, radio broadcasts, and TV programmes. Deep learning models for segmentation are generally trained on copyrighted material, which cannot be shared. Annotating such datasets is time-consuming and expensive, which significantly slows research progress. In this study, we present a novel procedure that artificially synthesises data resembling radio signals. We replicate the workflow of a radio DJ in mixing audio and investigate parameters such as fade curves and audio ducking. We trained a Convolutional Recurrent Neural Network (CRNN) on this synthesised data and outperformed state-of-the-art algorithms for music-speech detection. This paper demonstrates that the data synthesis procedure is a highly effective technique for generating large datasets to train deep neural networks for audio segmentation.
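The core idea of the synthesis procedure can be illustrated with a minimal sketch: overlay a speech signal on a music bed, attenuate ("duck") the music wherever speech is active, apply a fade curve, and emit per-sample labels for training. The function names, the fixed duck gain, and the silence threshold below are hypothetical illustrations, not the authors' actual pipeline.

```python
import math

def fade_curve(n, shape="linear"):
    # Gain values rising from 0 to 1 over n samples.
    # 'linear' ramps evenly; 'exp' starts slowly and finishes quickly.
    if shape == "linear":
        return [i / (n - 1) for i in range(n)]
    return [(math.exp(i / (n - 1)) - 1) / (math.e - 1) for i in range(n)]

def mix_with_ducking(music, speech, duck_gain=0.25, threshold=1e-4):
    # Overlay speech on music; wherever speech is non-silent,
    # scale the music down by duck_gain. Return the mixture plus
    # per-sample class labels usable as segmentation ground truth.
    mixture, labels = [], []
    for m, s in zip(music, speech):
        speaking = abs(s) > threshold
        gain = duck_gain if speaking else 1.0
        mixture.append(gain * m + s)
        labels.append("speech" if speaking else "music")
    return mixture, labels

# Example: 8 samples of music, speech active on samples 2-3.
music = [1.0] * 8
speech = [0.0, 0.0, 0.5, 0.5, 0.0, 0.0, 0.0, 0.0]
mixture, labels = mix_with_ducking(music, speech)
```

In a realistic setup the fade curve would be applied at segment boundaries (e.g. a crossfade between tracks) and the duck gain ramped smoothly rather than switched per sample; the sketch only shows how the mixing parameters translate directly into frame-level labels.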