This paper proposes a deep learning framework for classifying BBC television programmes using audio. The audio is first transformed into spectrograms, which are fed into a pre-trained convolutional neural network (CNN) to obtain predicted probabilities of sound events occurring in the recording. Statistics of the predicted probabilities and detected sound events are then calculated to extract discriminative features representing the television programmes. Finally, the extracted embedded features are fed into a classifier that assigns each programme to a genre. Our experiments are conducted on a dataset of 6,160 programmes belonging to nine genres labelled by the BBC. We achieve an average classification accuracy of 93.7% over 14-fold cross-validation, demonstrating the efficacy of the proposed framework for audio-based classification of television programmes.
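To illustrate the pipeline described above, the following is a minimal sketch, assuming a pre-trained sound event tagging CNN is available as `sound_event_cnn`; the model interface, the choice of summary statistics, and the SVM classifier are illustrative assumptions, not the authors' exact configuration.

```python
# Sketch of the described pipeline: audio -> spectrogram -> pre-trained CNN
# -> sound event probabilities -> summary statistics -> genre classifier.
import numpy as np
import librosa
from sklearn.svm import SVC


def programme_features(audio_path, sound_event_cnn, sr=16000):
    """Turn one programme's audio into a fixed-length feature vector."""
    y, _ = librosa.load(audio_path, sr=sr, mono=True)
    # Step 1: audio -> log-mel spectrogram.
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=64)
    log_mel = librosa.power_to_db(mel)
    # Step 2: pre-trained CNN -> per-frame sound event probabilities with
    # shape [n_frames, n_event_classes]; `sound_event_cnn` is a stand-in
    # for whatever pre-trained audio tagger is used.
    probs = sound_event_cnn(log_mel)
    # Step 3: summary statistics over the probabilities as programme-level
    # features (mean/max/std here; the paper's exact statistics may differ).
    return np.concatenate(
        [probs.mean(axis=0), probs.max(axis=0), probs.std(axis=0)]
    )


# Step 4: feed the embedded features into a genre classifier.
# X: [n_programmes, feature_dim], y: genre labels from the BBC metadata.
# clf = SVC(kernel="rbf").fit(X_train, y_train)
# accuracy = clf.score(X_test, y_test)
```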