The state-of-the-art methods for drum transcription in the presence of melodic instruments (DTM) are machine learning models trained in a supervised manner, which means that they rely on labeled datasets. The problem is that the available public datasets are limited either in size or in realism, and are thus suboptimal for training purposes. Indeed, the best results are currently obtained via a rather convoluted multi-step training process that involves both real and synthetic datasets. To address this issue, starting from the observation that the communities of rhythm game players provide a large amount of annotated data, we curated a new dataset of crowdsourced drum transcriptions. This dataset contains real-world music, is manually annotated, and is about two orders of magnitude larger than any other non-synthetic dataset, making it a prime candidate for training purposes. However, due to crowdsourcing, the initial annotations contain mistakes. We discuss how the quality of the dataset can be improved by automatically correcting different types of mistakes. When used to train a popular DTM model, the dataset yields a performance that matches the state of the art for DTM, thus demonstrating the quality of the annotations.