Few-shot learning aims to train models that can recognize novel classes given just a handful of labeled examples, known as the support set. While the field has seen notable advances in recent years, most of them have focused on multi-class image classification. Audio, in contrast, is often multi-label due to overlapping sounds, resulting in unique properties such as polyphony and varying signal-to-noise ratios (SNR). This leads to unanswered questions concerning the impact such audio properties may have on few-shot learning system design, performance, and human-computer interaction, as it is typically up to the user to collect and provide inference-time support set examples. We address these questions through a series of experiments designed to answer them systematically. We introduce two novel datasets, FSD-MIX-CLIPS and FSD-MIX-SED, whose programmatic generation allows us to explore these questions at scale. Our experiments lead to audio-specific insights on few-shot learning, some of which are at odds with recent findings in the image domain: there is no one-size-fits-all best model, method, or support set selection criterion. Rather, the right choice depends on the expected application scenario. Our code and data are available at https://github.com/wangyu/rethink-audio-fsl.
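To make the few-shot setup concrete, below is a minimal sketch of multi-label few-shot inference in a prototypical-network style: class prototypes are averaged from the user-provided support examples, and each novel class is then scored independently so that overlapping (polyphonic) sounds can co-occur. This is an illustrative assumption rather than the paper's exact method, and all names here (`class_prototypes`, `predict_multilabel`, the threshold value) are hypothetical.

```python
import torch
import torch.nn.functional as F

def class_prototypes(support_embs, support_labels):
    """Mean embedding per class over the support set. With multi-label
    audio, one clip can contribute to several prototypes (binary relevance).

    support_embs:   (n_support, dim) clip embeddings from a pretrained encoder
    support_labels: (n_support, n_classes) multi-hot {0, 1} float tensor
    """
    counts = support_labels.sum(dim=0).clamp(min=1)   # avoid divide-by-zero
    protos = support_labels.T @ support_embs          # (n_classes, dim)
    return protos / counts.unsqueeze(1)

def predict_multilabel(query_embs, protos, threshold=0.5):
    """Cosine-score each query against each prototype and threshold every
    class independently, so overlapping sounds can all be predicted active."""
    sims = F.normalize(query_embs, dim=1) @ F.normalize(protos, dim=1).T
    return sims > threshold                           # (n_query, n_classes) bool

# Toy usage: a 3-class support set of 15 clips with 128-dim embeddings.
support_embs = torch.randn(15, 128)
support_labels = (torch.rand(15, 3) > 0.5).float()
query_embs = torch.randn(4, 128)
preds = predict_multilabel(query_embs, class_prototypes(support_embs, support_labels))
```

Note that thresholding per class, rather than taking an argmax over classes, is what distinguishes this multi-label setting from standard multi-class few-shot classification.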