In this paper, we describe an approach for representation learning of audio signals for the task of COVID-19 detection. The raw audio samples are processed with a bank of 1-D convolutional filters that are parameterized as cosine-modulated Gaussian functions. The choice of these kernels allows the filterbank to be interpreted as a set of smooth band-pass filters. The filtered outputs are pooled, log-compressed, and used in a self-attention based relevance weighting mechanism. The relevance weighting emphasizes the key regions of the time-frequency decomposition that are important for the downstream task. The subsequent layers of the model consist of a recurrent architecture, and the models are trained for a COVID-19 detection task. In our experiments on the Coswara data set, we show that the proposed model achieves significant performance improvements over the baseline system as well as over other representation learning approaches. Further, the proposed approach is shown to be uniformly applicable to both speech and breathing signals, as well as to transfer learning from a larger data set.
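The front-end described above (cosine-modulated Gaussian kernels, followed by pooling and log compression) can be sketched in NumPy. This is a minimal illustration, not the authors' implementation: the kernel length, center-frequency spacing, bandwidth, pooling window, and function names are all assumptions made for the example.

```python
import numpy as np

def cos_gauss_kernel(center_freq, bandwidth, length=101, sample_rate=16000.0):
    """One cosine-modulated Gaussian kernel: a smooth band-pass filter
    centered at `center_freq` Hz. Parameter choices here are illustrative."""
    t = (np.arange(length) - length // 2) / sample_rate
    sigma = 1.0 / (2.0 * np.pi * bandwidth)  # Gaussian width from bandwidth
    return np.cos(2.0 * np.pi * center_freq * t) * np.exp(-t**2 / (2.0 * sigma**2))

def filterbank(num_filters=40, length=101, sample_rate=16000.0):
    """Bank of band-pass kernels; linear center-frequency spacing is assumed
    for simplicity (the learned filters need not be spaced this way)."""
    freqs = np.linspace(100.0, sample_rate / 2 - 100.0, num_filters)
    return np.stack(
        [cos_gauss_kernel(f, 100.0, length, sample_rate) for f in freqs]
    )

def analyze(x, bank, pool=160):
    """Filter the raw waveform, then average-pool the squared sub-band
    outputs and log-compress, yielding a time-frequency representation."""
    filtered = np.stack([np.convolve(x, k, mode="same") for k in bank])
    n_frames = filtered.shape[1] // pool
    pooled = (filtered[:, : n_frames * pool] ** 2)
    pooled = pooled.reshape(len(bank), n_frames, pool).mean(axis=-1)
    return np.log(pooled + 1e-8)  # shape: (num_filters, n_frames)
```

In the paper's model, the center frequency and bandwidth of each kernel are learnable parameters updated by backpropagation, rather than fixed as in this sketch; the resulting log-compressed map is then passed to the relevance-weighting and recurrent layers.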