The COVID-19 pandemic has resulted in more than 125 million infections and more than 2.7 million deaths. In this paper, we attempt to classify COVID vs. non-COVID cough sounds using signal processing and deep learning methods. Air turbulence, the vibration of tissues, the movement of fluid through the airways, and the opening and closing of the glottis are among the causes of the acoustic signals produced during a cough. Does COVID-19 alter the acoustic characteristics of the breath, cough, and speech sounds produced through the respiratory system? This remains an open question. In this paper, we incorporate novel data augmentation methods for cough sounds, together with multiple deep neural network architectures and handcrafted features. Our proposed system yields a 14% absolute improvement in area under the curve (AUC). The system was developed as part of the Interspeech 2021 special session and challenge, Diagnosing COVID-19 using Acoustics (DiCOVA). Our method secured 5th position on the leaderboard among 29 participants.