Respiratory rate (RR) is an important biomarker, as changes in RR can reflect severe medical events such as heart disease, lung disease, and sleep disorders. Unfortunately, standard manual RR counting is prone to human error and cannot be performed continuously. This study proposes RRWaveNet, a method for continuous RR estimation. RRWaveNet is a compact end-to-end deep learning model that requires no feature engineering and can use raw photoplethysmography (PPG) from low-cost sensors as its input signal. RRWaveNet was evaluated subject-independently and compared to baselines on three datasets (BIDMC, CapnoBase, and WESAD) using three window sizes (16, 32, and 64 seconds). RRWaveNet outperformed current state-of-the-art methods, with mean absolute errors (MAE) at the optimal window size of 1.66 \pm 1.01, 1.59 \pm 1.08, and 1.92 \pm 0.96 breaths per minute on the respective datasets. In remote monitoring settings, such as the WESAD dataset, applying transfer learning from the two other ICU datasets reduces the MAE to 1.52 \pm 0.50 breaths per minute, showing that this model enables accurate and practical RR estimation on affordable wearable devices. Our study demonstrates the feasibility of remote RR monitoring in telemedicine and at-home settings.
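To make the evaluation protocol concrete, the sketch below illustrates how a raw PPG trace could be split into fixed-length windows (e.g., 16, 32, or 64 seconds) and how the reported error metric, MAE in breaths per minute, would be computed over per-window predictions. This is a minimal illustration, not the authors' code: the sampling rate, non-overlapping windowing, and function names are assumptions made here for clarity.

```python
import numpy as np

def segment_ppg(ppg, fs, window_s):
    """Split a raw PPG trace into non-overlapping windows of window_s seconds."""
    win_len = int(fs * window_s)
    n_windows = len(ppg) // win_len
    return np.stack([ppg[i * win_len:(i + 1) * win_len] for i in range(n_windows)])

def mae_bpm(rr_pred, rr_true):
    """Mean absolute error in breaths per minute over all windows."""
    return float(np.mean(np.abs(np.asarray(rr_pred) - np.asarray(rr_true))))

# Example: 10 minutes of PPG at an assumed 64 Hz sampling rate, 32-second windows.
fs = 64                                  # assumed sampling rate (Hz)
ppg = np.random.randn(10 * 60 * fs)      # placeholder for a raw PPG recording
windows = segment_ppg(ppg, fs, window_s=32)
print(windows.shape)                     # (18, 2048): 18 windows of 2048 samples each
```

In this setup, each window would be fed to the model to produce one RR estimate, and the MAE is averaged over all windows of all held-out subjects in the subject-independent evaluation.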