With recent advances in deep learning methods, automatically learning deep features from raw data has become an effective and widespread approach. However, hand-crafted, expert-knowledge-based features remain insightful. These expert-curated features can improve a model's generalization and draw its attention to data characteristics such as the time interval between two patterns, which is particularly advantageous in tasks involving clinically relevant data, where the data are usually limited and complex. To retain both implicit deep features and explicit expert-curated features, an effective fusion strategy is indispensable. In this work, we focus on a specific clinical application, i.e., sleep apnea detection, and propose a contrastive learning-based cross attention framework for sleep apnea detection (named ConCAD). The cross attention mechanism fuses the deep and expert features by automatically assigning attention weights according to their importance. Contrastive learning learns better representations by pulling instances of the same class closer together while pushing instances of different classes apart in the embedding space. Furthermore, a new hybrid loss is designed to conduct contrastive learning and classification simultaneously by integrating a supervised contrastive loss with a cross-entropy loss. Our proposed framework can be easily integrated into standard deep learning models to exploit expert knowledge and contrastive learning to boost performance. As demonstrated on two public ECG datasets with sleep apnea annotations, ConCAD significantly improves detection performance and outperforms state-of-the-art benchmark methods.
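To make the hybrid objective concrete, the sketch below shows one way such a loss could be assembled in PyTorch, combining a supervised contrastive term (in the style of Khosla et al., 2020, with a single view per sample) with cross-entropy through a weighting factor. The function names, the weight `alpha`, and the temperature value are illustrative assumptions rather than the paper's exact formulation.

```python
import torch
import torch.nn.functional as F


def supervised_contrastive_loss(embeddings, labels, temperature=0.1):
    """Supervised contrastive term: each anchor is attracted to same-class
    samples in the batch and repelled from all other samples."""
    z = F.normalize(embeddings, dim=1)                 # unit-length embeddings
    sim = torch.matmul(z, z.T) / temperature           # pairwise similarities
    n = z.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(self_mask, float('-inf'))    # exclude self-comparisons
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
    pos_counts = pos_mask.sum(dim=1)
    valid = pos_counts > 0                             # anchors with at least one positive
    sum_log_prob_pos = log_prob.masked_fill(~pos_mask, 0.0).sum(dim=1)
    return -(sum_log_prob_pos[valid] / pos_counts[valid]).mean()


def hybrid_loss(logits, embeddings, labels, alpha=0.5, temperature=0.1):
    """Weighted combination of cross-entropy and supervised contrastive losses."""
    ce = F.cross_entropy(logits, labels)
    scl = supervised_contrastive_loss(embeddings, labels, temperature)
    return alpha * ce + (1.0 - alpha) * scl
```

In practice, a call such as `hybrid_loss(logits, embeddings, labels)` could replace the plain cross-entropy term in an existing training loop, where `logits` come from the classification head and `embeddings` from the representation layer used for contrastive learning.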