Despite recent advancements in deep learning, its application in real-world medical settings, such as phonocardiogram (PCG) classification, remains limited. A significant barrier is the lack of high-quality annotated datasets, which hampers the development of robust, generalizable models that perform well on newly collected, out-of-distribution (OOD) data. Self-Supervised Learning (SSL), particularly contrastive learning, has shown promise in mitigating data scarcity by using unlabeled data to enhance model robustness. Although SSL methods have been proposed and studied in other domains, work examining the impact of data augmentations on model robustness for PCG classification remains limited. In particular, while augmentations are a key component of SSL, selecting the most suitable policy during training is highly challenging: improper augmentations can cause substantial performance degradation and even prevent a network from learning meaningful representations. Addressing this gap, our research explores and evaluates a wide range of audio-based augmentations and identifies combinations that enhance SSL model performance in PCG classification. We conduct a comprehensive comparative analysis across multiple datasets, assessing the impact of each augmentation on model performance. Our findings reveal that, depending on the training distribution, augmentation choice significantly influences model robustness: fully supervised models experience up to a 32\% drop in effectiveness when evaluated on unseen data, whereas SSL models demonstrate greater resilience, losing only 10\% or even improving in some cases. This study also highlights the most promising and appropriate augmentations for PCG signal processing by quantifying their effect size on training. These insights equip researchers with valuable guidelines for developing reliable models in PCG signal processing.
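
To make the augmentation-based contrastive setup concrete, the following minimal sketch (not taken from the paper; all function names and parameter values are illustrative assumptions) shows how simple waveform augmentations could be composed to produce two positive views of a PCG recording for a SimCLR-style objective, using only NumPy:

```python
# Illustrative sketch: chaining waveform augmentations to build two
# contrastive views of a PCG segment. Parameter values are hypothetical.
import numpy as np


def random_gain(x, low=0.8, high=1.2, rng=None):
    """Scale the waveform by a random gain factor."""
    rng = rng or np.random.default_rng()
    return x * rng.uniform(low, high)


def add_gaussian_noise(x, snr_db=20.0, rng=None):
    """Add white Gaussian noise at a target signal-to-noise ratio (dB)."""
    rng = rng or np.random.default_rng()
    signal_power = np.mean(x ** 2)
    noise_power = signal_power / (10 ** (snr_db / 10))
    return x + rng.normal(0.0, np.sqrt(noise_power), size=x.shape)


def random_time_shift(x, max_fraction=0.1, rng=None):
    """Circularly shift the waveform by up to max_fraction of its length."""
    rng = rng or np.random.default_rng()
    max_shift = int(len(x) * max_fraction)
    shift = rng.integers(-max_shift, max_shift + 1)
    return np.roll(x, shift)


def make_contrastive_views(x, rng=None):
    """Apply the augmentation chain twice to obtain two positive views."""
    def augment(s):
        return random_time_shift(
            add_gaussian_noise(random_gain(s, rng=rng), rng=rng), rng=rng
        )
    return augment(x), augment(x)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pcg = rng.standard_normal(4000)  # stand-in for a 2 s PCG segment at 2 kHz
    view_a, view_b = make_contrastive_views(pcg, rng=rng)
    print(view_a.shape, view_b.shape)
```

In such a pipeline, swapping or reordering the individual transforms is exactly the kind of augmentation-policy choice whose effect on downstream robustness the study evaluates.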