The need for comprehensive, automated screening methods for retinal image classification has long been recognized. Images annotated by well-qualified doctors are expensive to obtain, and only limited data are available for retinal diseases such as age-related macular degeneration (AMD) and diabetic retinopathy (DR). Studies show that AMD and DR share common features, such as hemorrhagic points and exudation, yet most classification algorithms train the models for these diseases independently. Inspired by knowledge distillation, in which additional supervisory signals from various sources help train a robust model with far less data, we propose synergic adversarial label learning (SALL), a method that leverages the labels of related retinal diseases in both the semantic and feature spaces as additional signals and trains the models collaboratively. Our experiments on DR and AMD fundus image classification demonstrate that the proposed method significantly improves disease-grading accuracy. In addition, we conduct further experiments showing the effectiveness of SALL in terms of reliability and interpretability in the context of medical imaging applications.
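The abstract does not specify the training objective, but the idea of combining per-disease classification losses with a cross-disease signal in feature space can be illustrated with a minimal sketch. All names here (`sall_loss`, `alpha`, the feature-alignment term) are hypothetical illustrations, not the paper's actual formulation:

```python
import numpy as np

def softmax(z):
    # numerically stable row-wise softmax
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy(probs, labels):
    # mean negative log-likelihood of the true class
    n = labels.shape[0]
    return -np.log(probs[np.arange(n), labels] + 1e-12).mean()

def sall_loss(logits_dr, y_dr, logits_amd, y_amd,
              feat_dr, feat_amd, alpha=0.5):
    """Hypothetical combined objective: each disease keeps its own
    classification loss, and a feature-space term encourages the two
    tasks to share representation statistics (cf. the shared
    hemorrhage/exudation features mentioned in the abstract)."""
    task = (cross_entropy(softmax(logits_dr), y_dr)
            + cross_entropy(softmax(logits_amd), y_amd))
    align = np.mean((feat_dr.mean(axis=0) - feat_amd.mean(axis=0)) ** 2)
    return task + alpha * align

# toy batch: 4 images per task, 3 grades, 8-dim features
rng = np.random.default_rng(0)
loss = sall_loss(rng.normal(size=(4, 3)), np.array([0, 1, 2, 0]),
                 rng.normal(size=(4, 3)), np.array([1, 1, 0, 2]),
                 rng.normal(size=(4, 8)), rng.normal(size=(4, 8)))
```

The weight `alpha` trades off task accuracy against cross-disease agreement; the actual SALL method additionally uses an adversarial component not sketched here.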